[CalendarServer-changes] [11871] CalendarServer/branches/users/cdaboo/fix-no-ischedule

source_changes at macosforge.org source_changes at macosforge.org
Wed Mar 12 11:23:35 PDT 2014


Revision: 11871
          http://trac.calendarserver.org//changeset/11871
Author:   cdaboo at apple.com
Date:     2013-11-01 15:25:30 -0700 (Fri, 01 Nov 2013)
Log Message:
-----------
Merged from trunk.

Modified Paths:
--------------
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/accesslog.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/provision/root.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/push/amppush.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/push/notifier.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/push/test/test_notifier.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tap/caldav.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tap/util.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/agent.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/dbinspect.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/gateway.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/test/test_agent.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/upgrade.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/util.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/conf/auth/accounts-test.xml
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/conf/caldavd-apple.plist
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/config.dist.plist
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/config.plist
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/population.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/sim.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/test_sim.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/sqlusage/requests/httpTests.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/sqlusage/sqlusage.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/tools/fix_calendar
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/tools/protocolanalysis.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/support/build.sh
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/support/version.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/testserver
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/adbapi2.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/dal/syntax.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/dal/test/test_sqlsyntax.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/ienterprise.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/test/test_adbapi2.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/internet/sendfdport.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/internet/test/test_sendfdport.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/patches.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/python/log.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/python/test/test_log.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/channel/http.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/dav/test/test_util.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/dav/util.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/metafd.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/test/test_http.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/test/test_metafd.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/aggregate.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/directory.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/expression.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/idirectory.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/index.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/util.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/xml.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/caldavxml.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/appleopendirectory.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/directory.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/ldapdirectory.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/test/test_buildquery.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/test/test_directory.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/method/report_sync_collection.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/resource.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/scheduling_store/caldav/resource.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/stdconfig.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/storebridge.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Africa/Juba.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Anguilla.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Araguaina.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Argentina/San_Luis.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Aruba.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Cayman.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Dominica.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Grand_Turk.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Grenada.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Guadeloupe.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Jamaica.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Marigot.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Montserrat.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Barthelemy.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Kitts.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Lucia.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Thomas.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Vincent.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Tortola.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Virgin.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Antarctica/McMurdo.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Antarctica/South_Pole.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Amman.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Dili.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Gaza.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Hebron.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Jakarta.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Jayapura.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Makassar.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Pontianak.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Ujung_Pandang.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Europe/Busingen.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Europe/Vaduz.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Europe/Zurich.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Jamaica.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Pacific/Fiji.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Pacific/Johnston.ics
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/links.txt
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/timezones.xml
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/version.txt
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/base/datastore/subpostgres.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/base/datastore/test/test_subpostgres.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/base/datastore/util.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/file.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/schedule.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/imip/inbound.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/imip/test/test_inbound.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/implicit.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/itip.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/processing.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/test/test_implicit.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/utils.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/sql.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/common.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/test_implicit.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/test_sql.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/test_util.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/util.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/carddav/datastore/sql.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/carddav/datastore/test/common.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/file.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/current-oracle-dialect.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/current.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v20.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v21.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v22.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v23.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_19_to_20.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_13_to_14.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_tables.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/test/util.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/test/test_upgrade.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrade.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/addressbook_upgrade_from_1_to_2.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_1_to_2.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_3_to_4.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_4_to_5.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/test/test_upgrade_from_3_to_4.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/test/test_upgrade_from_4_to_5.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/util.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/xml/base.py
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/xml/rfc6578.py

Added Paths:
-----------
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/clients.plist
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/events-only.plist
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-accepts.plist
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only-recurring.plist
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only.plist
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v24.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v25.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/postgres-dialect/v24.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/postgres-dialect/v25.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_24_to_25.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_25_to_26.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_24_to_25.sql
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_25_to_26.sql

Removed Paths:
-------------
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/events-only.plist
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-accepts.plist
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only-recurring.plist
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only.plist

Property Changed:
----------------
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/
    CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql.py


Property changes on: CalendarServer/branches/users/cdaboo/fix-no-ischedule
___________________________________________________________________
Modified: svn:mergeinfo
   - /CalDAVTester/trunk:11193-11198
/CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/release/CalendarServer-4.3-dev:10180-10190,10192
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/managed-attachments:9985-10145
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/store-scheduling:10876-11129
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/gaya/sharedgroups-3:11088-11204
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/digest-auth-redux:10624-10635
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/hang-fix:11465-11491
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/launchd-wrapper-bis:11413-11436
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api:10048-10073
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/queue-locking-and-timing:10204-10289
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/start-service-start-loop:11060-11065
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/unshare-when-access-revoked:10562-10595
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/warning-cleanups:11347-11357
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/sagen/testing:10827-10851,10853-10855
/CalendarServer/branches/users/wsanchez/transations:5515-5593
   + /CalDAVTester/trunk:11193-11198
/CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/release/CalendarServer-4.3-dev:10180-10190,10192
/CalendarServer/branches/release/CalendarServer-5.1-dev:11846
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/managed-attachments:9985-10145
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/performance-tweaks:11824-11836
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/store-scheduling:10876-11129
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/gaya/sharedgroups-3:11088-11204
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/digest-auth-redux:10624-10635
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/enforce-max-requests:11640-11643
/CalendarServer/branches/users/glyph/hang-fix:11465-11491
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/launchd-wrapper-bis:11413-11436
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/log-cleanups:11691-11731
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api:10048-10073
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/queue-locking-and-timing:10204-10289
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/start-service-start-loop:11060-11065
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/unshare-when-access-revoked:10562-10595
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/warning-cleanups:11347-11357
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/sagen/testing:10827-10851,10853-10855
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:11607-11870

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/accesslog.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/accesslog.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/accesslog.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -173,7 +173,7 @@
                     formatArgs["t"] = (nowtime - request.timeStamps[0][1]) * 1000
 
                 if hasattr(request, "extendedLogItems"):
-                    for k, v in request.extendedLogItems.iteritems():
+                    for k, v in sorted(request.extendedLogItems.iteritems(), key=lambda x: x[0]):
                         k = str(k).replace('"', "%22")
                         v = str(v).replace('"', "%22")
                         if " " in v:

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/provision/root.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/provision/root.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/provision/root.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -94,15 +94,7 @@
             from twext.web2.filter import gzip
             self.contentFilters.append((gzip.gzipfilter, True))
 
-        if not config.EnableKeepAlive:
-            def addConnectionClose(request, response):
-                response.headers.setHeader("connection", ("close",))
-                if request.chanRequest is not None:
-                    request.chanRequest.channel.setReadPersistent(False)
-                return response
-            self.contentFilters.append((addConnectionClose, True))
 
-
     def deadProperties(self):
         if not hasattr(self, "_dead_properties"):
             # Get the property store from super
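
The per-request "Connection: close" filter is removed here rather than lost: keep-alive handling moves to a single class-level switch on HTTPChannel, set in the caldav.py hunk further down. A one-line sketch of the replacement (config name as in that hunk):

    # One central assignment now governs connection persistence:
    HTTPChannel.allowPersistentConnections = config.EnableKeepAlive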

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/push/amppush.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/push/amppush.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/push/amppush.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -48,7 +48,8 @@
 # AMP Commands sent to client (and forwarded to Master)
 
 class NotificationForID(amp.Command):
-    arguments = [('id', amp.String()), ('dataChangedTimestamp', amp.Integer())]
+    arguments = [('id', amp.String()),
+                 ('dataChangedTimestamp', amp.Integer(optional=True))]
     response = [('status', amp.String())]
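
Making the timestamp optional keeps the command compatible with senders that omit it. A hedged sketch of how an optional AMP argument surfaces on the receiving side (the receiver class and method names are illustrative):

    from twisted.protocols import amp

    class NotificationForID(amp.Command):
        arguments = [('id', amp.String()),
                     ('dataChangedTimestamp', amp.Integer(optional=True))]
        response = [('status', amp.String())]

    class NotificationReceiver(amp.AMP):
        @NotificationForID.responder
        def notificationReceived(self, id, dataChangedTimestamp=None):
            # An omitted optional argument is simply not passed, so the
            # responder needs a default value for it.
            return {'status': 'OK'}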
 
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/push/notifier.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/push/notifier.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/push/notifier.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -84,10 +84,13 @@
 
 
     @inlineCallbacks
-    def notify(self):
+    def notify(self, txn):
         """
         Send the notification. For a home object we just push using the home id. For a home
         child we push both the owner home id and the owned home child id.
+
+        @param txn: The transaction to create the work item with
+        @type txn: L{CommonStoreTransaction}
         """
         # Push ids from the store objects are a tuple of (prefix, name,) and we need to compose that
         # into a single token.
@@ -100,7 +103,7 @@
         for prefix, id in ids:
             if self._notify:
                 self.log.debug("Notifications are enabled: %s %s/%s" % (self._storeObject, prefix, id,))
-                yield self._notifierFactory.send(prefix, id)
+                yield self._notifierFactory.send(prefix, id, txn)
             else:
                 self.log.debug("Skipping notification for: %s %s/%s" % (self._storeObject, prefix, id,))
 
@@ -147,11 +150,12 @@
 
 
     @inlineCallbacks
-    def send(self, prefix, id):
-        txn = self.store.newTransaction()
+    def send(self, prefix, id, txn):
+        """
+        Enqueue a push notification work item on the provided transaction.
+        """
         notBefore = datetime.datetime.utcnow() + datetime.timedelta(seconds=self.coalesceSeconds)
         yield txn.enqueue(PushNotificationWork, pushID=self.pushKeyForId(prefix, id), notBefore=notBefore)
-        yield txn.commit()
 
 
     def newNotifier(self, storeObject):
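
With this change send() no longer opens and commits a transaction of its own; the push work item is enqueued on the caller's transaction, so it commits (or aborts) together with the data change that triggered it. A hedged sketch of the resulting calling pattern (the wrapper function is illustrative):

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def notifyAndCommit(store, notifier):
        txn = store.newTransaction()
        try:
            # Enqueues PushNotificationWork on txn instead of committing a
            # separate transaction behind the caller's back.
            yield notifier.notify(txn)
            yield txn.commit()
        except Exception:
            yield txn.abort()
            raise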

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/push/test/test_notifier.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/push/test/test_notifier.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/push/test/test_notifier.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -169,8 +169,8 @@
 
         home = yield self.homeUnderTest()
         yield home.notifyChanged()
+        self.assertEquals(self.notifierFactory.history, ["/CalDAV/example.com/home1/"])
         yield self.commit()
-        self.assertEquals(self.notifierFactory.history, ["/CalDAV/example.com/home1/"])
 
 
     @inlineCallbacks
@@ -178,11 +178,11 @@
 
         calendar = yield self.calendarUnderTest()
         yield calendar.notifyChanged()
-        yield self.commit()
         self.assertEquals(
             set(self.notifierFactory.history),
             set(["/CalDAV/example.com/home1/", "/CalDAV/example.com/home1/calendar_1/"])
         )
+        yield self.commit()
 
 
     @inlineCallbacks
@@ -191,7 +191,6 @@
         calendar = yield self.calendarUnderTest()
         home2 = yield self.homeUnderTest(name="home2")
         yield calendar.shareWith(home2, _BIND_MODE_WRITE)
-        yield self.commit()
         self.assertEquals(
             set(self.notifierFactory.history),
             set([
@@ -200,11 +199,11 @@
                 "/CalDAV/example.com/home2/"
             ])
         )
+        yield self.commit()
 
         calendar = yield self.calendarUnderTest()
         home2 = yield self.homeUnderTest(name="home2")
         yield calendar.unshareWith(home2)
-        yield self.commit()
         self.assertEquals(
             set(self.notifierFactory.history),
             set([
@@ -213,6 +212,7 @@
                 "/CalDAV/example.com/home2/"
             ])
         )
+        yield self.commit()
 
 
     @inlineCallbacks
@@ -226,11 +226,11 @@
 
         shared = yield self.calendarUnderTest(home="home2", name=shareName)
         yield shared.notifyChanged()
-        yield self.commit()
         self.assertEquals(
             set(self.notifierFactory.history),
             set(["/CalDAV/example.com/home1/", "/CalDAV/example.com/home1/calendar_1/"])
         )
+        yield self.commit()
 
 
     @inlineCallbacks
@@ -238,8 +238,8 @@
 
         notifications = yield self.transactionUnderTest().notificationsWithUID("home1")
         yield notifications.notifyChanged()
-        yield self.commit()
         self.assertEquals(
             set(self.notifierFactory.history),
             set(["/CalDAV/example.com/home1/", "/CalDAV/example.com/home1/notification/"])
         )
+        yield self.commit()

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tap/caldav.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tap/caldav.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tap/caldav.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -58,13 +58,15 @@
 from twext.internet.ssl import ChainingOpenSSLContextFactory
 from twext.internet.tcp import MaxAcceptTCPServer, MaxAcceptSSLServer
 from twext.internet.fswatch import DirectoryChangeListener, IDirectoryChangeListenee
-from twext.web2.channel.http import LimitingHTTPFactory, SSLRedirectRequest
+from twext.web2.channel.http import LimitingHTTPFactory, SSLRedirectRequest, \
+    HTTPChannel
 from twext.web2.metafd import ConnectionLimiter, ReportingHTTPService
 from twext.enterprise.ienterprise import POSTGRES_DIALECT
 from twext.enterprise.ienterprise import ORACLE_DIALECT
 from twext.enterprise.adbapi2 import ConnectionPool
+from twext.enterprise.queue import NonPerformingQueuer
+from twext.enterprise.queue import PeerConnectionPool
 from twext.enterprise.queue import WorkerFactory as QueueWorkerFactory
-from twext.enterprise.queue import PeerConnectionPool
 
 from txdav.common.datastore.sql_tables import schema
 from txdav.common.datastore.upgrade.sql.upgrade import (
@@ -225,14 +227,32 @@
     """ Registers a rotating file logger for error logging, if
         config.ErrorLogEnabled is True. """
 
+    def __init__(self, logEnabled, logPath, logRotateLength, logMaxFiles):
+        """
+        @param logEnabled: Whether to write to a log file
+        @type logEnabled: C{boolean}
+        @param logPath: the full path to the log file
+        @type logPath: C{str}
+        @param logRotateLength: rotate when files exceed this many bytes
+        @type logRotateLength: C{int}
+        @param logMaxFiles: keep at most this many files
+        @type logMaxFiles: C{int}
+        """
+        MultiService.__init__(self)
+        self.logEnabled = logEnabled
+        self.logPath = logPath
+        self.logRotateLength = logRotateLength
+        self.logMaxFiles = logMaxFiles
+
+
     def setServiceParent(self, app):
         MultiService.setServiceParent(self, app)
 
-        if config.ErrorLogEnabled:
+        if self.logEnabled:
             errorLogFile = LogFile.fromFullPath(
-                config.ErrorLogFile,
-                rotateLength=config.ErrorLogRotateMB * 1024 * 1024,
-                maxRotatedFiles=config.ErrorLogMaxRotatedFiles
+                self.logPath,
+                rotateLength=self.logRotateLength,
+                maxRotatedFiles=self.logMaxFiles
             )
             errorLogObserver = FileLogObserver(errorLogFile).emit
 
@@ -251,7 +271,9 @@
 
     def __init__(self, logObserver):
         self.logObserver = logObserver # accesslog observer
-        MultiService.__init__(self)
+        ErrorLoggingMultiService.__init__(self, config.ErrorLogEnabled,
+            config.ErrorLogFile, config.ErrorLogRotateMB * 1024 * 1024,
+            config.ErrorLogMaxRotatedFiles)
 
 
     def privilegedStartService(self):
@@ -958,6 +980,13 @@
             def requestFactory(*args, **kw):
                 return SSLRedirectRequest(site=underlyingSite, *args, **kw)
 
+        # Setup HTTP connection behaviors
+        HTTPChannel.allowPersistentConnections = config.EnableKeepAlive
+        HTTPChannel.betweenRequestsTimeOut = config.PipelineIdleTimeOut
+        HTTPChannel.inputTimeOut = config.IncomingDataTimeOut
+        HTTPChannel.idleTimeOut = config.IdleConnectionTimeOut
+        HTTPChannel.closeTimeOut = config.CloseConnectionTimeOut
+
         # Add the Strict-Transport-Security header to all secured requests
         # if enabled.
         if config.StrictTransportSecuritySeconds:
@@ -971,6 +1000,7 @@
                             "max-age={max_age:d}"
                             .format(max_age=config.StrictTransportSecuritySeconds))
                     return response
+                responseFilter.handleErrors = True
                 request.addResponseFilter(responseFilter)
                 return request
 
@@ -1182,6 +1212,28 @@
             else:
                 groupCacher = None
 
+            # Optionally enable Manhole access
+            if config.Manhole.Enabled:
+                try:
+                    from twisted.conch.manhole_tap import makeService as manholeMakeService
+                    portString = "tcp:%d:interface=127.0.0.1" % (config.Manhole.StartingPortNumber,)
+                    manholeService = manholeMakeService({
+                        "sshPort" : None,
+                        "telnetPort" : portString,
+                        "namespace" : {
+                            "config" : config,
+                            "service" : result,
+                            "store" : store,
+                            "directory" : directory,
+                            },
+                        "passwd" : config.Manhole.PasswordFilePath,
+                    })
+                    manholeService.setServiceParent(result)
+                    # Using print() because logging isn't ready at this point
+                    print("Manhole access enabled: %s" % (portString,))
+                except ImportError:
+                    print("Manhole access could not be enabled because manhole_tap could not be imported")
+
             def decorateTransaction(txn):
                 txn._pushDistributor = pushDistributor
                 txn._rootResource = result.rootResource
@@ -1247,8 +1299,9 @@
         Create an agent service which listens for configuration requests
         """
 
-        # Don't use memcached -- calendar server might take it away at any
-        # moment
+        # Don't use memcached initially -- calendar server might take it away at
+        # any moment.  However, when we run a command through the gateway, it
+        # will conditionally set ClientEnabled at that time.
         def agentPostUpdateHook(configDict, reloading=False):
             configDict.Memcached.Pools.Default.ClientEnabled = False
 
@@ -1266,10 +1319,20 @@
                 dataStoreWatcher = DirectoryChangeListener(reactor,
                     config.DataRoot, DataStoreMonitor(reactor, storageService))
                 dataStoreWatcher.startListening()
+            if store is not None:
+                store.queuer = NonPerformingQueuer()
             return makeAgentService(store)
 
         uid, gid = getSystemIDs(config.UserName, config.GroupName)
-        return self.storageService(agentServiceCreator, None, uid=uid, gid=gid)
+        svc = self.storageService(agentServiceCreator, None, uid=uid, gid=gid)
+        agentLoggingService = ErrorLoggingMultiService(
+            config.ErrorLogEnabled,
+            config.AgentLogFile,
+            config.ErrorLogRotateMB * 1024 * 1024,
+            config.ErrorLogMaxRotatedFiles
+            )
+        svc.setServiceParent(agentLoggingService)
+        return agentLoggingService
 
 
     def storageService(self, createMainService, logObserver, uid=None, gid=None):
@@ -1366,7 +1429,9 @@
 
                 # Conditionally stop after upgrade at this point
                 pps.addStep(
-                    QuitAfterUpgradeStep(config.StopAfterUpgradeTriggerFile)
+                    QuitAfterUpgradeStep(
+                        config.StopAfterUpgradeTriggerFile or config.UpgradeHomePrefix
+                    )
                 )
 
                 pps.addStep(
@@ -1428,7 +1493,12 @@
         Create a master service to coordinate a multi-process configuration,
         spawning subprocesses that use L{makeService_Slave} to perform work.
         """
-        s = ErrorLoggingMultiService()
+        s = ErrorLoggingMultiService(
+            config.ErrorLogEnabled,
+            config.ErrorLogFile,
+            config.ErrorLogRotateMB * 1024 * 1024,
+            config.ErrorLogMaxRotatedFiles
+        )
 
         # Add a service to re-exec the master when it receives SIGHUP
         ReExecService(config.PIDFile).setServiceParent(s)
@@ -2387,6 +2457,7 @@
     return uid, gid
 
 
+
 class DataStoreMonitor(object):
     implements(IDirectoryChangeListenee)
 
@@ -2398,18 +2469,21 @@
         self._reactor = reactor
         self._storageService = storageService
 
+
     def disconnected(self):
         self._storageService.hardStop()
         self._reactor.stop()
 
+
     def deleted(self):
         self._storageService.hardStop()
         self._reactor.stop()
 
+
     def renamed(self):
         self._storageService.hardStop()
         self._reactor.stop()
 
+
     def connectionLost(self, reason):
         pass
-
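
ErrorLoggingMultiService now receives its settings through the constructor instead of reading the global config, which is what lets the agent give it a separate log file above. An illustrative instantiation (the path, size and count are example values, not defaults from this commit):

    loggingService = ErrorLoggingMultiService(
        True,                           # logEnabled
        "/var/log/caldavd/agent.log",   # logPath (example)
        10 * 1024 * 1024,               # rotate files larger than 10 MB
        5,                              # keep at most 5 rotated files
    )
    childService.setServiceParent(loggingService)  # childService: any IService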

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tap/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tap/util.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tap/util.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -95,6 +95,7 @@
 from txdav.common.datastore.sql import CommonDataStore as CommonSQLDataStore
 from txdav.common.datastore.file import CommonDataStore as CommonFileDataStore
 from txdav.common.datastore.sql import current_sql_schema
+from txdav.common.datastore.upgrade.sql.upgrade import NotAllowedToUpgrade
 from twext.python.filepath import CachingFilePath
 from urllib import quote
 from twisted.python.usage import UsageError
@@ -1088,7 +1089,8 @@
 
 
     def defaultStepWithFailure(self, failure):
-        log.failure("Step failure", failure=failure)
+        if failure.type != NotAllowedToUpgrade:
+            log.failure("Step failure", failure=failure)
         return failure
 
     # def protectStep(self, callback):
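
The guard above keeps an expected NotAllowedToUpgrade failure out of the error log while still propagating it down the errback chain. A minimal sketch of the pattern (the function name here is illustrative):

    def stepWithFailure(failure):
        # failure is a twisted.python.failure.Failure; comparing its .type
        # attribute filters out one expected exception class.
        if failure.type != NotAllowedToUpgrade:
            log.failure("Step failure", failure=failure)
        # Returning the failure keeps the chain in its failed state, so the
        # caller still sees the error even though it was not logged.
        return failure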

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/agent.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/agent.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/agent.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -243,7 +243,8 @@
         log.warn("Agent inactive; shutting down")
         reactor.stop()
 
-    inactivityDetector = InactivityDetector(reactor, 60 * 10, becameInactive)
+    inactivityDetector = InactivityDetector(reactor,
+        config.AgentInactivityTimeoutSeconds, becameInactive)
     root = Resource()
     root.putChild("gateway", AgentGatewayResource(store,
         davRootResource, directory, inactivityDetector))
@@ -278,8 +279,9 @@
         self._timeoutSeconds = timeoutSeconds
         self._becameInactive = becameInactive
 
-        self._delayedCall = self._reactor.callLater(self._timeoutSeconds,
-            self._inactivityThresholdReached)
+        if self._timeoutSeconds > 0:
+            self._delayedCall = self._reactor.callLater(self._timeoutSeconds,
+                self._inactivityThresholdReached)
 
 
     def _inactivityThresholdReached(self):
@@ -295,19 +297,21 @@
         Call this to let the InactivityDetector know that there has been activity.
         It will reset the timeout.
         """
-        if self._delayedCall.active():
-            self._delayedCall.reset(self._timeoutSeconds)
-        else:
-            self._delayedCall = self._reactor.callLater(self._timeoutSeconds,
-                self._inactivityThresholdReached)
+        if self._timeoutSeconds > 0:
+            if self._delayedCall.active():
+                self._delayedCall.reset(self._timeoutSeconds)
+            else:
+                self._delayedCall = self._reactor.callLater(self._timeoutSeconds,
+                    self._inactivityThresholdReached)
 
 
     def stop(self):
         """
         Cancels the delayed call
         """
-        if self._delayedCall.active():
-            self._delayedCall.cancel()
+        if self._timeoutSeconds > 0:
+            if self._delayedCall.active():
+                self._delayedCall.cancel()
 
 
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/dbinspect.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/dbinspect.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/dbinspect.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -22,8 +22,6 @@
 of simple commands.
 """
 
-from caldavclientlibrary.admin.xmlaccounts.recordtypes import recordType_users, \
-    recordType_locations, recordType_resources, recordType_groups
 from calendarserver.tools import tables
 from calendarserver.tools.cmdline import utilityMain
 from pycalendar.datetime import PyCalendarDateTime
@@ -38,6 +36,7 @@
 from twistedcaldav.config import config
 from twistedcaldav.datafilters.peruserdata import PerUserDataFilter
 from twistedcaldav.directory import calendaruserproxy
+from twistedcaldav.directory.directory import DirectoryService
 from twistedcaldav.query import calendarqueryfilter
 from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
 from txdav.common.datastore.sql_tables import schema, _BIND_MODE_OWN
@@ -104,13 +103,13 @@
     except (ValueError, TypeError):
         pass
 
-    record = txn.directoryService().recordWithShortName(recordType_users, value)
+    record = txn.directoryService().recordWithShortName(DirectoryService.recordType_users, value)
     if record is None:
-        record = txn.directoryService().recordWithShortName(recordType_locations, value)
+        record = txn.directoryService().recordWithShortName(DirectoryService.recordType_locations, value)
     if record is None:
-        record = txn.directoryService().recordWithShortName(recordType_resources, value)
+        record = txn.directoryService().recordWithShortName(DirectoryService.recordType_resources, value)
     if record is None:
-        record = txn.directoryService().recordWithShortName(recordType_groups, value)
+        record = txn.directoryService().recordWithShortName(DirectoryService.recordType_groups, value)
     return record.guid if record else None
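
The lookup now walks the four standard directory record types in turn via the DirectoryService constants instead of the removed caldavclientlibrary imports. The same fall-through could be written as a loop; an illustrative (uncommitted) equivalent:

    def guidForShortName(txn, value):
        for recordType in (DirectoryService.recordType_users,
                           DirectoryService.recordType_locations,
                           DirectoryService.recordType_resources,
                           DirectoryService.recordType_groups):
            record = txn.directoryService().recordWithShortName(recordType, value)
            if record is not None:
                return record.guid
        return None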
 
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/gateway.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/gateway.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/gateway.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -30,7 +30,7 @@
 
 from calendarserver.tools.util import (
     principalForPrincipalID, proxySubprincipal, addProxy, removeProxy,
-    ProxyError, ProxyWarning
+    ProxyError, ProxyWarning, autoDisableMemcached
 )
 from calendarserver.tools.principals import getProxies, setProxies, updateRecord
 from calendarserver.tools.purge import WorkerService, PurgeOldEventsService, DEFAULT_BATCH_SIZE, DEFAULT_RETAIN_DAYS
@@ -188,6 +188,22 @@
 
     @inlineCallbacks
     def run(self):
+
+        # This method can be called as the result of an agent request.  We
+        # check to see if memcached is there for each call because the server
+        # could have stopped/started since the last time.
+
+        for pool in config.Memcached.Pools.itervalues():
+            pool.ClientEnabled = True
+        autoDisableMemcached(config)
+
+        from twistedcaldav.directory import calendaruserproxy
+        if calendaruserproxy.ProxyDBService is not None:
+            # Reset the proxy db memcacher because memcached may have come or
+            # gone since the last time through here.
+            # TODO: figure out a better way to do this
+            calendaruserproxy.ProxyDBService._memcacher._memcacheProtocol = None
+
         try:
             for command in self.commands:
                 commandName = command['command']

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/test/test_agent.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/test/test_agent.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/test/test_agent.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -145,6 +145,9 @@
 
             id.stop()
 
+            # Verify a timeout of 0 does not ever fire
+            id = InactivityDetector(clock, 0, becameInactive)
+            self.assertEquals(clock.getDelayedCalls(), [])
 
 
     class FakeRequest(object):

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/upgrade.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/upgrade.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/upgrade.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -82,6 +82,7 @@
 
     optParameters = [
         ['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
+        ['prefix', 'x', "", "Only upgrade homes with the specified GUID prefix - partial upgrade only."],
     ]
 
     def __init__(self):
@@ -142,11 +143,17 @@
         """
         Immediately stop.  The upgrade will have been run before this.
         """
-        # If we get this far the database is OK
-        if self.options["status"]:
-            self.output.write("Database OK.\n")
+        if self.store is None:
+            if self.options["status"]:
+                self.output.write("Upgrade needed.\n")
+            else:
+                self.output.write("Upgrade failed.\n")
         else:
-            self.output.write("Upgrade complete, shutting down.\n")
+            # If we get this far the database is OK
+            if self.options["status"]:
+                self.output.write("Database OK.\n")
+            else:
+                self.output.write("Upgrade complete, shutting down.\n")
         UpgraderService.started = True
 
         from twisted.internet import reactor
@@ -191,9 +198,11 @@
             data.MergeUpgrades = True
         config.addPostUpdateHooks([setMerge])
 
+
     def makeService(store):
         return UpgraderService(store, options, output, reactor, config)
 
+
     def onlyUpgradeEvents(eventDict):
         text = formatEvent(eventDict)
         output.write(logDateString() + " " + text + "\n")
@@ -203,14 +212,19 @@
         log.publisher.levels.setLogLevelForNamespace(None, LogLevel.debug)
         addObserver(onlyUpgradeEvents)
 
+
     def customServiceMaker():
         customService = CalDAVServiceMaker()
         customService.doPostImport = options["postprocess"]
         return customService
 
+
     def _patchConfig(config):
         config.FailIfUpgradeNeeded = options["status"]
+        if options["prefix"]:
+            config.UpgradeHomePrefix = options["prefix"]
 
+
     def _onShutdown():
         if not UpgraderService.started:
             print("Failed to start service.")

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/util.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/calendarserver/tools/util.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -235,23 +235,21 @@
 
 def autoDisableMemcached(config):
     """
-    If memcached is not running, set config.Memcached.ClientEnabled to False
+    Set ClientEnabled to False for each pool whose memcached is not running
     """
 
-    if not config.Memcached.Pools.Default.ClientEnabled:
-        return
+    for pool in config.Memcached.Pools.itervalues():
+        if pool.ClientEnabled:
+            try:
+                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+                s.connect((pool.BindAddress, pool.Port))
+                s.close()
 
-    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+            except socket.error:
+                pool.ClientEnabled = False
 
-    try:
-        s.connect((config.Memcached.Pools.Default.BindAddress, config.Memcached.Pools.Default.Port))
-        s.close()
 
-    except socket.error:
-        config.Memcached.Pools.Default.ClientEnabled = False
 
-
-
 def setupMemcached(config):
     #
     # Connect to memcached
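
autoDisableMemcached now probes every configured pool rather than only the default one: a plain TCP connect to each pool's address, disabling just the pools that fail. A standalone sketch of the probe, assuming each pool object exposes BindAddress, Port and ClientEnabled as in the config:

    import socket

    def disableUnreachablePools(pools):
        for pool in pools.itervalues():  # Python 2 idiom, as in this codebase
            if not pool.ClientEnabled:
                continue
            try:
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.connect((pool.BindAddress, pool.Port))
                s.close()
            except socket.error:
                pool.ClientEnabled = False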

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/conf/auth/accounts-test.xml
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/conf/auth/accounts-test.xml	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/conf/auth/accounts-test.xml	2013-11-01 22:25:30 UTC (rev 11871)
@@ -89,7 +89,7 @@
     <first-name>ま</first-name>
     <last-name>だ</last-name>
   </user>
-  <user repeat="99">
+  <user repeat="101">
     <uid>user%02d</uid>
     <uid>User %02d</uid>
     <guid>user%02d</guid>

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/conf/caldavd-apple.plist
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/conf/caldavd-apple.plist	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/conf/caldavd-apple.plist	2013-11-01 22:25:30 UTC (rev 11871)
@@ -111,11 +111,18 @@
             <string>-c log_lock_waits=TRUE</string>
             <string>-c deadlock_timeout=10</string>
             <string>-c log_line_prefix='%m [%p] '</string>
+            <string>-c logging_collector=on</string>
+            <string>-c log_truncate_on_rotation=on</string>
+            <string>-c log_directory=/var/log/caldavd/postgresql</string>
+            <string>-c log_filename=postgresql_%w.log</string>
+            <string>-c log_rotation_age=1440</string>
         </array>
         <key>ExtraConnections</key>
         <integer>20</integer>
         <key>ClusterName</key>
         <string>cluster.pg</string>
+        <key>LogFile</key>
+        <string>xpg_ctl.log</string>
     </dict>
 
     <!-- Data root -->

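A note on the new PostgreSQL logging options: log_rotation_age is measured in minutes, so 1440 rotates once per day, and because log_filename uses the %w strftime escape (day of week, 0-6) together with log_truncate_on_rotation=on, the server keeps a rolling seven-file window, each day's log overwriting the file left over from the same weekday of the previous week.
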
Copied: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/clients.plist (from rev 11870, CalendarServer/trunk/contrib/performance/loadtest/clients.plist)
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/clients.plist	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/clients.plist	2013-11-01 22:25:30 UTC (rev 11871)
@@ -0,0 +1,445 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+    Copyright (c) 2011-2013 Apple Inc. All rights reserved.
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+  -->
+
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+	<dict>
+		<!-- Define the kinds of software and user behavior the load simulation
+			will model. -->
+		<key>clients</key>
+
+		<!-- Have as many different kinds of software and user behavior configurations
+			as you want. Each is a dict. -->
+		<array>
+
+			<dict>
+
+				<!-- Here is an OS X client simulator. -->
+				<key>software</key>
+				<string>contrib.performance.loadtest.ical.OS_X_10_7</string>
+
+				<!-- Arguments to use to initialize the OS_X_10_7 instance. -->
+				<key>params</key>
+				<dict>
+					<!-- Name that appears in logs. -->
+					<key>title</key>
+					<string>10.7</string>
+	
+					<!-- OS_X_10_7 can poll the calendar home at some interval. This is
+						in seconds. -->
+					<key>calendarHomePollInterval</key>
+					<integer>30</integer>
+
+					<!-- If the server advertises xmpp push, OS_X_10_7 can wait for notifications
+						about calendar home changes instead of polling for them periodically. If
+						this option is true, the client looks for the server's xmpp push
+						advertisement and uses it when available, falling back to polling
+						when no xmpp push is advertised. -->
+					<key>supportPush</key>
+					<false />
+
+					<key>supportAmpPush</key>
+					<true/>
+					<key>ampPushHost</key>
+					<string>localhost</string>
+					<key>ampPushPort</key>
+					<integer>62311</integer>
+				</dict>
+
+				<!-- The profiles define certain types of user behavior on top of the
+					client software being simulated. -->
+				<key>profiles</key>
+				<array>
+
+					<!-- First an event-creating profile, which will periodically create
+						new events at a random time on a random calendar. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Eventer</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<true/>
+
+							<!-- Define the interval (in seconds) at which this profile will use
+								its client to create a new event. -->
+							<key>interval</key>
+							<integer>60</integer>
+
+							<!-- Define how start times (DTSTART) for the randomly generated events
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>eventStartDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps
+									in the near future, limited to certain days of the week and certain hours
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+
+							<!-- Define how recurrences are created. -->
+							<key>recurrenceDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized.  We have a fixed set of
+								     RRULEs defined for this distribution and pick one based on its
+								     weight. -->
+								<key>type</key>
+								<string>contrib.performance.stats.RecurrenceDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- False to disable RRULEs -->
+									<key>allowRecurrence</key>
+									<true/>
+
+									<!-- These are the weights for the specific set of RRULEs. -->
+									<key>weights</key>
+									<dict>
+										<!-- Half of all events will be non-recurring -->
+										<key>none</key>
+										<integer>50</integer>
+										
+										<!-- Daily and weekly are pretty common -->
+										<key>daily</key>
+										<integer>10</integer>
+										<key>weekly</key>
+										<integer>20</integer>
+										
+										<!-- Monthly, yearly, daily & weekly limit not so common -->
+										<key>monthly</key>
+										<integer>2</integer>
+										<key>yearly</key>
+										<integer>1</integer>
+										<key>dailylimit</key>
+										<integer>2</integer>
+										<key>weeklylimit</key>
+										<integer>5</integer>
+										
+										<!-- Work days pretty common -->
+										<key>workdays</key>
+										<integer>10</integer>
+									</dict>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- This profile invites some number of new attendees to new events. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.RealisticInviter</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<true/>
+
+							<!-- Define the frequency at which new invitations will be sent out. -->
+							<key>sendInvitationDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.NormalDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- mu gives the mean of the normal distribution (in seconds). -->
+									<key>mu</key>
+									<integer>60</integer>
+
+									<!-- and sigma gives its standard deviation. -->
+									<key>sigma</key>
+									<integer>5</integer>
+								</dict>
+							</dict>
+
+							<!-- Define the distribution of who will be invited to an event.
+							
+								When inviteeClumping is turned on, each invitee is drawn from a sample
+								of users "close to" the organizer by account index. If the clumping is
+								too "tight" for the requested number of attendees, invites for those
+								larger numbers will simply fail (the sim will report that situation).
+								
+								When inviteeClumping is off, invitees will be sampled across the entire
+								range of account indexes. In that case the distribution ought to be a
+								UniformIntegerDistribution with min=0 and max set to the number of accounts.
+							-->
+							<key>inviteeDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.UniformIntegerDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- The minimum value (inclusive) of the uniform distribution. -->
+									<key>min</key>
+									<integer>0</integer>
+									<!-- The maximum value (exclusive) of the uniform distribution. -->
+									<key>max</key>
+									<integer>99</integer>
+								</dict>
+							</dict>
+
+							<key>inviteeClumping</key>
+							<true/>
+
+							<!-- Define the distribution of how many attendees will be invited to an event.
+							
+								LogNormal is the best fit to observed data.
+
+
+								For LogNormal, "mode" is the peak and "median" is the 50% point. For
+								invites, mode should typically be 1, and median whatever matches the
+								user behavior. Our typical median is 6.
+							-->
+							<key>inviteeCountDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.LogNormalDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- mode - peak-->
+									<key>mode</key>
+									<integer>1</integer>
+									<!-- median - 50% point -->
+									<key>median</key>
+									<integer>6</integer>
+									<!-- maximum -->
+									<key>maximum</key>
+									<real>60</real>
+								</dict>
+							</dict>
+
+							<!-- Define how start times (DTSTART) for the randomly generated events
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>eventStartDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps
+									in the near future, limited to certain days of the week and certain hours
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+
+							<!-- Define how recurrences are created. -->
+							<key>recurrenceDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized.  We have a fixed set of
+								     RRULEs defined for this distribution and pick one based on its
+								     weight. -->
+								<key>type</key>
+								<string>contrib.performance.stats.RecurrenceDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- False to disable RRULEs -->
+									<key>allowRecurrence</key>
+									<true/>
+
+									<!-- These are the weights for the specific set of RRULEs. -->
+									<key>weights</key>
+									<dict>
+										<!-- Half of all events will be non-recurring -->
+										<key>none</key>
+										<integer>50</integer>
+										
+										<!-- Daily and weekly are pretty common -->
+										<key>daily</key>
+										<integer>10</integer>
+										<key>weekly</key>
+										<integer>20</integer>
+										
+										<!-- Monthly, yearly, daily & weekly limit not so common -->
+										<key>monthly</key>
+										<integer>2</integer>
+										<key>yearly</key>
+										<integer>1</integer>
+										<key>dailylimit</key>
+										<integer>2</integer>
+										<key>weeklylimit</key>
+										<integer>5</integer>
+										
+										<!-- Work days pretty common -->
+										<key>workdays</key>
+										<integer>10</integer>
+									</dict>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- This profile accepts invitations to events, handles cancels, and
+					     handles replies received. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Accepter</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<true/>
+
+							<!-- Define how long to wait after seeing a new invitation before
+								accepting it.
+
+								For LogNormal, "mode" is the peak and "median" is the 50% cumulative value
+								(i.e., half of the users have accepted by that time).
+							-->
+							<key>acceptDelayDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.LogNormalDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- mode - peak-->
+									<key>mode</key>
+									<integer>300</integer>
+									<!-- median - 50% done-->
+									<key>median</key>
+									<integer>1800</integer>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- A task-creating profile, which will periodically create
+						new tasks at a random time on a random calendar. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Tasker</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<true/>
+
+							<!-- Define the interval (in seconds) at which this profile will use
+								its client to create a new task. -->
+							<key>interval</key>
+							<integer>300</integer>
+
+							<!-- Define how due times (DUE) for the randomly generated tasks
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>taskDueDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps
+									in the near future, limited to certain days of the week and certain hours
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+				</array>
+
+				<!-- Determine the frequency at which this client configuration will
+					appear among the clients created by the load tester. -->
+				<key>weight</key>
+				<integer>1</integer>
+			</dict>
+		</array>
+	</dict>
+</plist>

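For context on the weight key at the end of the new file: it sets how often this client configuration is instantiated relative to the other entries in the clients array. A toy illustration of the weighting (the titles and the second entry are invented; the real sampling lives in contrib.performance.loadtest.population):

    import random

    configs = [("10.7", 1), ("10.6", 2)]  # (title, weight) pairs; "10.6" is made up
    pool = [title for title, weight in configs for _ in range(weight)]
    counts = {"10.7": 0, "10.6": 0}
    for _ in range(3000):
        counts[random.choice(pool)] += 1
    print(counts)  # roughly {'10.7': 1000, '10.6': 2000}: weight 2 shows up twice as often
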
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/config.dist.plist
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/config.dist.plist	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/config.dist.plist	2013-11-01 22:25:30 UTC (rev 11871)
@@ -50,10 +50,19 @@
 			<integer>8080</integer>
 		</dict>
 
-		<!--  Define whether client data should be saved and re-used. -->
+		<!--  Define whether the server provides a stats socket. -->
+		<key>serverStats</key>
+		<dict>
+			<key>enabled</key>
+			<true/>
+			<key>Port</key>
+			<integer>8100</integer>
+		</dict>
+
+		<!--  Define whether client data should be re-used; it will always be saved to the specified path. -->
 		<key>clientDataSerialization</key>
 		<dict>
-			<key>Enabled</key>
+			<key>UseOldData</key>
 			<true/>
 			<key>Path</key>
 			<string>/tmp/sim</string>
@@ -119,471 +128,6 @@
 
 		</dict>
 
-		<!-- Define the kinds of software and user behavior the load simulation 
-			will simulate. -->
-		<key>clients</key>
-
-		<!-- Have as many different kinds of software and user behavior configurations 
-			as you want. Each is a dict -->
-		<array>
-
-			<dict>
-
-				<!-- Here is a OS X client simulator. -->
-				<key>software</key>
-				<string>contrib.performance.loadtest.ical.OS_X_10_7</string>
-
-				<!-- Arguments to use to initialize the OS_X_10_7 instance. -->
-				<key>params</key>
-				<dict>
-					<!-- Name that appears in logs. -->
-					<key>title</key>
-					<string>10.7</string>
-
-					<!-- OS_X_10_7 can poll the calendar home at some interval. This is 
-						in seconds. -->
-					<key>calendarHomePollInterval</key>
-					<integer>30</integer>
-
-					<!-- If the server advertises xmpp push, OS_X_10_7 can wait for notifications 
-						about calendar home changes instead of polling for them periodically. If 
-						this option is true, then look for the server advertisement for xmpp push 
-						and use it if possible. Still fall back to polling if there is no xmpp push 
-						advertised. -->
-					<key>supportPush</key>
-					<false />
-				</dict>
-
-				<!-- The profiles define certain types of user behavior on top of the 
-					client software being simulated. -->
-				<key>profiles</key>
-				<array>
-
-					<!-- First an event-creating profile, which will periodically create 
-						new events at a random time on a random calendar. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Eventer</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<true/>
-
-							<!-- Define the interval (in seconds) at which this profile will use 
-								its client to create a new event. -->
-							<key>interval</key>
-							<integer>60</integer>
-
-							<!-- Define how start times (DTSTART) for the randomly generated events 
-								will be selected. This is an example of a "Distribution" parameter. The value 
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>eventStartDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-
-							<!-- Define how recurrences are created. -->
-							<key>recurrenceDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized.  We have a fixed set of
-								     RRULEs defined for this distribution and pick each based on a
-								     weight. -->
-								<key>type</key>
-								<string>contrib.performance.stats.RecurrenceDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- False to disable RRULEs -->
-									<key>allowRecurrence</key>
-									<true/>
-
-									<!-- These are the weights for the specific set of RRULEs. -->
-									<key>weights</key>
-									<dict>
-										<!-- Half of all events will be non-recurring -->
-										<key>none</key>
-										<integer>50</integer>
-										
-										<!-- Daily and weekly are pretty common -->
-										<key>daily</key>
-										<integer>10</integer>
-										<key>weekly</key>
-										<integer>20</integer>
-										
-										<!-- Monthly, yearly, daily & weekly limit not so common -->
-										<key>monthly</key>
-										<integer>2</integer>
-										<key>yearly</key>
-										<integer>1</integer>
-										<key>dailylimit</key>
-										<integer>2</integer>
-										<key>weeklylimit</key>
-										<integer>5</integer>
-										
-										<!-- Work days pretty common -->
-										<key>workdays</key>
-										<integer>10</integer>
-									</dict>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- This profile invites new attendees to existing events. 
-					     This profile should no longer be used - use RealisticInviter instead. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Inviter</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<false/>
-
-							<!-- Define the frequency at which new invitations will be sent out. -->
-							<key>sendInvitationDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.NormalDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- mu gives the mean of the normal distribution (in seconds). -->
-									<key>mu</key>
-									<integer>60</integer>
-
-									<!-- and sigma gives its standard deviation. -->
-									<key>sigma</key>
-									<integer>5</integer>
-								</dict>
-							</dict>
-
-							<!-- Define the distribution of who will be invited to an event. Each 
-								set of credentials loaded by the load tester has an index; samples from this 
-								distribution will be added to that index to arrive at the index of some other 
-								credentials, which will be the target of the invitation. -->
-							<key>inviteeDistanceDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.UniformIntegerDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- The minimum value (inclusive) of the uniform distribution. -->
-									<key>min</key>
-									<integer>-100</integer>
-									<!-- The maximum value (exclusive) of the uniform distribution. -->
-									<key>max</key>
-									<integer>101</integer>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- This profile invites some number of new attendees to new events. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.RealisticInviter</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<true/>
-
-							<!-- Define the frequency at which new invitations will be sent out. -->
-							<key>sendInvitationDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.NormalDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- mu gives the mean of the normal distribution (in seconds). -->
-									<key>mu</key>
-									<integer>60</integer>
-
-									<!-- and sigma gives its standard deviation. -->
-									<key>sigma</key>
-									<integer>5</integer>
-								</dict>
-							</dict>
-
-							<!-- Define the distribution of who will be invited to an event.
-							
-								When inviteeClumping is turned on each invitee is based on a sample of
-								users "close to" the organizer based on account index. If the clumping
-								is too "tight" for the requested number of attendees, then invites for
-								those larger numbers will simply fail (the sim will report that situation).
-								
-								When inviteeClumping is off invitees will be sampled across an entire
-								range of account indexes. In this case the distribution ought to be a
-								UniformIntegerDistribution with min=0 and max set to the number of accounts.
-							-->
-							<key>inviteeDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.UniformIntegerDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- The minimum value (inclusive) of the uniform distribution. -->
-									<key>min</key>
-									<integer>-100</integer>
-									<!-- The maximum value (exclusive) of the uniform distribution. -->
-									<key>max</key>
-									<integer>101</integer>
-								</dict>
-							</dict>
-
-							<key>inviteeClumping</key>
-							<true/>
-
-							<!-- Define the distribution of how many attendees will be invited to an event.
-							
-								LogNormal is the best fit to observed data.
-
-
-								For LogNormal "mode" is the peak, "mean" is the mean value.	For invites,
-								mode should typically be 1, and mean whatever matches the user behavior.
-								Our typical mean is 6. 							
-							     -->
-							<key>inviteeCountDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.LogNormalDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- mode - peak-->
-									<key>mode</key>
-									<integer>1</integer>
-									<!-- mean - average-->
-									<key>median</key>
-									<integer>6</integer>
-									<!-- maximum -->
-									<key>maximum</key>
-									<real>100</real>
-								</dict>
-							</dict>
-
-							<!-- Define how start times (DTSTART) for the randomly generated events 
-								will be selected. This is an example of a "Distribution" parameter. The value 
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>eventStartDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-
-							<!-- Define how recurrences are created. -->
-							<key>recurrenceDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized.  We have a fixed set of
-								     RRULEs defined for this distribution and pick each based on a
-								     weight. -->
-								<key>type</key>
-								<string>contrib.performance.stats.RecurrenceDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- False to disable RRULEs -->
-									<key>allowRecurrence</key>
-									<true/>
-
-									<!-- These are the weights for the specific set of RRULEs. -->
-									<key>weights</key>
-									<dict>
-										<!-- Half of all events will be non-recurring -->
-										<key>none</key>
-										<integer>50</integer>
-										
-										<!-- Daily and weekly are pretty common -->
-										<key>daily</key>
-										<integer>10</integer>
-										<key>weekly</key>
-										<integer>20</integer>
-										
-										<!-- Monthly, yearly, daily & weekly limit not so common -->
-										<key>monthly</key>
-										<integer>2</integer>
-										<key>yearly</key>
-										<integer>1</integer>
-										<key>dailylimit</key>
-										<integer>2</integer>
-										<key>weeklylimit</key>
-										<integer>5</integer>
-										
-										<!-- Work days pretty common -->
-										<key>workdays</key>
-										<integer>10</integer>
-									</dict>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- This profile accepts invitations to events, handles cancels, and
-					     handles replies received. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Accepter</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<true/>
-
-							<!-- Define how long to wait after seeing a new invitation before
-								accepting it.
-
-								For LogNormal "mode" is the peak, "median" is the 50% cummulative value
-								(i.e., half of the user have accepted by that time).								
-							-->
-							<key>acceptDelayDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.LogNormalDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- mode - peak-->
-									<key>mode</key>
-									<integer>300</integer>
-									<!-- median - 50% done-->
-									<key>median</key>
-									<integer>1800</integer>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- A task-creating profile, which will periodically create 
-						new tasks at a random time on a random calendar. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Tasker</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<true/>
-
-							<!-- Define the interval (in seconds) at which this profile will use 
-								its client to create a new task. -->
-							<key>interval</key>
-							<integer>300</integer>
-
-							<!-- Define how due times (DUE) for the randomly generated tasks 
-								will be selected. This is an example of a "Distribution" parameter. The value 
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>taskDueDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-				</array>
-
-				<!-- Determine the frequency at which this client configuration will 
-					appear in the clients which are created by the load tester. -->
-				<key>weight</key>
-				<integer>1</integer>
-			</dict>
-		</array>
-
 		<!-- Define some log observers to report on the load test. -->
 		<key>observers</key>
 		<array>

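The new serverStats stanza above lets the sim sample server-side statistics over a TCP socket; the readStatsSock helper used by sim.py later in this change is the authoritative reader. A hedged standalone sketch, assuming the socket returns one JSON document per connection:

    import json
    import socket

    def readServerStats(host, port):
        # Read everything the stats socket sends for one connection, then parse.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((host, port))
        chunks = []
        try:
            while True:
                data = s.recv(8192)
                if not data:
                    break
                chunks.append(data)
        finally:
            s.close()
        return json.loads("".join(chunks))

    stats = readServerStats("localhost", 8100)  # Port from the serverStats stanza
    print(stats["5 Minutes"]["requests"])       # key consumed by updateStats() in sim.py
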
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/config.plist
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/config.plist	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/config.plist	2013-11-01 22:25:30 UTC (rev 11871)
@@ -37,10 +37,19 @@
 			<integer>8080</integer>
 		</dict>
 
-		<!--  Define whether client data should be saved and re-used. -->
+		<!--  Define whether the server provides a stats socket. -->
+		<key>serverStats</key>
+		<dict>
+			<key>enabled</key>
+			<true/>
+			<key>Port</key>
+			<integer>8100</integer>
+		</dict>
+
+		<!--  Define whether client data should be re-used; it will always be saved to the specified path. -->
 		<key>clientDataSerialization</key>
 		<dict>
-			<key>Enabled</key>
+			<key>UseOldData</key>
 			<true/>
 			<key>Path</key>
 			<string>/tmp/sim</string>
@@ -106,429 +115,6 @@
 
 		</dict>
 
-		<!-- Define the kinds of software and user behavior the load simulation
-			will simulate. -->
-		<key>clients</key>
-
-		<!-- Have as many different kinds of software and user behavior configurations
-			as you want. Each is a dict -->
-		<array>
-
-			<dict>
-
-				<!-- Here is a OS X client simulator. -->
-				<key>software</key>
-				<string>contrib.performance.loadtest.ical.OS_X_10_7</string>
-
-				<!-- Arguments to use to initialize the OS_X_10_7 instance. -->
-				<key>params</key>
-				<dict>
-					<!-- Name that appears in logs. -->
-					<key>title</key>
-					<string>10.7</string>
-	
-					<!-- OS_X_10_7 can poll the calendar home at some interval. This is
-						in seconds. -->
-					<key>calendarHomePollInterval</key>
-					<integer>30</integer>
-
-					<!-- If the server advertises xmpp push, OS_X_10_7 can wait for notifications
-						about calendar home changes instead of polling for them periodically. If
-						this option is true, then look for the server advertisement for xmpp push
-						and use it if possible. Still fall back to polling if there is no xmpp push
-						advertised. -->
-					<key>supportPush</key>
-					<false />
-
-					<key>supportAmpPush</key>
-					<true/>
-					<key>ampPushHost</key>
-					<string>localhost</string>
-					<key>ampPushPort</key>
-					<integer>62311</integer>
-				</dict>
-
-				<!-- The profiles define certain types of user behavior on top of the
-					client software being simulated. -->
-				<key>profiles</key>
-				<array>
-
-					<!-- First an event-creating profile, which will periodically create
-						new events at a random time on a random calendar. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Eventer</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<true/>
-
-							<!-- Define the interval (in seconds) at which this profile will use
-								its client to create a new event. -->
-							<key>interval</key>
-							<integer>60</integer>
-
-							<!-- Define how start times (DTSTART) for the randomly generated events
-								will be selected. This is an example of a "Distribution" parameter. The value
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>eventStartDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps
-									in the near future, limited to certain days of the week and certain hours
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-
-							<!-- Define how recurrences are created. -->
-							<key>recurrenceDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized.  We have a fixed set of
-								     RRULEs defined for this distribution and pick each based on a
-								     weight. -->
-								<key>type</key>
-								<string>contrib.performance.stats.RecurrenceDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- False to disable RRULEs -->
-									<key>allowRecurrence</key>
-									<true/>
-
-									<!-- These are the weights for the specific set of RRULEs. -->
-									<key>weights</key>
-									<dict>
-										<!-- Half of all events will be non-recurring -->
-										<key>none</key>
-										<integer>50</integer>
-										
-										<!-- Daily and weekly are pretty common -->
-										<key>daily</key>
-										<integer>10</integer>
-										<key>weekly</key>
-										<integer>20</integer>
-										
-										<!-- Monthly, yearly, daily & weekly limit not so common -->
-										<key>monthly</key>
-										<integer>2</integer>
-										<key>yearly</key>
-										<integer>1</integer>
-										<key>dailylimit</key>
-										<integer>2</integer>
-										<key>weeklylimit</key>
-										<integer>5</integer>
-										
-										<!-- Work days pretty common -->
-										<key>workdays</key>
-										<integer>10</integer>
-									</dict>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- This profile invites some number of new attendees to new events. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.RealisticInviter</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<true/>
-
-							<!-- Define the frequency at which new invitations will be sent out. -->
-							<key>sendInvitationDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.NormalDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- mu gives the mean of the normal distribution (in seconds). -->
-									<key>mu</key>
-									<integer>60</integer>
-
-									<!-- and sigma gives its standard deviation. -->
-									<key>sigma</key>
-									<integer>5</integer>
-								</dict>
-							</dict>
-
-							<!-- Define the distribution of who will be invited to an event.
-							
-								When inviteeClumping is turned on each invitee is based on a sample of
-								users "close to" the organizer based on account index. If the clumping
-								is too "tight" for the requested number of attendees, then invites for
-								those larger numbers will simply fail (the sim will report that situation).
-								
-								When inviteeClumping is off invitees will be sampled across an entire
-								range of account indexes. In this case the distribution ought to be a
-								UniformIntegerDistribution with min=0 and max set to the number of accounts.
-							-->
-							<key>inviteeDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.UniformIntegerDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- The minimum value (inclusive) of the uniform distribution. -->
-									<key>min</key>
-									<integer>0</integer>
-									<!-- The maximum value (exclusive) of the uniform distribution. -->
-									<key>max</key>
-									<integer>99</integer>
-								</dict>
-							</dict>
-
-							<key>inviteeClumping</key>
-							<true/>
-
-							<!-- Define the distribution of how many attendees will be invited to an event.
-							
-								LogNormal is the best fit to observed data.
-
-
-								For LogNormal "mode" is the peak, "mean" is the mean value.	For invites,
-								mode should typically be 1, and mean whatever matches the user behavior.
-								Our typical mean is 6. 							
-							     -->
-							<key>inviteeCountDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.LogNormalDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- mode - peak-->
-									<key>mode</key>
-									<integer>1</integer>
-									<!-- mean - average-->
-									<key>median</key>
-									<integer>6</integer>
-									<!-- maximum -->
-									<key>maximum</key>
-									<real>60</real>
-								</dict>
-							</dict>
-
-							<!-- Define how start times (DTSTART) for the randomly generated events
-								will be selected. This is an example of a "Distribution" parameter. The value
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>eventStartDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps
-									in the near future, limited to certain days of the week and certain hours
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-
-							<!-- Define how recurrences are created. -->
-							<key>recurrenceDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized.  We have a fixed set of
-								     RRULEs defined for this distribution and pick each based on a
-								     weight. -->
-								<key>type</key>
-								<string>contrib.performance.stats.RecurrenceDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- False to disable RRULEs -->
-									<key>allowRecurrence</key>
-									<true/>
-
-									<!-- These are the weights for the specific set of RRULEs. -->
-									<key>weights</key>
-									<dict>
-										<!-- Half of all events will be non-recurring -->
-										<key>none</key>
-										<integer>50</integer>
-										
-										<!-- Daily and weekly are pretty common -->
-										<key>daily</key>
-										<integer>10</integer>
-										<key>weekly</key>
-										<integer>20</integer>
-										
-										<!-- Monthly, yearly, daily & weekly limit not so common -->
-										<key>monthly</key>
-										<integer>2</integer>
-										<key>yearly</key>
-										<integer>1</integer>
-										<key>dailylimit</key>
-										<integer>2</integer>
-										<key>weeklylimit</key>
-										<integer>5</integer>
-										
-										<!-- Work days pretty common -->
-										<key>workdays</key>
-										<integer>10</integer>
-									</dict>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- This profile accepts invitations to events, handles cancels, and
-					     handles replies received. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Accepter</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<true/>
-
-							<!-- Define how long to wait after seeing a new invitation before
-								accepting it.
-
-								For LogNormal "mode" is the peak, "median" is the 50% cummulative value
-								(i.e., half of the user have accepted by that time).								
-							-->
-							<key>acceptDelayDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.LogNormalDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- mode - peak-->
-									<key>mode</key>
-									<integer>300</integer>
-									<!-- median - 50% done-->
-									<key>median</key>
-									<integer>1800</integer>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- A task-creating profile, which will periodically create
-						new tasks at a random time on a random calendar. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Tasker</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<true/>
-
-							<!-- Define the interval (in seconds) at which this profile will use
-								its client to create a new task. -->
-							<key>interval</key>
-							<integer>300</integer>
-
-							<!-- Define how due times (DUE) for the randomly generated tasks
-								will be selected. This is an example of a "Distribution" parameter. The value
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>taskDueDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps
-									in the near future, limited to certain days of the week and certain hours
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-				</array>
-
-				<!-- Determine the frequency at which this client configuration will
-					appear in the clients which are created by the load tester. -->
-				<key>weight</key>
-				<integer>1</integer>
-			</dict>
-		</array>
-
 		<!-- Define some log observers to report on the load test. -->
 		<key>observers</key>
 		<array>

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/population.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/population.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/population.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -396,6 +396,7 @@
         self._failed_clients = []
         self._failed_sim = collections.defaultdict(int)
         self._startTime = datetime.now()
+        self._expired_data = None
 
         # Load parameters from config
         if "thresholdsPath" in params:
@@ -423,6 +424,13 @@
             self._fail_cut_off = params["failCutoff"]
 
 
+    def observe(self, event):
+        if event.get('type') == 'sim-expired':
+            self.simExpired(event)
+        else:
+            super(ReportStatistics, self).observe(event)
+
+
     def countUsers(self):
         return len(self._users)
 
@@ -454,6 +462,10 @@
         self._failed_sim[event['reason']] += 1
 
 
+    def simExpired(self, event):
+        self._expired_data = event['reason']
+
+
     def printMiscellaneous(self, output, items):
         maxColumnWidth = str(len(max(items.iterkeys(), key=len)))
         fmt = "%" + maxColumnWidth + "s : %-s\n"
@@ -480,7 +492,7 @@
             if result is not None:
                 differences.append(result)
 
-        return mean(differences) if differences else "None"
+        return ("%-8.4f" % mean(differences)) if differences else "None"
 
 
     def qos_value(self, method, value):
@@ -518,7 +530,7 @@
             'Start time': self._startTime.strftime('%m/%d %H:%M:%S'),
             'Run time': "%02d:%02d:%02d" % (runHours, runMinutes, runSeconds),
             'CPU Time': "user %-5.2f sys %-5.2f total %02d:%02d:%02d" % (cpuUser, cpuSys, cpuHours, cpuMinutes, cpuSeconds,),
-            'QoS': "%-8.4f" % (self.qos(),),
+            'QoS': self.qos(),
         }
         if self.countClientFailures() > 0:
             items['Failed clients'] = self.countClientFailures()
@@ -527,8 +539,22 @@
         if self.countSimFailures() > 0:
             for reason, count in self._failed_sim.items():
                 items['Failed operation'] = "%s : %d times" % (reason, count,)
+        output.write("* Client\n")
         self.printMiscellaneous(output, items)
         output.write("\n")
+
+        if self._expired_data is not None:
+            items = {
+                "Req/sec" : "%.1f" % (self._expired_data[0],),
+                "Response": "%.1f (ms)" % (self._expired_data[1],),
+                "Slots": "%.2f" % (self._expired_data[2],),
+                "CPU": "%.1f%%" % (self._expired_data[3],),
+            }
+            output.write("* Server (Last 5 minutes)\n")
+            self.printMiscellaneous(output, items)
+            output.write("\n")
+        output.write("* Details\n")
+
         self.printHeader(output, [
                 (label, width)
                 for (label, width, _ignore_fmt)

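The observe()/simExpired() additions above route a new 'sim-expired' event into the summary report. A self-contained sketch of just that routing (MiniReporter stands in for ReportStatistics; the real observe() forwards all other event types to its base class):

    class MiniReporter(object):
        def __init__(self):
            self._expired_data = None

        def observe(self, event):
            # Route sim-expired events to their handler; everything else goes
            # to the superclass observer in the real ReportStatistics.
            if event.get('type') == 'sim-expired':
                self.simExpired(event)

        def simExpired(self, event):
            self._expired_data = event['reason']

    r = MiniReporter()
    # 'reason' carries (req/sec, response ms, slots, cpu%) per the report output above
    r.observe({'type': 'sim-expired', 'reason': (12.3, 250.0, 1.25, 40.0)})
    print(r._expired_data)
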
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/sim.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/sim.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/sim.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -23,11 +23,15 @@
 from plistlib import readPlist
 from random import Random
 from sys import argv, stdout
+from urlparse import urlsplit
 from xml.parsers.expat import ExpatError
+import json
+import shutil
+import socket
 
 from twisted.python import context
 from twisted.python.filepath import FilePath
-from twisted.python.log import startLogging, addObserver, removeObserver
+from twisted.python.log import startLogging, addObserver, removeObserver, msg
 from twisted.python.usage import UsageError, Options
 from twisted.python.reflect import namedAny
 
@@ -56,6 +60,11 @@
 
 
 
+def safeDivision(value, total, factor=1):
+    return value * factor / total if total else 0
+
+
+
 def generateRecords(count, uidPattern="user%d", passwordPattern="user%d",
     namePattern="User %d", emailPattern="user%d at example.com"):
     for i in xrange(count):
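
A few example values for the safeDivision helper added above; updateStats() later in this file uses it so an empty sample window yields 0 instead of raising ZeroDivisionError:

    print(safeDivision(120.0, 5 * 60))     # requests per second over 5 minutes -> 0.4
    print(safeDivision(10.0, 0))           # empty window -> 0, no ZeroDivisionError
    print(safeDivision(3, 4, factor=100))  # scaled to a percentage -> 75
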
@@ -121,6 +130,7 @@
     """
     config = None
     _defaultConfig = FilePath(__file__).sibling("config.plist")
+    _defaultClients = FilePath(__file__).sibling("clients.plist")
 
     optParameters = [
         ("runtime", "t", None,
@@ -129,6 +139,9 @@
         ("config", None, _defaultConfig,
          "Configuration plist file name from which to read simulation parameters.",
          FilePath),
+        ("clients", None, _defaultClients,
+         "Configuration plist file name from which to read client parameters.",
+         FilePath),
         ]
 
 
@@ -181,7 +194,23 @@
         finally:
             configFile.close()
 
+        try:
+            clientFile = self['clients'].open()
+        except IOError, e:
+            raise UsageError("--clients %s: %s" % (
+                    self['clients'].path, e.strerror))
+        try:
+            try:
+                client_config = readPlist(clientFile)
+                self.config["clients"] = client_config["clients"]
+                if "arrivalInterval" in client_config:
+                    self.config["arrival"]["params"]["interval"] = client_config["arrivalInterval"]
+            except ExpatError, e:
+                raise UsageError("--clients %s: %s" % (self['clients'].path, e))
+        finally:
+            clientFile.close()
 
+
 Arrival = namedtuple('Arrival', 'factory parameters')
 
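The --clients handling above merges the separate clients.plist into the main simulator config. A minimal sketch of that merge with a stand-in config dict (the path is only an example):

    from plistlib import readPlist

    config = {"arrival": {"params": {}}}   # stand-in for the parsed main config
    client_config = readPlist("contrib/performance/loadtest/clients.plist")
    config["clients"] = client_config["clients"]
    if "arrivalInterval" in client_config:
        config["arrival"]["params"]["interval"] = client_config["arrivalInterval"]
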
 
@@ -200,7 +229,7 @@
         user information about the accounts on the server being put
         under load.
     """
-    def __init__(self, server, principalPathTemplate, webadminPort, serializationPath, arrival, parameters, observers=None,
+    def __init__(self, server, principalPathTemplate, webadminPort, serverStats, serializationPath, arrival, parameters, observers=None,
                  records=None, reactor=None, runtime=None, workers=None,
                  configTemplate=None, workerID=None, workerCount=1):
         if reactor is None:
@@ -208,6 +237,7 @@
         self.server = server
         self.principalPathTemplate = principalPathTemplate
         self.webadminPort = webadminPort
+        self.serverStats = serverStats
         self.serializationPath = serializationPath
         self.arrival = arrival
         self.parameters = parameters
@@ -260,15 +290,17 @@
                 principalPathTemplate = config['principalPathTemplate']
 
             if 'clientDataSerialization' in config:
-                if config['clientDataSerialization']['Enabled']:
-                    serializationPath = config['clientDataSerialization']['Path']
-                    if not isdir(serializationPath):
-                        try:
-                            mkdir(serializationPath)
-                        except OSError:
-                            print("Unable to create client data serialization directory: %s" % (serializationPath))
-                            print("Please consult the clientDataSerialization stanza of contrib/performance/loadtest/config.plist")
-                            raise
+                serializationPath = config['clientDataSerialization']['Path']
+                if not config['clientDataSerialization']['UseOldData'] and isdir(serializationPath):
+                    # Discard previously saved client data before this run
+                    shutil.rmtree(serializationPath)
+                if not isdir(serializationPath):
+                    try:
+                        mkdir(serializationPath)
+                    except OSError:
+                        print("Unable to create client data serialization directory: %s" % (serializationPath))
+                        print("Please consult the clientDataSerialization stanza of contrib/performance/loadtest/config.plist")
+                        raise
 
             if 'arrival' in config:
                 arrival = Arrival(
@@ -310,6 +342,12 @@
             if config['webadmin']['enabled']:
                 webadminPort = config['webadmin']['HTTPPort']
 
+        serverStats = None
+        if 'serverStats' in config:
+            if config['serverStats']['enabled']:
+                serverStats = config['serverStats']
+                serverStats['server'] = config['server'] if 'server' in config else ''
+
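Judging from the keys read here and in updateStats below, the serverStats stanza needs "enabled" and "Port", and the stats host is derived from the top-level "server" URL. An assumed-shape fragment (hostname and port number are illustrative):

    config = {
        "server": "https://cal.example.com:8443",
        "serverStats": {
            "enabled": True,
            "Port": 8100,  # TCP port of the server's stats socket (assumption)
        },
    }
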
         observers = []
         if 'observers' in config:
             for observer in config['observers']:
@@ -324,11 +362,23 @@
             records.extend(namedAny(loader)(**params))
             output.write("Loaded {0} accounts.\n".format(len(records)))
 
-        return cls(server, principalPathTemplate, webadminPort, serializationPath,
-                   arrival, parameters, observers=observers,
-                   records=records, runtime=runtime, reactor=reactor,
-                   workers=workers, configTemplate=configTemplate,
-                   workerID=workerID, workerCount=workerCount)
+        return cls(
+            server,
+            principalPathTemplate,
+            webadminPort,
+            serverStats,
+            serializationPath,
+            arrival,
+            parameters,
+            observers=observers,
+            records=records,
+            runtime=runtime,
+            reactor=reactor,
+            workers=workers,
+            configTemplate=configTemplate,
+            workerID=workerID,
+            workerCount=workerCount,
+        )
 
 
     @classmethod
@@ -409,7 +459,7 @@
     def run(self, output=stdout):
         self.attachServices(output)
         if self.runtime is not None:
-            self.reactor.callLater(self.runtime, self.reactor.stop)
+            self.reactor.callLater(self.runtime, self.stopAndReport)
         if self.webadminPort:
             self.reactor.listenTCP(self.webadminPort, server.Site(LoadSimAdminResource(self)))
         self.reactor.run()
@@ -417,16 +467,65 @@
 
     def stop(self):
         if self.ms.running:
+            self.updateStats()
             self.ms.stopService()
-            self.reactor.callLater(5, self.reactor.stop)
+            self.reactor.callLater(5, self.stopAndReport)
 
 
     def shutdown(self):
         if self.ms.running:
+            self.updateStats()
             return self.ms.stopService()
 
 
+    def updateStats(self):
+        """
+        Capture and report server statistics, if enabled.
+        """
 
+        if self.serverStats is not None:
+            _ignore_scheme, hostname, _ignore_path, _ignore_query, _ignore_fragment = urlsplit(self.serverStats["server"])
+            data = self.readStatsSock((hostname.split(":")[0], self.serverStats["Port"],), True)
+            if "Failed" not in data:
+                data = data["5 Minutes"]
+                result = (
+                    safeDivision(float(data["requests"]), 5 * 60),
+                    safeDivision(data["t"], data["requests"]),
+                    safeDivision(float(data["slots"]), data["requests"]),
+                    safeDivision(data["cpu"], data["requests"]),
+                )
+                msg(type="sim-expired", reason=result)
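The reason tuple boils the server's "5 Minutes" bucket down to four averages: request rate, response time per request, slots per request, and CPU per request. A worked example with made-up counter values:

    def safeDivision(value, total, factor=1):
        return value * factor / total if total else 0

    data = {"requests": 600, "t": 30000.0, "slots": 1200.0, "cpu": 90.0}
    result = (
        safeDivision(float(data["requests"]), 5 * 60),          # 2.0 requests/sec
        safeDivision(data["t"], data["requests"]),               # 50.0 response time per request
        safeDivision(float(data["slots"]), data["requests"]),    # 2.0 slots per request
        safeDivision(data["cpu"], data["requests"]),             # 0.15 CPU per request
    )
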
+
+
+    def stopAndReport(self):
+        """
+        Runtime has expired - capture server stats and stop.
+        """
+
+        self.updateStats()
+        self.reactor.stop()
+
+
+    def readStatsSock(self, sockname, useTCP):
+        try:
+            s = socket.socket(socket.AF_INET if useTCP else socket.AF_UNIX, socket.SOCK_STREAM)
+            s.connect(sockname)
+            data = ""
+            while True:
+                d = s.recv(1024)
+                if d:
+                    data += d
+                else:
+                    break
+            s.close()
+            data = json.loads(data)
+        except (socket.error, ValueError):
+            data = {"Failed": "Unable to read statistics from server: %s" % (sockname,)}
+        data["Server"] = sockname
+        return data
+
+
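readStatsSock speaks a deliberately simple protocol: connect, read until the server closes the connection, then parse the whole payload as JSON. A hypothetical standalone equivalent for the TCP case (host and port are assumptions; Python 2, matching this module):

    import json
    import socket

    def read_stats(host, port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((host, port))
        chunks = []
        while True:
            chunk = s.recv(1024)
            if not chunk:
                break
            chunks.append(chunk)
        s.close()
        return json.loads("".join(chunks))

    # e.g. read_stats("localhost", 8100)["5 Minutes"]["requests"]
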
+
 def attachService(reactor, loadsim, service):
     """
     Attach a given L{IService} provider to the given L{IReactorCore}; cause it
@@ -557,7 +656,6 @@
 
 
     def errReceived(self, error):
-        from twisted.python.log import msg
         msg("stderr received from " + str(self.transport.pid))
         msg("    " + repr(error))
 

Deleted: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/events-only.plist
===================================================================
--- CalendarServer/trunk/contrib/performance/loadtest/standard-configs/events-only.plist	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/events-only.plist	2013-11-01 22:25:30 UTC (rev 11871)
@@ -1,440 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-
-<!--
-    Copyright (c) 2011-2012 Apple Inc. All rights reserved.
-
-    Licensed under the Apache License, Version 2.0 (the "License");
-    you may not use this file except in compliance with the License.
-    You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
-  -->
-
-<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
-<plist version="1.0">
-	<dict>
-		<!-- Define the kinds of software and user behavior the load simulation
-			will simulate. -->
-		<key>clients</key>
-
-		<!-- Have as many different kinds of software and user behavior configurations
-			as you want. Each is a dict -->
-		<array>
-
-			<dict>
-
-				<!-- Here is a Lion iCal simulator. -->
-				<key>software</key>
-				<string>contrib.performance.loadtest.ical.OS_X_10_7</string>
-
-				<!-- Arguments to use to initialize the client instance. -->
-				<key>params</key>
-				<dict>
-					<!-- Name that appears in logs. -->
-					<key>title</key>
-					<string>10.7</string>
-
-					<!-- Client can poll the calendar home at some interval. This is 
-						in seconds. -->
-					<key>calendarHomePollInterval</key>
-					<integer>300000</integer>
-
-					<!-- If the server advertises xmpp push, OS X 10.6 can wait for notifications 
-						about calendar home changes instead of polling for them periodically. If 
-						this option is true, then look for the server advertisement for xmpp push 
-						and use it if possible. Still fall back to polling if there is no xmpp push 
-						advertised. -->
-					<key>supportPush</key>
-					<false />
-					<key>supportAmpPush</key>
-					<false />
-				</dict>
-
-				<!-- The profiles define certain types of user behavior on top of the 
-					client software being simulated. -->
-				<key>profiles</key>
-				<array>
-
-					<!-- First an event-creating profile, which will periodically create 
-						new events at a random time on a random calendar. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Eventer</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<true/>
-
-							<!-- Define the interval (in seconds) at which this profile will use 
-								its client to create a new event. -->
-							<key>interval</key>
-							<integer>20</integer>
-
-							<!-- Define how start times (DTSTART) for the randomly generated events 
-								will be selected. This is an example of a "Distribution" parameter. The values
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>eventStartDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-
-							<!-- Define how recurrences are created. -->
-							<key>recurrenceDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized.  We have a fixed set of
-								     RRULEs defined for this distribution and pick each based on a
-								     weight. -->
-								<key>type</key>
-								<string>contrib.performance.stats.RecurrenceDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- False to disable RRULEs -->
-									<key>allowRecurrence</key>
-									<false/>
-
-									<!-- These are the weights for the specific set of RRULEs. -->
-									<key>weights</key>
-									<dict>
-										<!-- Half of all events will be non-recurring -->
-										<key>none</key>
-										<integer>50</integer>
-										
-										<!-- Daily and weekly are pretty common -->
-										<key>daily</key>
-										<integer>10</integer>
-										<key>weekly</key>
-										<integer>20</integer>
-										
-										<!-- Monthly, yearly, daily & weekly limit not so common -->
-										<key>monthly</key>
-										<integer>2</integer>
-										<key>yearly</key>
-										<integer>1</integer>
-										<key>dailylimit</key>
-										<integer>2</integer>
-										<key>weeklylimit</key>
-										<integer>5</integer>
-										
-										<!-- Work days pretty common -->
-										<key>workdays</key>
-										<integer>10</integer>
-									</dict>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- This profile invites some number of new attendees to new events. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.RealisticInviter</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<false/>
-
-							<!-- Define the frequency at which new invitations will be sent out. -->
-							<key>sendInvitationDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.NormalDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- mu gives the mean of the normal distribution (in seconds). -->
-									<key>mu</key>
-									<integer>10</integer>
-
-									<!-- and sigma gives its standard deviation. -->
-									<key>sigma</key>
-									<integer>5</integer>
-								</dict>
-							</dict>
-
-							<!-- Define the distribution of who will be invited to an event.
-							
-								When inviteeClumping is turned on each invitee is based on a sample of
-								users "close to" the organizer based on account index. If the clumping
-								is too "tight" for the requested number of attendees, then invites for
-								those larger numbers will simply fail (the sim will report that situation).
-								
-								When inviteeClumping is off invitees will be sampled across an entire
-								range of account indexes. In this case the distribution ought to be a
-								UniformIntegerDistribution with min=0 and max set to the number of accounts.
-							-->
-							<key>inviteeDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.UniformIntegerDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- The minimum value (inclusive) of the uniform distribution. -->
-									<key>min</key>
-									<integer>0</integer>
-									<!-- The maximum value (exclusive) of the uniform distribution. -->
-									<key>max</key>
-									<integer>99</integer>
-								</dict>
-							</dict>
-
-							<key>inviteeClumping</key>
-							<true/>
-
-							<!-- Define the distribution of how many attendees will be invited to an event.
-							
-								LogNormal is the best fit to observed data.
-
-
-								For LogNormal "mode" is the peak and "median" is the median value. For invites,
-								mode should typically be 1, and the median whatever matches the user behavior.
-								Our typical median is 6.
-							     -->
-							<key>inviteeCountDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.LogNormalDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- mode - peak-->
-									<key>mode</key>
-									<integer>1</integer>
-									<!-- median -->
-									<key>median</key>
-									<integer>6</integer>
-									<!-- maximum -->
-									<key>maximum</key>
-									<real>100</real>
-								</dict>
-							</dict>
-
-							<!-- Define how start times (DTSTART) for the randomly generated events 
-								will be selected. This is an example of a "Distribution" parameter. The values
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>eventStartDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-
-							<!-- Define how recurrences are created. -->
-							<key>recurrenceDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized.  We have a fixed set of
-								     RRULEs defined for this distribution and pick each based on a
-								     weight. -->
-								<key>type</key>
-								<string>contrib.performance.stats.RecurrenceDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- False to disable RRULEs -->
-									<key>allowRecurrence</key>
-									<true/>
-
-									<!-- These are the weights for the specific set of RRULEs. -->
-									<key>weights</key>
-									<dict>
-										<!-- Half of all events will be non-recurring -->
-										<key>none</key>
-										<integer>50</integer>
-										
-										<!-- Daily and weekly are pretty common -->
-										<key>daily</key>
-										<integer>10</integer>
-										<key>weekly</key>
-										<integer>20</integer>
-										
-										<!-- Monthly, yearly, daily & weekly limit not so common -->
-										<key>monthly</key>
-										<integer>2</integer>
-										<key>yearly</key>
-										<integer>1</integer>
-										<key>dailylimit</key>
-										<integer>2</integer>
-										<key>weeklylimit</key>
-										<integer>5</integer>
-										
-										<!-- Work days pretty common -->
-										<key>workdays</key>
-										<integer>10</integer>
-									</dict>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- This profile accepts invitations to events, handles cancels, and
-					     handles replies received. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Accepter</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<false/>
-
-							<!-- Define how long to wait after seeing a new invitation before
-								accepting it.
-
-								For LogNormal "mode" is the peak, "median" is the 50% cumulative value
-								(i.e., half of the users have accepted by that time).
-							-->
-							<key>acceptDelayDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.LogNormalDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- mode - peak-->
-									<key>mode</key>
-									<integer>300</integer>
-									<!-- median - 50% done-->
-									<key>median</key>
-									<integer>1800</integer>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- A task-creating profile, which will periodically create 
-						new tasks at a random time on a random calendar. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Tasker</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<false/>
-
-							<!-- Define the interval (in seconds) at which this profile will use 
-								its client to create a new task. -->
-							<key>interval</key>
-							<integer>300</integer>
-
-							<!-- Define how due times (DUE) for the randomly generated tasks 
-								will be selected. This is an example of a "Distribution" parameter. The values
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>taskDueDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-				</array>
-
-				<!-- Determine the frequency at which this client configuration will 
-					appear in the clients which are created by the load tester. -->
-				<key>weight</key>
-				<integer>1</integer>
-			</dict>
-		</array>
-	</dict>
-</plist>

Copied: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/events-only.plist (from rev 11870, CalendarServer/trunk/contrib/performance/loadtest/standard-configs/events-only.plist)
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/events-only.plist	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/events-only.plist	2013-11-01 22:25:30 UTC (rev 11871)
@@ -0,0 +1,440 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+    Copyright (c) 2011-2012 Apple Inc. All rights reserved.
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+  -->
+
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+	<dict>
+		<!-- Define the kinds of software and user behavior the load simulation
+			will simulate. -->
+		<key>clients</key>
+
+		<!-- Have as many different kinds of software and user behavior configurations
+			as you want. Each is a dict -->
+		<array>
+
+			<dict>
+
+				<!-- Here is a Lion iCal simulator. -->
+				<key>software</key>
+				<string>contrib.performance.loadtest.ical.OS_X_10_7</string>
+
+				<!-- Arguments to use to initialize the client instance. -->
+				<key>params</key>
+				<dict>
+					<!-- Name that appears in logs. -->
+					<key>title</key>
+					<string>10.7</string>
+
+					<!-- Client can poll the calendar home at some interval. This is 
+						in seconds. -->
+					<key>calendarHomePollInterval</key>
+					<integer>300000</integer>
+
+					<!-- If the server advertises xmpp push, OS X 10.6 can wait for notifications 
+						about calendar home changes instead of polling for them periodically. If 
+						this option is true, then look for the server advertisement for xmpp push 
+						and use it if possible. Still fall back to polling if there is no xmpp push 
+						advertised. -->
+					<key>supportPush</key>
+					<false />
+					<key>supportAmpPush</key>
+					<false />
+				</dict>
+
+				<!-- The profiles define certain types of user behavior on top of the 
+					client software being simulated. -->
+				<key>profiles</key>
+				<array>
+
+					<!-- First an event-creating profile, which will periodically create 
+						new events at a random time on a random calendar. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Eventer</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<true/>
+
+							<!-- Define the interval (in seconds) at which this profile will use 
+								its client to create a new event. -->
+							<key>interval</key>
+							<integer>20</integer>
+
+							<!-- Define how start times (DTSTART) for the randomly generated events 
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>eventStartDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps 
+									in the near future, limited to certain days of the week and certain hours 
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled 
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this 
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+
+							<!-- Define how recurrences are created. -->
+							<key>recurrenceDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized.  We have a fixed set of
+								     RRULEs defined for this distribution and pick each based on a
+								     weight. -->
+								<key>type</key>
+								<string>contrib.performance.stats.RecurrenceDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- False to disable RRULEs -->
+									<key>allowRecurrence</key>
+									<false/>
+
+									<!-- These are the weights for the specific set of RRULEs. -->
+									<key>weights</key>
+									<dict>
+										<!-- Half of all events will be non-recurring -->
+										<key>none</key>
+										<integer>50</integer>
+										
+										<!-- Daily and weekly are pretty common -->
+										<key>daily</key>
+										<integer>10</integer>
+										<key>weekly</key>
+										<integer>20</integer>
+										
+										<!-- Monthly, yearly, daily & weekly limit not so common -->
+										<key>monthly</key>
+										<integer>2</integer>
+										<key>yearly</key>
+										<integer>1</integer>
+										<key>dailylimit</key>
+										<integer>2</integer>
+										<key>weeklylimit</key>
+										<integer>5</integer>
+										
+										<!-- Work days pretty common -->
+										<key>workdays</key>
+										<integer>10</integer>
+									</dict>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- This profile invites some number of new attendees to new events. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.RealisticInviter</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<false/>
+
+							<!-- Define the frequency at which new invitations will be sent out. -->
+							<key>sendInvitationDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.NormalDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- mu gives the mean of the normal distribution (in seconds). -->
+									<key>mu</key>
+									<integer>10</integer>
+
+									<!-- and sigma gives its standard deviation. -->
+									<key>sigma</key>
+									<integer>5</integer>
+								</dict>
+							</dict>
+
+							<!-- Define the distribution of who will be invited to an event.
+							
+								When inviteeClumping is turned on each invitee is based on a sample of
+								users "close to" the organizer based on account index. If the clumping
+								is too "tight" for the requested number of attendees, then invites for
+								those larger numbers will simply fail (the sim will report that situation).
+								
+								When inviteeClumping is off invitees will be sampled across an entire
+								range of account indexes. In this case the distribution ought to be a
+								UniformIntegerDistribution with min=0 and max set to the number of accounts.
+							-->
+							<key>inviteeDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.UniformIntegerDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- The minimum value (inclusive) of the uniform distribution. -->
+									<key>min</key>
+									<integer>0</integer>
+									<!-- The maximum value (exclusive) of the uniform distribution. -->
+									<key>max</key>
+									<integer>99</integer>
+								</dict>
+							</dict>
+
+							<key>inviteeClumping</key>
+							<true/>
+
+							<!-- Define the distribution of how many attendees will be invited to an event.
+							
+								LogNormal is the best fit to observed data.
+
+
+								For LogNormal "mode" is the peak and "median" is the median value. For invites,
+								mode should typically be 1, and the median whatever matches the user behavior.
+								Our typical median is 6.
+							     -->
+							<key>inviteeCountDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.LogNormalDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- mode - peak-->
+									<key>mode</key>
+									<integer>1</integer>
+									<!-- median -->
+									<key>median</key>
+									<integer>6</integer>
+									<!-- maximum -->
+									<key>maximum</key>
+									<real>100</real>
+								</dict>
+							</dict>
+
+							<!-- Define how start times (DTSTART) for the randomly generated events 
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>eventStartDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps 
+									in the near future, limited to certain days of the week and certain hours 
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled 
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this 
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+
+							<!-- Define how recurrences are created. -->
+							<key>recurrenceDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized.  We have a fixed set of
+								     RRULEs defined for this distribution and pick each based on a
+								     weight. -->
+								<key>type</key>
+								<string>contrib.performance.stats.RecurrenceDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- False to disable RRULEs -->
+									<key>allowRecurrence</key>
+									<true/>
+
+									<!-- These are the weights for the specific set of RRULEs. -->
+									<key>weights</key>
+									<dict>
+										<!-- Half of all events will be non-recurring -->
+										<key>none</key>
+										<integer>50</integer>
+										
+										<!-- Daily and weekly are pretty common -->
+										<key>daily</key>
+										<integer>10</integer>
+										<key>weekly</key>
+										<integer>20</integer>
+										
+										<!-- Monthly, yearly, daily & weekly limit not so common -->
+										<key>monthly</key>
+										<integer>2</integer>
+										<key>yearly</key>
+										<integer>1</integer>
+										<key>dailylimit</key>
+										<integer>2</integer>
+										<key>weeklylimit</key>
+										<integer>5</integer>
+										
+										<!-- Work days pretty common -->
+										<key>workdays</key>
+										<integer>10</integer>
+									</dict>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- This profile accepts invitations to events, handles cancels, and
+					     handles replies received. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Accepter</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<false/>
+
+							<!-- Define how long to wait after seeing a new invitation before
+								accepting it.
+
+								For LogNormal "mode" is the peak, "median" is the 50% cumulative value
+								(i.e., half of the users have accepted by that time).
+							-->
+							<key>acceptDelayDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.LogNormalDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- mode - peak-->
+									<key>mode</key>
+									<integer>300</integer>
+									<!-- median - 50% done-->
+									<key>median</key>
+									<integer>1800</integer>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- A task-creating profile, which will periodically create 
+						new tasks at a random time on a random calendar. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Tasker</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<false/>
+
+							<!-- Define the interval (in seconds) at which this profile will use 
+								its client to create a new task. -->
+							<key>interval</key>
+							<integer>300</integer>
+
+							<!-- Define how due times (DUE) for the randomly generated tasks 
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>taskDueDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps 
+									in the near future, limited to certain days of the week and certain hours 
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled 
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this 
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+				</array>
+
+				<!-- Determine the frequency at which this client configuration will 
+					appear in the clients which are created by the load tester. -->
+				<key>weight</key>
+				<integer>1</integer>
+			</dict>
+		</array>
+	</dict>
+</plist>
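
Since these standard configs are ordinary plists, they can be inspected with the same parser the simulator uses; a quick sketch for summarising which profiles a config enables (path taken from this changeset, output format illustrative):

    from plistlib import readPlist

    config = readPlist("contrib/performance/loadtest/standard-configs/events-only.plist")
    for client in config["clients"]:
        print("%s (weight %d)" % (client["software"], client["weight"]))
        for profile in client["profiles"]:
            print("  %s enabled=%s" % (profile["class"], profile["params"]["enabled"]))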

Deleted: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-accepts.plist
===================================================================
--- CalendarServer/trunk/contrib/performance/loadtest/standard-configs/invites-accepts.plist	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-accepts.plist	2013-11-01 22:25:30 UTC (rev 11871)
@@ -1,419 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-
-<!--
-    Copyright (c) 2011-2012 Apple Inc. All rights reserved.
-
-    Licensed under the Apache License, Version 2.0 (the "License");
-    you may not use this file except in compliance with the License.
-    You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
-  -->
-
-<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
-<plist version="1.0">
-	<dict>
-		<!-- Define the kinds of software and user behavior the load simulation
-			will simulate. -->
-		<key>clients</key>
-
-		<!-- Have as many different kinds of software and user behavior configurations
-			as you want. Each is a dict -->
-		<array>
-
-			<dict>
-
-				<!-- Here is a Lion iCal simulator. -->
-				<key>software</key>
-				<string>contrib.performance.loadtest.ical.OS_X_10_7</string>
-
-				<!-- Arguments to use to initialize the client instance. -->
-				<key>params</key>
-				<dict>
-					<!-- Name that appears in logs. -->
-					<key>title</key>
-					<string>10.7</string>
-
-					<!-- Client can poll the calendar home at some interval. This is 
-						in seconds. -->
-					<key>calendarHomePollInterval</key>
-					<integer>300000</integer>
-
-					<!-- If the server advertises xmpp push, OS X 10.6 can wait for notifications 
-						about calendar home changes instead of polling for them periodically. If 
-						this option is true, then look for the server advertisement for xmpp push 
-						and use it if possible. Still fall back to polling if there is no xmpp push 
-						advertised. -->
-					<key>supportPush</key>
-					<false />
-					<key>supportAmpPush</key>
-					<true />
-				</dict>
-
-				<!-- The profiles define certain types of user behavior on top of the 
-					client software being simulated. -->
-				<key>profiles</key>
-				<array>
-
-					<!-- First an event-creating profile, which will periodically create 
-						new events at a random time on a random calendar. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Eventer</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<false/>
-
-							<!-- Define the interval (in seconds) at which this profile will use 
-								its client to create a new event. -->
-							<key>interval</key>
-							<integer>20</integer>
-
-							<!-- Define how start times (DTSTART) for the randomly generated events 
-								will be selected. This is an example of a "Distribution" parameter. The values
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>eventStartDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-
-							<!-- Define how recurrences are created. -->
-							<key>recurrenceDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized.  We have a fixed set of
-								     RRULEs defined for this distribution and pick each based on a
-								     weight. -->
-								<key>type</key>
-								<string>contrib.performance.stats.RecurrenceDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- False to disable RRULEs -->
-									<key>allowRecurrence</key>
-									<false/>
-
-									<!-- These are the weights for the specific set of RRULEs. -->
-									<key>weights</key>
-									<dict>
-										<!-- Half of all events will be non-recurring -->
-										<key>none</key>
-										<integer>50</integer>
-										
-										<!-- Daily and weekly are pretty common -->
-										<key>daily</key>
-										<integer>10</integer>
-										<key>weekly</key>
-										<integer>20</integer>
-										
-										<!-- Monthly, yearly, daily & weekly limit not so common -->
-										<key>monthly</key>
-										<integer>2</integer>
-										<key>yearly</key>
-										<integer>1</integer>
-										<key>dailylimit</key>
-										<integer>2</integer>
-										<key>weeklylimit</key>
-										<integer>5</integer>
-										
-										<!-- Work days pretty common -->
-										<key>workdays</key>
-										<integer>10</integer>
-									</dict>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- This profile invites some number of new attendees to new events. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.RealisticInviter</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<true/>
-
-							<!-- Define the frequency at which new invitations will be sent out. -->
-							<key>sendInvitationDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.FixedDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- interval (in seconds). -->
-									<key>value</key>
-									<integer>150</integer>
-								</dict>
-							</dict>
-
-							<!-- Define the distribution of who will be invited to an event.
-							
-								When inviteeClumping is turned on each invitee is based on a sample of
-								users "close to" the organizer based on account index. If the clumping
-								is too "tight" for the requested number of attendees, then invites for
-								those larger numbers will simply fail (the sim will report that situation).
-								
-								When inviteeClumping is off invitees will be sampled across an entire
-								range of account indexes. In this case the distribution ought to be a
-								UniformIntegerDistribution with min=0 and max set to the number of accounts.
-							-->
-							<key>inviteeDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.UniformIntegerDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- The minimum value (inclusive) of the uniform distribution. -->
-									<key>min</key>
-									<integer>0</integer>
-									<!-- The maximum value (exclusive) of the uniform distribution. -->
-									<key>max</key>
-									<integer>99</integer>
-								</dict>
-							</dict>
-
-							<key>inviteeClumping</key>
-							<true/>
-
-							<!-- Define the distribution of how many attendees will be invited to an event.
-							
-								LogNormal is the best fit to observed data.
-
-
-								For LogNormal "mode" is the peak, "mean" is the mean value. For invites,
-								mode should typically be 1, and mean whatever matches the user behavior.
-								Our typical mean is 6.
-							     -->
-							<key>inviteeCountDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.FixedDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- Number of attendees. -->
-									<key>value</key>
-									<integer>5</integer>
-								</dict>
-							</dict>
-
-							<!-- Define how start times (DTSTART) for the randomly generated events 
-								will be selected. This is an example of a "Distribution" parameter. The values
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>eventStartDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-
-							<!-- Define how recurrences are created. -->
-							<key>recurrenceDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized.  We have a fixed set of
-								     RRULEs defined for this distribution and pick each based on a
-								     weight. -->
-								<key>type</key>
-								<string>contrib.performance.stats.RecurrenceDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- False to disable RRULEs -->
-									<key>allowRecurrence</key>
-									<false/>
-
-									<!-- These are the weights for the specific set of RRULEs. -->
-									<key>weights</key>
-									<dict>
-										<!-- Daily and weekly are pretty common -->
-										<key>daily</key>
-										<integer>100</integer>
-									</dict>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- This profile accepts invitations to events, handles cancels, and
-					     handles replies received. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Accepter</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<true/>
-
-							<!-- Define how long to wait after seeing a new invitation before
-								accepting it.
-
-								For LogNormal "mode" is the peak, "median" is the 50% cumulative value
-								(i.e., half of the users have accepted by that time).
-							-->
-							<key>acceptDelayDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.UniformDiscreteDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- Set of values to use - will be chosen in random order. -->
-									<key>values</key>
-									<array>
-										<integer>0</integer>
-										<integer>5</integer>
-										<integer>10</integer>
-										<integer>15</integer>
-										<integer>20</integer>
-										<integer>25</integer>
-										<integer>30</integer>
-									</array>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- A task-creating profile, which will periodically create 
-						new tasks at a random time on a random calendar. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Tasker</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<false/>
-
-							<!-- Define the interval (in seconds) at which this profile will use 
-								its client to create a new task. -->
-							<key>interval</key>
-							<integer>300</integer>
-
-							<!-- Define how due times (DUE) for the randomly generated tasks 
-								will be selected. This is an example of a "Distribution" parameter. The values
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>taskDueDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-				</array>
-
-				<!-- Determine the frequency at which this client configuration will 
-					appear in the clients which are created by the load tester. -->
-				<key>weight</key>
-				<integer>1</integer>
-			</dict>
-		</array>
-
-		<!-- Determine the interval between client creation. -->
-		<key>arrivalInterval</key>
-		<integer>5</integer>
-	</dict>
-</plist>

Copied: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-accepts.plist (from rev 11870, CalendarServer/trunk/contrib/performance/loadtest/standard-configs/invites-accepts.plist)
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-accepts.plist	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-accepts.plist	2013-11-01 22:25:30 UTC (rev 11871)
@@ -0,0 +1,419 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+    Copyright (c) 2011-2012 Apple Inc. All rights reserved.
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+  -->
+
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+	<dict>
+		<!-- Define the kinds of software and user behavior the load simulation
+			will simulate. -->
+		<key>clients</key>
+
+		<!-- Have as many different kinds of software and user behavior configurations
+			as you want. Each is a dict -->
+		<array>
+
+			<dict>
+
+				<!-- Here is a Lion iCal simulator. -->
+				<key>software</key>
+				<string>contrib.performance.loadtest.ical.OS_X_10_7</string>
+
+				<!-- Arguments to use to initialize the client instance. -->
+				<key>params</key>
+				<dict>
+					<!-- Name that appears in logs. -->
+					<key>title</key>
+					<string>10.7</string>
+
+					<!-- Client can poll the calendar home at some interval. This is 
+						in seconds. -->
+					<key>calendarHomePollInterval</key>
+					<integer>300000</integer>
+
+					<!-- If the server advertises xmpp push, OS X 10.6 can wait for notifications 
+						about calendar home changes instead of polling for them periodically. If 
+						this option is true, then look for the server advertisement for xmpp push 
+						and use it if possible. Still fall back to polling if there is no xmpp push 
+						advertised. -->
+					<key>supportPush</key>
+					<false />
+					<key>supportAmpPush</key>
+					<true />
+				</dict>
+
+				<!-- The profiles define certain types of user behavior on top of the 
+					client software being simulated. -->
+				<key>profiles</key>
+				<array>
+
+					<!-- First an event-creating profile, which will periodically create 
+						new events at a random time on a random calendar. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Eventer</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<false/>
+
+							<!-- Define the interval (in seconds) at which this profile will use 
+								its client to create a new event. -->
+							<key>interval</key>
+							<integer>20</integer>
+
+							<!-- Define how start times (DTSTART) for the randomly generated events 
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>eventStartDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps 
+									in the near future, limited to certain days of the week and certain hours 
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled 
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this 
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+
+							<!-- Define how recurrences are created. -->
+							<key>recurrenceDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized.  We have a fixed set of
+								     RRULEs defined for this distribution and pick each based on a
+								     weight. -->
+								<key>type</key>
+								<string>contrib.performance.stats.RecurrenceDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- False to disable RRULEs -->
+									<key>allowRecurrence</key>
+									<false/>
+
+									<!-- These are the weights for the specific set of RRULEs. -->
+									<key>weights</key>
+									<dict>
+										<!-- Half of all events will be non-recurring -->
+										<key>none</key>
+										<integer>50</integer>
+										
+										<!-- Daily and weekly are pretty common -->
+										<key>daily</key>
+										<integer>10</integer>
+										<key>weekly</key>
+										<integer>20</integer>
+										
+										<!-- Monthly, yearly, daily & weekly limit not so common -->
+										<key>monthly</key>
+										<integer>2</integer>
+										<key>yearly</key>
+										<integer>1</integer>
+										<key>dailylimit</key>
+										<integer>2</integer>
+										<key>weeklylimit</key>
+										<integer>5</integer>
+										
+										<!-- Work days pretty common -->
+										<key>workdays</key>
+										<integer>10</integer>
+									</dict>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- This profile invites some number of new attendees to new events. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.RealisticInviter</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<true/>
+
+							<!-- Define the frequency at which new invitations will be sent out. -->
+							<key>sendInvitationDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.FixedDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- interval (in seconds). -->
+									<key>value</key>
+									<integer>150</integer>
+								</dict>
+							</dict>
+
+							<!-- Define the distribution of who will be invited to an event.
+							
+								When inviteeClumping is turned on, each invitee is drawn from a sample of
+								users "close to" the organizer, based on account index. If the clumping
+								is too "tight" for the requested number of attendees, then invites for
+								those larger numbers will simply fail (the sim will report that situation).
+
+								When inviteeClumping is off, invitees will be sampled across the entire
+								range of account indexes. In this case the distribution ought to be a
+								UniformIntegerDistribution with min=0 and max set to the number of accounts.
+							-->
+							<key>inviteeDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.UniformIntegerDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- The minimum value (inclusive) of the uniform distribution. -->
+									<key>min</key>
+									<integer>0</integer>
+									<!-- The maximum value (exclusive) of the uniform distribution. -->
+									<key>max</key>
+									<integer>99</integer>
+								</dict>
+							</dict>
+
+							<key>inviteeClumping</key>
+							<true/>
+
+							<!-- Define the distribution of how many attendees will be invited to an event.
+							
+								LogNormal is the best fit to observed data.
+
+
+								For LogNormal "mode" is the peak, "mean" is the mean value. For invites,
+								mode should typically be 1, and mean whatever matches the user behavior.
+								Our typical mean is 6.
+							     -->
+							<key>inviteeCountDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.FixedDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- Number of attendees. -->
+									<key>value</key>
+									<integer>5</integer>
+								</dict>
+							</dict>
+
+							<!-- Define how start times (DTSTART) for the randomly generated events 
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>eventStartDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps 
+									in the near future, limited to certain days of the week and certain hours 
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled 
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this 
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+
+							<!-- Define how recurrences are created. -->
+							<key>recurrenceDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized.  We have a fixed set of
+								     RRULEs defined for this distribution and pick one based on its
+								     weight. -->
+								<key>type</key>
+								<string>contrib.performance.stats.RecurrenceDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- False to disable RRULEs -->
+									<key>allowRecurrence</key>
+									<false/>
+
+									<!-- These are the weights for the specific set of RRULEs. -->
+									<key>weights</key>
+									<dict>
+										<!-- Daily and weekly are pretty common -->
+										<key>daily</key>
+										<integer>100</integer>
+									</dict>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- This profile accepts invitations to events, handles cancels, and
+					     handles replies received. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Accepter</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<true/>
+
+							<!-- Define how long to wait after seeing a new invitation before
+								accepting it.
+
+								For LogNormal "mode" is the peak, "median" is the 50% cumulative value
+								(i.e., half of the users have accepted by that time).
+							-->
+							<key>acceptDelayDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.UniformDiscreteDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- Set of values to use - will be chosen in random order. -->
+									<key>values</key>
+									<array>
+										<integer>0</integer>
+										<integer>5</integer>
+										<integer>10</integer>
+										<integer>15</integer>
+										<integer>20</integer>
+										<integer>25</integer>
+										<integer>30</integer>
+									</array>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- A task-creating profile, which will periodically create 
+						new tasks at a random time on a random calendar. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Tasker</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<false/>
+
+							<!-- Define the interval (in seconds) at which this profile will use 
+								its client to create a new task. -->
+							<key>interval</key>
+							<integer>300</integer>
+
+							<!-- Define how due times (DUE) for the randomly generated tasks 
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>taskDueDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps 
+									in the near future, limited to certain days of the week and certain hours 
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled 
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this 
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+				</array>
+
+				<!-- Determine the frequency at which this client configuration will 
+					appear among the clients created by the load tester. -->
+				<key>weight</key>
+				<integer>1</integer>
+			</dict>
+		</array>
+
+		<!-- Determine the interval between client creation. -->
+		<key>arrivalInterval</key>
+		<integer>5</integer>
+	</dict>
+</plist>

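The recurrenceDistribution blocks in the config above amount to a weighted
random choice over a fixed set of named RRULEs. A minimal sketch of that
behavior in Python, with illustrative RRULE strings and helper names (the
actual contrib.performance.stats.RecurrenceDistribution code may differ):

    import random

    # Illustrative RRULE strings; assumed, not taken from the sim's code.
    RRULES = {
        "none": None,  # non-recurring event
        "daily": "RRULE:FREQ=DAILY",
        "weekly": "RRULE:FREQ=WEEKLY",
        "monthly": "RRULE:FREQ=MONTHLY",
        "yearly": "RRULE:FREQ=YEARLY",
        "dailylimit": "RRULE:FREQ=DAILY;COUNT=5",    # "limit" variants assumed COUNT-bounded
        "weeklylimit": "RRULE:FREQ=WEEKLY;COUNT=5",
        "workdays": "RRULE:FREQ=DAILY;BYDAY=MO,TU,WE,TH,FR",
    }

    def pick_rrule(weights, allow_recurrence=True):
        # allowRecurrence=false suppresses all RRULEs.
        if not allow_recurrence:
            return None
        names = list(weights)
        name = random.choices(names, weights=[weights[n] for n in names])[0]
        return RRULES[name]

    # Eventer weights from the plist above: "none" carries 50 of the 100
    # total, so about half of all generated events are non-recurring.
    weights = {"none": 50, "daily": 10, "weekly": 20, "monthly": 2,
               "yearly": 1, "dailylimit": 2, "weeklylimit": 5, "workdays": 10}
    print(pick_rrule(weights))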
Deleted: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only-recurring.plist
===================================================================
--- CalendarServer/trunk/contrib/performance/loadtest/standard-configs/invites-only-recurring.plist	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only-recurring.plist	2013-11-01 22:25:30 UTC (rev 11871)
@@ -1,414 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-
-<!--
-    Copyright (c) 2011-2012 Apple Inc. All rights reserved.
-
-    Licensed under the Apache License, Version 2.0 (the "License");
-    you may not use this file except in compliance with the License.
-    You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
-  -->
-
-<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
-<plist version="1.0">
-	<dict>
-		<!-- Define the kinds of software and user behavior the load simulation
-			will simulate. -->
-		<key>clients</key>
-
-		<!-- Have as many different kinds of software and user behavior configurations
-			as you want. Each is a dict. -->
-		<array>
-
-			<dict>
-
-				<!-- Here is a Lion iCal simulator. -->
-				<key>software</key>
-				<string>contrib.performance.loadtest.ical.OS_X_10_7</string>
-
-				<!-- Arguments to use to initialize the client instance. -->
-				<key>params</key>
-				<dict>
-					<!-- Name that appears in logs. -->
-					<key>title</key>
-					<string>10.7</string>
-
-					<!-- Client can poll the calendar home at some interval. This is 
-						in seconds. -->
-					<key>calendarHomePollInterval</key>
-					<integer>300000</integer>
-
-					<!-- If the server advertises xmpp push, the simulated client can wait for notifications 
-						about calendar home changes instead of polling for them periodically. If 
-						this option is true, then look for the server advertisement for xmpp push 
-						and use it if possible. Still fall back to polling if there is no xmpp push 
-						advertised. -->
-					<key>supportPush</key>
-					<false />
-					<key>supportAmpPush</key>
-					<false />
-				</dict>
-
-				<!-- The profiles define certain types of user behavior on top of the 
-					client software being simulated. -->
-				<key>profiles</key>
-				<array>
-
-					<!-- First an event-creating profile, which will periodically create 
-						new events at a random time on a random calendar. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Eventer</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<false/>
-
-							<!-- Define the interval (in seconds) at which this profile will use 
-								its client to create a new event. -->
-							<key>interval</key>
-							<integer>20</integer>
-
-							<!-- Define how start times (DTSTART) for the randomly generated events 
-								will be selected. This is an example of a "Distribution" parameter. The values
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>eventStartDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-
-							<!-- Define how recurrences are created. -->
-							<key>recurrenceDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized.  We have a fixed set of
-								     RRULEs defined for this distribution and pick one based on its
-								     weight. -->
-								<key>type</key>
-								<string>contrib.performance.stats.RecurrenceDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- False to disable RRULEs -->
-									<key>allowRecurrence</key>
-									<false/>
-
-									<!-- These are the weights for the specific set of RRULEs. -->
-									<key>weights</key>
-									<dict>
-										<!-- Half of all events will be non-recurring -->
-										<key>none</key>
-										<integer>50</integer>
-										
-										<!-- Daily and weekly are pretty common -->
-										<key>daily</key>
-										<integer>10</integer>
-										<key>weekly</key>
-										<integer>20</integer>
-										
-										<!-- Monthly, yearly, daily & weekly limit not so common -->
-										<key>monthly</key>
-										<integer>2</integer>
-										<key>yearly</key>
-										<integer>1</integer>
-										<key>dailylimit</key>
-										<integer>2</integer>
-										<key>weeklylimit</key>
-										<integer>5</integer>
-										
-										<!-- Work days pretty common -->
-										<key>workdays</key>
-										<integer>10</integer>
-									</dict>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- This profile invites some number of new attendees to new events. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.RealisticInviter</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<true/>
-
-							<!-- Define the frequency at which new invitations will be sent out. -->
-							<key>sendInvitationDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.FixedDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- interval (in seconds). -->
-									<key>value</key>
-									<integer>120</integer>
-								</dict>
-							</dict>
-
-							<!-- Define the distribution of who will be invited to an event.
-							
-								When inviteeClumping is turned on, each invitee is drawn from a sample of
-								users "close to" the organizer, based on account index. If the clumping
-								is too "tight" for the requested number of attendees, then invites for
-								those larger numbers will simply fail (the sim will report that situation).
-
-								When inviteeClumping is off, invitees will be sampled across the entire
-								range of account indexes. In this case the distribution ought to be a
-								UniformIntegerDistribution with min=0 and max set to the number of accounts.
-							-->
-							<key>inviteeDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.UniformIntegerDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- The minimum value (inclusive) of the uniform distribution. -->
-									<key>min</key>
-									<integer>0</integer>
-									<!-- The maximum value (exclusive) of the uniform distribution. -->
-									<key>max</key>
-									<integer>99</integer>
-								</dict>
-							</dict>
-
-							<key>inviteeClumping</key>
-							<true/>
-
-							<!-- Define the distribution of how many attendees will be invited to an event.
-							
-								LogNormal is the best fit to observed data.
-
-
-								For LogNormal "mode" is the peak, "mean" is the mean value. For invites,
-								mode should typically be 1, and mean whatever matches the user behavior.
-								Our typical mean is 6.
-							     -->
-							<key>inviteeCountDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.FixedDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- Number of attendees. -->
-									<key>value</key>
-									<integer>5</integer>
-								</dict>
-							</dict>
-
-							<!-- Define how start times (DTSTART) for the randomly generated events 
-								will be selected. This is an example of a "Distribution" parameter. The values
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>eventStartDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-
-							<!-- Define how recurrences are created. -->
-							<key>recurrenceDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized.  We have a fixed set of
-								     RRULEs defined for this distribution and pick one based on its
-								     weight. -->
-								<key>type</key>
-								<string>contrib.performance.stats.RecurrenceDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- False to disable RRULEs -->
-									<key>allowRecurrence</key>
-									<true/>
-
-									<!-- These are the weights for the specific set of RRULEs. -->
-									<key>weights</key>
-									<dict>
-										<!-- Daily and weekly are pretty common -->
-										<key>daily</key>
-										<integer>100</integer>
-									</dict>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- This profile accepts invitations to events, handles cancels, and
-					     handles replies received. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Accepter</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<false/>
-
-							<!-- Define how long to wait after seeing a new invitation before
-								accepting it.
-
-								For LogNormal "mode" is the peak, "median" is the 50% cumulative value
-								(i.e., half of the users have accepted by that time).
-							-->
-							<key>acceptDelayDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.LogNormalDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- mode - peak-->
-									<key>mode</key>
-									<integer>300</integer>
-									<!-- median - 50% done-->
-									<key>median</key>
-									<integer>1800</integer>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- A task-creating profile, which will periodically create 
-						new tasks at a random time on a random calendar. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Tasker</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<false/>
-
-							<!-- Define the interval (in seconds) at which this profile will use 
-								its client to create a new task. -->
-							<key>interval</key>
-							<integer>300</integer>
-
-							<!-- Define how due times (DUE) for the randomly generated tasks 
-								will be selected. This is an example of a "Distribution" parameter. The values
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>taskDueDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-				</array>
-
-				<!-- Determine the frequency at which this client configuration will 
-					appear among the clients created by the load tester. -->
-				<key>weight</key>
-				<integer>1</integer>
-			</dict>
-		</array>
-
-		<!-- Determine the interval between client creation. -->
-		<key>arrivalInterval</key>
-		<integer>4</integer>
-	</dict>
-</plist>

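The acceptDelayDistribution in the file above is parameterized by mode and
median rather than the usual mu/sigma. For a lognormal, median = exp(mu) and
mode = exp(mu - sigma^2), so mu = ln(median) and sigma = sqrt(ln(median/mode)).
A sketch of that mapping, assuming this is the intended parameterization (the
contrib.performance.stats.LogNormalDistribution implementation is not shown
here):

    import math
    import random

    def lognormal_delay(mode, median):
        # For a lognormal: median = exp(mu), mode = exp(mu - sigma**2),
        # hence mu = ln(median) and sigma = sqrt(ln(median / mode)).
        if not 0 < mode <= median:
            raise ValueError("need 0 < mode <= median")
        mu = math.log(median)
        sigma = math.sqrt(math.log(median / mode))
        return random.lognormvariate(mu, sigma)

    # Plist values above: accept delays peak near 300 seconds after the
    # invitation is seen, and half of the users have accepted by 1800 seconds.
    print(lognormal_delay(mode=300, median=1800))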
Copied: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only-recurring.plist (from rev 11870, CalendarServer/trunk/contrib/performance/loadtest/standard-configs/invites-only-recurring.plist)
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only-recurring.plist	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only-recurring.plist	2013-11-01 22:25:30 UTC (rev 11871)
@@ -0,0 +1,414 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+    Copyright (c) 2011-2012 Apple Inc. All rights reserved.
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+  -->
+
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+	<dict>
+		<!-- Define the kinds of software and user behavior the load simulation
+			will simulate. -->
+		<key>clients</key>
+
+		<!-- Have as many different kinds of software and user behavior configurations
+			as you want. Each is a dict. -->
+		<array>
+
+			<dict>
+
+				<!-- Here is a Lion iCal simulator. -->
+				<key>software</key>
+				<string>contrib.performance.loadtest.ical.OS_X_10_7</string>
+
+				<!-- Arguments to use to initialize the client instance. -->
+				<key>params</key>
+				<dict>
+					<!-- Name that appears in logs. -->
+					<key>title</key>
+					<string>10.7</string>
+
+					<!-- Client can poll the calendar home at some interval. This is 
+						in seconds. -->
+					<key>calendarHomePollInterval</key>
+					<integer>300000</integer>
+
+					<!-- If the server advertises xmpp push, the simulated client can wait for notifications 
+						about calendar home changes instead of polling for them periodically. If 
+						this option is true, then look for the server advertisement for xmpp push 
+						and use it if possible. Still fall back to polling if there is no xmpp push 
+						advertised. -->
+					<key>supportPush</key>
+					<false />
+					<key>supportAmpPush</key>
+					<false />
+				</dict>
+
+				<!-- The profiles define certain types of user behavior on top of the 
+					client software being simulated. -->
+				<key>profiles</key>
+				<array>
+
+					<!-- First an event-creating profile, which will periodically create 
+						new events at a random time on a random calendar. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Eventer</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<false/>
+
+							<!-- Define the interval (in seconds) at which this profile will use 
+								its client to create a new event. -->
+							<key>interval</key>
+							<integer>20</integer>
+
+							<!-- Define how start times (DTSTART) for the randomly generated events 
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>eventStartDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps 
+									in the near future, limited to certain days of the week and certain hours 
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled 
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this 
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+
+							<!-- Define how recurrences are created. -->
+							<key>recurrenceDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized.  We have a fixed set of
+								     RRULEs defined for this distribution and pick one based on its
+								     weight. -->
+								<key>type</key>
+								<string>contrib.performance.stats.RecurrenceDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- False to disable RRULEs -->
+									<key>allowRecurrence</key>
+									<false/>
+
+									<!-- These are the weights for the specific set of RRULEs. -->
+									<key>weights</key>
+									<dict>
+										<!-- Half of all events will be non-recurring -->
+										<key>none</key>
+										<integer>50</integer>
+										
+										<!-- Daily and weekly are pretty common -->
+										<key>daily</key>
+										<integer>10</integer>
+										<key>weekly</key>
+										<integer>20</integer>
+										
+										<!-- Monthly, yearly, daily & weekly limit not so common -->
+										<key>monthly</key>
+										<integer>2</integer>
+										<key>yearly</key>
+										<integer>1</integer>
+										<key>dailylimit</key>
+										<integer>2</integer>
+										<key>weeklylimit</key>
+										<integer>5</integer>
+										
+										<!-- Work days pretty common -->
+										<key>workdays</key>
+										<integer>10</integer>
+									</dict>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- This profile invites some number of new attendees to new events. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.RealisticInviter</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<true/>
+
+							<!-- Define the frequency at which new invitations will be sent out. -->
+							<key>sendInvitationDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.FixedDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- interval (in seconds). -->
+									<key>value</key>
+									<integer>120</integer>
+								</dict>
+							</dict>
+
+							<!-- Define the distribution of who will be invited to an event.
+							
+								When inviteeClumping is turned on, each invitee is drawn from a sample of
+								users "close to" the organizer, based on account index. If the clumping
+								is too "tight" for the requested number of attendees, then invites for
+								those larger numbers will simply fail (the sim will report that situation).
+
+								When inviteeClumping is off, invitees will be sampled across the entire
+								range of account indexes. In this case the distribution ought to be a
+								UniformIntegerDistribution with min=0 and max set to the number of accounts.
+							-->
+							<key>inviteeDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.UniformIntegerDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- The minimum value (inclusive) of the uniform distribution. -->
+									<key>min</key>
+									<integer>0</integer>
+									<!-- The maximum value (exclusive) of the uniform distribution. -->
+									<key>max</key>
+									<integer>99</integer>
+								</dict>
+							</dict>
+
+							<key>inviteeClumping</key>
+							<true/>
+
+							<!-- Define the distribution of how many attendees will be invited to an event.
+							
+								LogNormal is the best fit to observed data.
+
+
+								For LogNormal "mode" is the peak, "mean" is the mean value. For invites,
+								mode should typically be 1, and mean whatever matches the user behavior.
+								Our typical mean is 6.
+							     -->
+							<key>inviteeCountDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.FixedDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- Number of attendees. -->
+									<key>value</key>
+									<integer>5</integer>
+								</dict>
+							</dict>
+
+							<!-- Define how start times (DTSTART) for the randomly generated events 
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>eventStartDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps 
+									in the near future, limited to certain days of the week and certain hours 
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled 
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this 
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+
+							<!-- Define how recurrences are created. -->
+							<key>recurrenceDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized.  We have a fixed set of
+								     RRULEs defined for this distribution and pick one based on its
+								     weight. -->
+								<key>type</key>
+								<string>contrib.performance.stats.RecurrenceDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- False to disable RRULEs -->
+									<key>allowRecurrence</key>
+									<true/>
+
+									<!-- These are the weights for the specific set of RRULEs. -->
+									<key>weights</key>
+									<dict>
+										<!-- Daily and weekly are pretty common -->
+										<key>daily</key>
+										<integer>100</integer>
+									</dict>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- This profile accepts invitations to events, handles cancels, and
+					     handles replies received. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Accepter</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<false/>
+
+							<!-- Define how long to wait after seeing a new invitation before
+								accepting it.
+
+								For LogNormal "mode" is the peak, "median" is the 50% cumulative value
+								(i.e., half of the users have accepted by that time).
+							-->
+							<key>acceptDelayDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.LogNormalDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- mode - peak-->
+									<key>mode</key>
+									<integer>300</integer>
+									<!-- median - 50% done-->
+									<key>median</key>
+									<integer>1800</integer>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- A task-creating profile, which will periodically create 
+						new tasks at a random time on a random calendar. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Tasker</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<false/>
+
+							<!-- Define the interval (in seconds) at which this profile will use 
+								its client to create a new task. -->
+							<key>interval</key>
+							<integer>300</integer>
+
+							<!-- Define how due times (DUE) for the randomly generated tasks 
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>taskDueDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps 
+									in the near future, limited to certain days of the week and certain hours 
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled 
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this 
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+				</array>
+
+				<!-- Determine the frequency at which this client configuration will 
+					appear among the clients created by the load tester. -->
+				<key>weight</key>
+				<integer>1</integer>
+			</dict>
+		</array>
+
+		<!-- Determine the interval between client creation. -->
+		<key>arrivalInterval</key>
+		<integer>4</integer>
+	</dict>
+</plist>

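The inviteeClumping behavior described in the comments above can be pictured
as follows. This is a hedged sketch with invented function and parameter
names, not the sim's actual sampling code:

    import random

    def pick_invitees(organizer, count, num_accounts, clumping=True, max_tries=1000):
        # Return `count` distinct attendee indexes, never the organizer.
        invitees = set()
        tries = 0
        while len(invitees) < count:
            tries += 1
            if tries > max_tries:
                # Mirrors the failure case the comments describe: clumping
                # too "tight" for the requested attendee count.
                raise RuntimeError("could not find %d distinct invitees" % count)
            if clumping:
                # Clumped: sample an offset from the organizer's account
                # index, i.e. users "close to" the organizer.
                candidate = organizer + random.randint(0, 99)
            else:
                # Unclumped: uniform across the entire account range.
                candidate = random.randint(0, num_accounts - 1)
            if candidate != organizer and 0 <= candidate < num_accounts:
                invitees.add(candidate)
        return sorted(invitees)

    print(pick_invitees(organizer=10, count=5, num_accounts=100))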
Deleted: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only.plist
===================================================================
--- CalendarServer/trunk/contrib/performance/loadtest/standard-configs/invites-only.plist	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only.plist	2013-11-01 22:25:30 UTC (rev 11871)
@@ -1,430 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-
-<!--
-    Copyright (c) 2011-2012 Apple Inc. All rights reserved.
-
-    Licensed under the Apache License, Version 2.0 (the "License");
-    you may not use this file except in compliance with the License.
-    You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
-  -->
-
-<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
-<plist version="1.0">
-	<dict>
-		<!-- Define the kinds of software and user behavior the load simulation
-			will simulate. -->
-		<key>clients</key>
-
-		<!-- Have as many different kinds of software and user behavior configurations
-			as you want. Each is a dict. -->
-		<array>
-
-			<dict>
-
-				<!-- Here is a Lion iCal simulator. -->
-				<key>software</key>
-				<string>contrib.performance.loadtest.ical.OS_X_10_7</string>
-
-				<!-- Arguments to use to initialize the client instance. -->
-				<key>params</key>
-				<dict>
-					<!-- Name that appears in logs. -->
-					<key>title</key>
-					<string>10.7</string>
-
-					<!-- Client can poll the calendar home at some interval. This is 
-						in seconds. -->
-					<key>calendarHomePollInterval</key>
-					<integer>300000</integer>
-
-					<!-- If the server advertises xmpp push, the simulated client can wait for notifications 
-						about calendar home changes instead of polling for them periodically. If 
-						this option is true, then look for the server advertisement for xmpp push 
-						and use it if possible. Still fall back to polling if there is no xmpp push 
-						advertised. -->
-					<key>supportPush</key>
-					<false />
-					<key>supportAmpPush</key>
-					<false />
-				</dict>
-
-				<!-- The profiles define certain types of user behavior on top of the 
-					client software being simulated. -->
-				<key>profiles</key>
-				<array>
-
-					<!-- First an event-creating profile, which will periodically create 
-						new events at a random time on a random calendar. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Eventer</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<false/>
-
-							<!-- Define the interval (in seconds) at which this profile will use 
-								its client to create a new event. -->
-							<key>interval</key>
-							<integer>20</integer>
-
-							<!-- Define how start times (DTSTART) for the randomly generated events 
-								will be selected. This is an example of a "Distribution" parameter. The values
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>eventStartDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-
-							<!-- Define how recurrences are created. -->
-							<key>recurrenceDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized.  We have a fixed set of
-								     RRULEs defined for this distribution and pick one based on its
-								     weight. -->
-								<key>type</key>
-								<string>contrib.performance.stats.RecurrenceDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- False to disable RRULEs -->
-									<key>allowRecurrence</key>
-									<false/>
-
-									<!-- These are the weights for the specific set of RRULEs. -->
-									<key>weights</key>
-									<dict>
-										<!-- Half of all events will be non-recurring -->
-										<key>none</key>
-										<integer>50</integer>
-										
-										<!-- Daily and weekly are pretty common -->
-										<key>daily</key>
-										<integer>10</integer>
-										<key>weekly</key>
-										<integer>20</integer>
-										
-										<!-- Monthly, yearly, daily & weekly limit not so common -->
-										<key>monthly</key>
-										<integer>2</integer>
-										<key>yearly</key>
-										<integer>1</integer>
-										<key>dailylimit</key>
-										<integer>2</integer>
-										<key>weeklylimit</key>
-										<integer>5</integer>
-										
-										<!-- Work days pretty common -->
-										<key>workdays</key>
-										<integer>10</integer>
-									</dict>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- This profile invites some number of new attendees to new events. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.RealisticInviter</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<true/>
-
-							<!-- Define the frequency at which new invitations will be sent out. -->
-							<key>sendInvitationDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.FixedDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- interval (in seconds). -->
-									<key>value</key>
-									<integer>120</integer>
-								</dict>
-							</dict>
-
-							<!-- Define the distribution of who will be invited to an event.
-							
-								When inviteeClumping is turned on, each invitee is drawn from a sample of
-								users "close to" the organizer, based on account index. If the clumping
-								is too "tight" for the requested number of attendees, then invites for
-								those larger numbers will simply fail (the sim will report that situation).
-
-								When inviteeClumping is off, invitees will be sampled across the entire
-								range of account indexes. In this case the distribution ought to be a
-								UniformIntegerDistribution with min=0 and max set to the number of accounts.
-							-->
-							<key>inviteeDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.UniformIntegerDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- The minimum value (inclusive) of the uniform distribution. -->
-									<key>min</key>
-									<integer>0</integer>
-									<!-- The maximum value (exclusive) of the uniform distribution. -->
-									<key>max</key>
-									<integer>99</integer>
-								</dict>
-							</dict>
-
-							<key>inviteeClumping</key>
-							<true/>
-
-							<!-- Define the distribution of how many attendees will be invited to an event.
-							
-								LogNormal is the best fit to observed data.
-
-
-								For LogNormal "mode" is the peak, "mean" is the mean value. For invites,
-								mode should typically be 1, and mean whatever matches the user behavior.
-								Our typical mean is 6.
-							     -->
-							<key>inviteeCountDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.FixedDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- Number of attendees. -->
-									<key>value</key>
-									<integer>5</integer>
-								</dict>
-							</dict>
-
-							<!-- Define how start times (DTSTART) for the randomly generated events 
-								will be selected. This is an example of a "Distribution" parameter. The values
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>eventStartDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-
-							<!-- Define how recurrences are created. -->
-							<key>recurrenceDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized.  We have a fixed set of
-								     RRULEs defined for this distribution and pick one based on its
-								     weight. -->
-								<key>type</key>
-								<string>contrib.performance.stats.RecurrenceDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- False to disable RRULEs -->
-									<key>allowRecurrence</key>
-									<false/>
-
-									<!-- These are the weights for the specific set of RRULEs. -->
-									<key>weights</key>
-									<dict>
-										<!-- Half of all events will be non-recurring -->
-										<key>none</key>
-										<integer>50</integer>
-										
-										<!-- Daily and weekly are pretty common -->
-										<key>daily</key>
-										<integer>10</integer>
-										<key>weekly</key>
-										<integer>20</integer>
-										
-										<!-- Monthly, yearly, daily & weekly limit not so common -->
-										<key>monthly</key>
-										<integer>2</integer>
-										<key>yearly</key>
-										<integer>1</integer>
-										<key>dailylimit</key>
-										<integer>2</integer>
-										<key>weeklylimit</key>
-										<integer>5</integer>
-										
-										<!-- Work days pretty common -->
-										<key>workdays</key>
-										<integer>10</integer>
-									</dict>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- This profile accepts invitations to events, handles cancels, and
-					     handles replies received. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Accepter</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<false/>
-
-							<!-- Define how long to wait after seeing a new invitation before
-								accepting it.
-
-								For LogNormal "mode" is the peak, "median" is the 50% cumulative value
-								(i.e., half of the users have accepted by that time).
-							-->
-							<key>acceptDelayDistribution</key>
-							<dict>
-								<key>type</key>
-								<string>contrib.performance.stats.LogNormalDistribution</string>
-								<key>params</key>
-								<dict>
-									<!-- mode - peak-->
-									<key>mode</key>
-									<integer>300</integer>
-									<!-- median - 50% done-->
-									<key>median</key>
-									<integer>1800</integer>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-					<!-- A task-creating profile, which will periodically create 
-						new tasks at a random time on a random calendar. -->
-					<dict>
-						<key>class</key>
-						<string>contrib.performance.loadtest.profiles.Tasker</string>
-
-						<key>params</key>
-						<dict>
-							<key>enabled</key>
-							<false/>
-
-							<!-- Define the interval (in seconds) at which this profile will use 
-								its client to create a new task. -->
-							<key>interval</key>
-							<integer>300</integer>
-
-							<!-- Define how due times (DUE) for the randomly generated tasks 
-								will be selected. This is an example of a "Distribution" parameter. The values
-								for most "Distribution" parameters are interchangeable and extensible. -->
-							<key>taskDueDistribution</key>
-							<dict>
-
-								<!-- This distribution is pretty specialized. It produces timestamps 
-									in the near future, limited to certain days of the week and certain hours 
-									of the day. -->
-								<key>type</key>
-								<string>contrib.performance.stats.WorkDistribution</string>
-
-								<key>params</key>
-								<dict>
-									<!-- These are the days of the week the distribution will use. -->
-									<key>daysOfWeek</key>
-									<array>
-										<string>mon</string>
-										<string>tue</string>
-										<string>wed</string>
-										<string>thu</string>
-										<string>fri</string>
-									</array>
-
-									<!-- The earliest hour of a day at which an event might be scheduled. -->
-									<key>beginHour</key>
-									<integer>8</integer>
-
-									<!-- And the latest hour of a day (at which an event will be scheduled 
-										to begin!). -->
-									<key>endHour</key>
-									<integer>16</integer>
-
-									<!-- The timezone in which the event is scheduled. (XXX Does this 
-										really work right?) -->
-									<key>tzname</key>
-									<string>America/Los_Angeles</string>
-								</dict>
-							</dict>
-						</dict>
-					</dict>
-
-				</array>
-
-				<!-- Determine the frequency at which this client configuration will 
-					appear among the clients created by the load tester. -->
-				<key>weight</key>
-				<integer>1</integer>
-			</dict>
-		</array>
-	</dict>
-</plist>

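Two knobs tie these client configs into the overall simulation: each config's
weight sets how often it is chosen for a newly created client, and
arrivalInterval (where present) sets how often a new client is created. A
rough sketch of that interaction, with invented names rather than sim.py's
real loop:

    import random

    def arrivals(configs, arrival_interval, total):
        # One new simulated client per arrival_interval seconds; its config
        # is chosen proportionally to the "weight" values.
        weights = [c["weight"] for c in configs]
        for n in range(total):
            config = random.choices(configs, weights=weights)[0]
            yield n * arrival_interval, config["title"]

    configs = [{"title": "10.7", "weight": 1}]  # single config, as above
    for when, title in arrivals(configs, arrival_interval=4, total=3):
        print("t=%ds: start a %s client" % (when, title))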
Copied: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only.plist (from rev 11870, CalendarServer/trunk/contrib/performance/loadtest/standard-configs/invites-only.plist)
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only.plist	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/standard-configs/invites-only.plist	2013-11-01 22:25:30 UTC (rev 11871)
@@ -0,0 +1,430 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+    Copyright (c) 2011-2012 Apple Inc. All rights reserved.
+
+    Licensed under the Apache License, Version 2.0 (the "License");
+    you may not use this file except in compliance with the License.
+    You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+  -->
+
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+	<dict>
+		<!-- Define the kinds of software and user behavior the load simulation
+			will simulate. -->
+		<key>clients</key>
+
+		<!-- Have as many different kinds of software and user behavior configurations
+			as you want. Each is a dict. -->
+		<array>
+
+			<dict>
+
+				<!-- Here is a Lion iCal simulator. -->
+				<key>software</key>
+				<string>contrib.performance.loadtest.ical.OS_X_10_7</string>
+
+				<!-- Arguments to use to initialize the client instance. -->
+				<key>params</key>
+				<dict>
+					<!-- Name that appears in logs. -->
+					<key>title</key>
+					<string>10.7</string>
+
+					<!-- Client can poll the calendar home at some interval. This is 
+						in seconds. -->
+					<key>calendarHomePollInterval</key>
+					<integer>300000</integer>
+
+					<!-- If the server advertises xmpp push, the simulated client can wait for notifications 
+						about calendar home changes instead of polling for them periodically. If 
+						this option is true, then look for the server advertisement for xmpp push 
+						and use it if possible. Still fall back to polling if there is no xmpp push 
+						advertised. -->
+					<key>supportPush</key>
+					<false />
+					<key>supportAmpPush</key>
+					<false />
+				</dict>
+
+				<!-- The profiles define certain types of user behavior on top of the 
+					client software being simulated. -->
+				<key>profiles</key>
+				<array>
+
+					<!-- First an event-creating profile, which will periodically create 
+						new events at a random time on a random calendar. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Eventer</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<false/>
+
+							<!-- Define the interval (in seconds) at which this profile will use 
+								its client to create a new event. -->
+							<key>interval</key>
+							<integer>20</integer>
+
+							<!-- Define how start times (DTSTART) for the randomly generated events 
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>eventStartDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps 
+									in the near future, limited to certain days of the week and certain hours 
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled 
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this 
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+
+							<!-- Define how recurrences are created. -->
+							<key>recurrenceDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized.  We have a fixed set of
+								     RRULEs defined for this distribution and pick each based on a
+								     weight. -->
+								<key>type</key>
+								<string>contrib.performance.stats.RecurrenceDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- False to disable RRULEs -->
+									<key>allowRecurrence</key>
+									<false/>
+
+									<!-- These are the weights for the specific set of RRULEs. -->
+									<key>weights</key>
+									<dict>
+										<!-- Half of all events will be non-recurring -->
+										<key>none</key>
+										<integer>50</integer>
+										
+										<!-- Daily and weekly are pretty common -->
+										<key>daily</key>
+										<integer>10</integer>
+										<key>weekly</key>
+										<integer>20</integer>
+										
+										<!-- Monthly, yearly, daily & weekly limit not so common -->
+										<key>monthly</key>
+										<integer>2</integer>
+										<key>yearly</key>
+										<integer>1</integer>
+										<key>dailylimit</key>
+										<integer>2</integer>
+										<key>weeklylimit</key>
+										<integer>5</integer>
+										
+										<!-- Work days pretty common -->
+										<key>workdays</key>
+										<integer>10</integer>
+									</dict>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- This profile invites some number of new attendees to new events. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.RealisticInviter</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<true/>
+
+							<!-- Define the frequency at which new invitations will be sent out. -->
+							<key>sendInvitationDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.FixedDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- interval (in seconds). -->
+									<key>value</key>
+									<integer>120</integer>
+								</dict>
+							</dict>
+
+							<!-- Define the distribution of who will be invited to an event.
+							
+								When inviteeClumping is turned on, each invitee is chosen from a sample of
+								users "close to" the organizer based on account index. If the clumping
+								is too "tight" for the requested number of attendees, then invites for
+								those larger numbers will simply fail (the sim will report that situation).
+								
+								When inviteeClumping is off, invitees will be sampled across the entire
+								range of account indexes. In this case the distribution ought to be a
+								UniformIntegerDistribution with min=0 and max set to the number of accounts.
+							-->
+							<key>inviteeDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.UniformIntegerDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- The minimum value (inclusive) of the uniform distribution. -->
+									<key>min</key>
+									<integer>0</integer>
+									<!-- The maximum value (exclusive) of the uniform distribution. -->
+									<key>max</key>
+									<integer>99</integer>
+								</dict>
+							</dict>
+
+							<key>inviteeClumping</key>
+							<true/>
+
+							<!-- Define the distribution of how many attendees will be invited to an event.
+							
+								LogNormal is the best fit to observed data.
+
+
+								For LogNormal, "mode" is the peak and "mean" is the mean value. For invites,
+								mode should typically be 1, and mean set to whatever matches the observed
+								user behavior. Our typical mean is 6.
+							     -->
+							<key>inviteeCountDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.FixedDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- Number of attendees. -->
+									<key>value</key>
+									<integer>5</integer>
+								</dict>
+							</dict>
+
+							<!-- Define how start times (DTSTART) for the randomly generated events 
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>eventStartDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps 
+									in the near future, limited to certain days of the week and certain hours 
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled 
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this 
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+
+							<!-- Define how recurrences are created. -->
+							<key>recurrenceDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized.  We have a fixed set of
+								     RRULEs defined for this distribution and pick each based on a
+								     weight. -->
+								<key>type</key>
+								<string>contrib.performance.stats.RecurrenceDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- False to disable RRULEs -->
+									<key>allowRecurrence</key>
+									<false/>
+
+									<!-- These are the weights for the specific set of RRULEs. -->
+									<key>weights</key>
+									<dict>
+										<!-- Half of all events will be non-recurring -->
+										<key>none</key>
+										<integer>50</integer>
+										
+										<!-- Daily and weekly are pretty common -->
+										<key>daily</key>
+										<integer>10</integer>
+										<key>weekly</key>
+										<integer>20</integer>
+										
+										<!-- Monthly, yearly, daily & weekly limit not so common -->
+										<key>monthly</key>
+										<integer>2</integer>
+										<key>yearly</key>
+										<integer>1</integer>
+										<key>dailylimit</key>
+										<integer>2</integer>
+										<key>weeklylimit</key>
+										<integer>5</integer>
+										
+										<!-- Work days pretty common -->
+										<key>workdays</key>
+										<integer>10</integer>
+									</dict>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- This profile accepts invitations to events, handles cancels, and
+					     handles replies received. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Accepter</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<false/>
+
+							<!-- Define how long to wait after seeing a new invitation before
+								accepting it.
+
+								For LogNormal, "mode" is the peak and "median" is the 50% cumulative value
+								(i.e., half of the users have accepted by that time).
+							-->
+							<key>acceptDelayDistribution</key>
+							<dict>
+								<key>type</key>
+								<string>contrib.performance.stats.LogNormalDistribution</string>
+								<key>params</key>
+								<dict>
+									<!-- mode - peak-->
+									<key>mode</key>
+									<integer>300</integer>
+									<!-- median - 50% done-->
+									<key>median</key>
+									<integer>1800</integer>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+					<!-- A task-creating profile, which will periodically create 
+						new tasks at a random time on a random calendar. -->
+					<dict>
+						<key>class</key>
+						<string>contrib.performance.loadtest.profiles.Tasker</string>
+
+						<key>params</key>
+						<dict>
+							<key>enabled</key>
+							<false/>
+
+							<!-- Define the interval (in seconds) at which this profile will use 
+								its client to create a new task. -->
+							<key>interval</key>
+							<integer>300</integer>
+
+							<!-- Define how due times (DUE) for the randomly generated tasks 
+								will be selected. This is an example of a "Distribution" parameter. The values
+								for most "Distribution" parameters are interchangeable and extensible. -->
+							<key>taskDueDistribution</key>
+							<dict>
+
+								<!-- This distribution is pretty specialized. It produces timestamps 
+									in the near future, limited to certain days of the week and certain hours 
+									of the day. -->
+								<key>type</key>
+								<string>contrib.performance.stats.WorkDistribution</string>
+
+								<key>params</key>
+								<dict>
+									<!-- These are the days of the week the distribution will use. -->
+									<key>daysOfWeek</key>
+									<array>
+										<string>mon</string>
+										<string>tue</string>
+										<string>wed</string>
+										<string>thu</string>
+										<string>fri</string>
+									</array>
+
+									<!-- The earliest hour of a day at which an event might be scheduled. -->
+									<key>beginHour</key>
+									<integer>8</integer>
+
+									<!-- And the latest hour of a day (at which an event will be scheduled 
+										to begin!). -->
+									<key>endHour</key>
+									<integer>16</integer>
+
+									<!-- The timezone in which the event is scheduled. (XXX Does this 
+										really work right?) -->
+									<key>tzname</key>
+									<string>America/Los_Angeles</string>
+								</dict>
+							</dict>
+						</dict>
+					</dict>
+
+				</array>
+
+				<!-- Determine the frequency at which this client configuration will 
+					appear in the clients which are created by the load tester. -->
+				<key>weight</key>
+				<integer>1</integer>
+			</dict>
+		</array>
+	</dict>
+</plist>

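The "weights" dicts above feed a weighted choice among a fixed set of RRULEs. A minimal sketch of that kind of cumulative-weight draw (illustrative only; the actual contrib.performance.stats.RecurrenceDistribution implementation may differ):

    import random

    # Weights copied from the plist above; "none" means no RRULE at all.
    RRULE_WEIGHTS = {
        "none": 50,
        "daily": 10,
        "weekly": 20,
        "monthly": 2,
        "yearly": 1,
        "dailylimit": 2,
        "weeklylimit": 5,
        "workdays": 10,
    }

    def pickRecurrence(weights=RRULE_WEIGHTS):
        """
        Pick one key with probability proportional to its weight.
        """
        total = sum(weights.values())
        point = random.uniform(0, total)
        cumulative = 0
        for key, weight in sorted(weights.items()):
            cumulative += weight
            if point <= cumulative:
                return key
        return "none"  # guards against float rounding at the top edge

With the weights above, about half of the generated events get no RRULE, and a weekly rule is twice as likely as a daily one.
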
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/test_sim.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/test_sim.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/loadtest/test_sim.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -253,7 +253,7 @@
         exc = self.assertRaises(
             SystemExit, StubSimulator.main, ['--config', config.path])
         self.assertEquals(
-            exc.args, (StubSimulator(None, None, None, None, None, None).run(),))
+            exc.args, (StubSimulator(None, None, None, None, None, None, None).run(),))
 
 
     def test_createSimulator(self):
@@ -264,7 +264,7 @@
         """
         server = 'http://127.0.0.7:1243/'
         reactor = object()
-        sim = LoadSimulator(server, None, None, None, None, None, reactor=reactor)
+        sim = LoadSimulator(server, None, None, None, None, None, None, reactor=reactor)
         calsim = sim.createSimulator()
         self.assertIsInstance(calsim, CalendarClientSimulator)
         self.assertIsInstance(calsim.reactor, LagTrackingReactor)
@@ -447,7 +447,7 @@
 
         reactor = object()
         sim = LoadSimulator(
-            None, None, None, None, Arrival(FakeArrival, {'x': 3, 'y': 2}), None, reactor=reactor)
+            None, None, None, None, None, Arrival(FakeArrival, {'x': 3, 'y': 2}), None, reactor=reactor)
         arrival = sim.createArrivalPolicy()
         self.assertIsInstance(arrival, FakeArrival)
         self.assertIdentical(arrival.reactor, sim.reactor)
@@ -478,7 +478,9 @@
                             "weight": 3,
                             }]}))
 
-        sim = LoadSimulator.fromCommandLine(['--config', config.path])
+        sim = LoadSimulator.fromCommandLine(
+            ['--config', config.path, '--clients', config.path]
+        )
         expectedParameters = PopulationParameters()
         expectedParameters.addClient(
             3, ClientType(OS_X_10_6, {"foo": "bar"}, [ProfileType(Eventer, {
@@ -495,7 +497,9 @@
         """
         config = FilePath(self.mktemp())
         config.setContent(writePlistToString({"clients": []}))
-        sim = LoadSimulator.fromCommandLine(['--config', config.path])
+        sim = LoadSimulator.fromCommandLine(
+            ['--config', config.path, '--clients', config.path]
+        )
         expectedParameters = PopulationParameters()
         expectedParameters.addClient(
             1, ClientType(OS_X_10_6, {}, [Eventer, Inviter, Accepter]))
@@ -528,6 +532,7 @@
             "/principals/users/%s/",
             None,
             None,
+            None,
             Arrival(lambda reactor: NullArrival(), {}),
             None, observers, reactor=Reactor())
         io = StringIO()

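The extra None threaded through each constructor call here tracks a new LoadSimulator parameter, and --clients now names a separate plist holding the client configurations. A hedged sketch of the resulting two-file invocation, modeled on the test fixtures above (the "server" key and plist shapes are assumptions, not verified against sim.py):

    from plistlib import writePlistToString
    from twisted.python.filepath import FilePath
    from contrib.performance.loadtest.sim import LoadSimulator

    # Client behavior now lives in its own plist; empty means defaults.
    clients = FilePath("clients.plist")
    clients.setContent(writePlistToString({"clients": []}))

    config = FilePath("config.plist")
    config.setContent(writePlistToString({"server": "https://localhost:8443/"}))

    sim = LoadSimulator.fromCommandLine(
        ["--config", config.path, "--clients", clients.path]
    )
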
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/sqlusage/requests/httpTests.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/sqlusage/requests/httpTests.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/sqlusage/requests/httpTests.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -89,12 +89,21 @@
             pos = line.find(": ")
             return float(line[pos + 2:])
 
+        # Need to skip over stats that are unlabeled
         data = open(self.logFilePath).read()
         lines = data.splitlines()
-        count = extractInt(lines[4])
-        rows = extractInt(lines[5])
-        timing = extractFloat(lines[6])
-        self.result = HTTPTestBase.SQLResults(count, rows, timing)
+        offset = 0
+        while offset < len(lines):
+            if lines[offset] == "*** SQL Stats ***":
+                if lines[offset + 2].split()[1] != "unlabeled":
+                    count = extractInt(lines[offset + 4])
+                    rows = extractInt(lines[offset + 5])
+                    timing = extractFloat(lines[offset + 6])
+                    self.result = HTTPTestBase.SQLResults(count, rows, timing)
+                    break
+            offset += 1
+        else:
+            self.result = HTTPTestBase.SQLResults(-1, -1, 0.0)
 
         with open("%s-%d-%s" % (self.logFilePath, event_count, self.label), "w") as f:
             f.write(data)

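The replacement parser leans on Python's while/else: the else suite runs only when the loop condition becomes false without hitting break, which is what makes the (-1, -1, 0.0) fallback reachable. A self-contained illustration on fabricated log lines (the field layout simply mirrors the offsets the parser expects; it is not real server output):

    lines = [
        "*** SQL Stats ***",
        "",
        "Label: unlabeled",
        "...",
        "*** SQL Stats ***",
        "",
        "Label: PROPFIND Calendar Home",
        "...",
        "Statements: 12",
        "Rows: 40",
        "Total Time (ms): 3.5",
    ]

    offset = 0
    while offset < len(lines):
        if lines[offset] == "*** SQL Stats ***":
            # Skip blocks whose label (two lines down) is "unlabeled".
            if lines[offset + 2].split()[1] != "unlabeled":
                print("labeled stats block starts at index %d" % (offset,))
                break
        offset += 1
    else:
        print("no labeled stats block found")
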
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/sqlusage/sqlusage.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/sqlusage/sqlusage.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/performance/sqlusage/sqlusage.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -127,11 +127,17 @@
         ]
         self.requestLabels = [request.label for request in requests]
 
-        # Warm-up server by doing calendar home and calendar propfinds
-        props = (davxml.resourcetype,)
-        for session in sessions:
-            session.getPropertiesOnHierarchy(URL(path=session.homeHref), props)
-            session.getPropertiesOnHierarchy(URL(path=session.calendarHref), props)
+        def _warmUp():
+            # Warm-up server by doing calendar home and child collection propfinds.
+            # Do this twice because the very first time might provision DB objects and
+            # blow any DB cache - the second time will warm the DB cache.
+            props = (davxml.resourcetype,)
+            for _ignore in range(2):
+                for session in sessions:
+                    session.getPropertiesOnHierarchy(URL(path=session.homeHref), props)
+                    session.getPropertiesOnHierarchy(URL(path=session.calendarHref), props)
+                    session.getPropertiesOnHierarchy(URL(path=session.inboxHref), props)
+                    session.getPropertiesOnHierarchy(URL(path=session.notificationHref), props)
 
         # Now loop over sets of events
         for count in event_counts:
@@ -140,6 +146,7 @@
             result = {}
             for request in requests:
                 print("  Test = %s" % (request.label,))
+                _warmUp()
                 result[request.label] = request.execute(count)
             self.results[count] = result
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/tools/fix_calendar
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/tools/fix_calendar	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/tools/fix_calendar	2013-11-01 22:25:30 UTC (rev 11871)
@@ -28,9 +28,9 @@
 def usage():
     print """Usage: fix_calendar CALENDARS
 Options:
-    
+
 CALENDARS - a list of directories that are to be treated as calendars
-    
+
 Description:
 This utility will add xattrs to the specified directories and their contents
 to make them appear to be calendars and calendar resources when used with
@@ -40,8 +40,10 @@
 root without properly preserving the xattrs.
 """
 
+
+
 def fixCalendar(path):
-    
+
     # First fix the resourcetype & getctag on the calendar
     x = xattr.xattr(path)
     x["WebDAV:{DAV:}resourcetype"] = """<?xml version='1.0' encoding='UTF-8'?>
@@ -60,7 +62,7 @@
         if not child.endswith(".ics"):
             continue
         fullpath = os.path.join(path, child)
-        
+
         # getcontenttype
         x = xattr.xattr(fullpath)
         x["WebDAV:{DAV:}getcontenttype"] = """<?xml version='1.0' encoding='UTF-8'?>
@@ -94,7 +96,7 @@
             if not os.path.exists(arg):
                 print "Path does not exist: '%s'. Ignoring." % (arg,)
                 continue
-            
+
             if os.path.basename(arg) in ("inbox", "outbox", "dropbox",):
                 print "Cannot be used on inbox, outbox or dropbox."
                 continue
@@ -103,4 +105,3 @@
 
     except Exception, e:
         sys.exit(str(e))
-    
\ No newline at end of file

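For spot-checking what the script wrote, a small sketch using the same third-party xattr module it imports (the mapping-style keys() call is an assumption beyond the item-assignment interface the script itself uses):

    import xattr

    def looksLikeCalendar(path):
        """
        True if a resourcetype xattr has been stamped onto path.
        """
        x = xattr.xattr(path)
        return "WebDAV:{DAV:}resourcetype" in x.keys()
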
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/tools/protocolanalysis.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/tools/protocolanalysis.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/contrib/tools/protocolanalysis.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -293,6 +293,12 @@
         self.userCounts = collections.defaultdict(int)
         self.userResponseTimes = collections.defaultdict(float)
 
+        self.newEvents = 0
+        self.newInvites = 0
+        self.updateEvents = 0
+        self.updateInvites = 0
+        self.attendeeInvites = 0
+
         self.otherUserCalendarRequests = {}
 
         self.currentLine = None
@@ -416,6 +422,19 @@
                 self.hourlyByStatus[" TOTAL"][timeBucketIndex] += 1
                 self.hourlyByStatus[self.currentLine.status][timeBucketIndex] += 1
 
+                if self.currentLine.status == 201:
+                    if adjustedMethod == METHOD_PUT_ICS:
+                        self.newEvents += 1
+                    elif adjustedMethod == METHOD_PUT_ORGANIZER:
+                        self.newInvites += 1
+                elif isOK:
+                    if adjustedMethod == METHOD_PUT_ICS:
+                        self.updateEvents += 1
+                    elif adjustedMethod == METHOD_PUT_ORGANIZER:
+                        self.updateInvites += 1
+                    elif adjustedMethod == METHOD_PUT_ATTENDEE:
+                        self.attendeeInvites += 1
+
                 # Cache analysis
                 if adjustedMethod == METHOD_PROPFIND_CALENDAR and self.currentLine.status == 207:
                     responses = int(self.currentLine.extended.get("responses", 0))
@@ -1029,7 +1048,10 @@
             #print("User Response times")
             #self.printUserResponseTimes(doTabs)
 
+            print("Sim values")
+            self.printSimStats(doTabs)
 
+
     def printInfo(self, doTabs):
 
         table = tables.Table()
@@ -1083,6 +1105,7 @@
         totalRequests = 0
         totalDepth = 0
         totalTime = 0.0
+        self.timeCounts = 0
         for ctr in xrange(self.timeBucketCount):
             hour = self.getHourFromIndex(ctr)
             if hour is None:
@@ -1101,12 +1124,13 @@
             totalRequests += countRequests
             totalDepth += countDepth
             totalTime += countTime
+            self.timeCounts += 1
 
         table.addFooter(
             (
                 "Total:",
                 totalRequests,
-                (1.0 * totalRequests) / self.timeBucketCount / self.resolutionMinutes / 60,
+                safePercent(totalRequests, self.timeCounts * self.resolutionMinutes * 60, 1.0),
                 safePercent(totalTime, totalRequests, 1.0),
                 safePercent(float(totalDepth), totalRequests, 1),
             ),
@@ -1545,7 +1569,38 @@
         print("")
 
 
+    def printSimStats(self, doTabs):
+        users = len(self.userCounts.keys())
+        hours = self.timeCounts * self.resolutionMinutes / 60.0
+        table = tables.Table()
+        table.setDefaultColumnFormats((
+                tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
+                tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
+                tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
+                tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
+                ))
+        table.addHeader(("Item", "Value", "Items, per User, per Day", "Interval (sec), per item, per user"))
+        table.addRow(("Unique Users", users, "", ""))
 
+        def _addRow(title, item):
+            table.addRow((title, item, "%.1f" % (safePercent(24 * item, hours * users, 1.0),), "%.1f" % (safePercent(hours * 60 * 60 * users, item, 1.0),),))
+
+        _addRow("New Events", self.newEvents)
+        _addRow("New Invites", self.newInvites)
+        _addRow("Updated Events", self.updateEvents)
+        _addRow("Updated Invites", self.updateInvites)
+        _addRow("Attendee Invites", self.attendeeInvites)
+        table.addRow((
+            "Recipients",
+            "%.1f" % (safePercent(sum(self.averagedHourlyByRecipientCount["iTIP Average"]), self.timeCounts, 1.0),),
+            "",
+            "",
+        ))
+        table.printTabDelimitedData() if doTabs else table.printTable()
+        print("")
+
+
+
 class TablePrinter(object):
 
     @classmethod

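The two derived columns in printSimStats are simple rate algebra: items per user per day scales the raw count by 24 / (hours * users), and the per-item interval is the inverse expressed in seconds. A worked check with invented round numbers:

    # Invented figures: 100 users over a 24-hour log with 600 new events.
    users = 100
    hours = 24.0
    newEvents = 600

    perUserPerDay = 24 * newEvents / (hours * users)       # -> 6.0
    intervalSeconds = hours * 60 * 60 * users / newEvents  # -> 14400.0

    print("%.1f events/user/day, i.e. one every %.1f seconds per user"
          % (perUserPerDay, intervalSeconds))
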
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/support/build.sh
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/support/build.sh	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/support/build.sh	2013-11-01 22:25:30 UTC (rev 11871)
@@ -598,10 +598,11 @@
 
   export              PATH="${dstroot}/bin:${PATH}";
   export    C_INCLUDE_PATH="${dstroot}/include:${C_INCLUDE_PATH:-}";
-  export   LD_LIBRARY_PATH="${dstroot}/lib:${LD_LIBRARY_PATH:-}";
+  export   LD_LIBRARY_PATH="${dstroot}/lib:${dstroot}/lib64:${LD_LIBRARY_PATH:-}";
   export          CPPFLAGS="-I${dstroot}/include ${CPPFLAGS:-} ";
-  export           LDFLAGS="-L${dstroot}/lib ${LDFLAGS:-} ";
-  export DYLD_LIBRARY_PATH="${dstroot}/lib:${DYLD_LIBRARY_PATH:-}";
+  export           LDFLAGS="-L${dstroot}/lib -L${dstroot}/lib64 ${LDFLAGS:-} ";
+  export DYLD_LIBRARY_PATH="${dstroot}/lib:${dstroot}/lib64:${DYLD_LIBRARY_PATH:-}";
+  export   PKG_CONFIG_PATH="${dstroot}/lib/pkgconfig:${PKG_CONFIG_PATH:-}";
 
   if "${do_setup}"; then
     if "${force_setup}" || "${do_bundle}" || [ ! -d "${dstroot}" ]; then
@@ -626,10 +627,10 @@
   cat > "${dstroot}/environment.sh" << __EOF__
 export              PATH="${dstroot}/bin:\${PATH}";
 export    C_INCLUDE_PATH="${dstroot}/include:\${C_INCLUDE_PATH:-}";
-export   LD_LIBRARY_PATH="${dstroot}/lib:\${LD_LIBRARY_PATH:-}:\$ORACLE_HOME";
+export   LD_LIBRARY_PATH="${dstroot}/lib:${dstroot}/lib64:\${LD_LIBRARY_PATH:-}:\$ORACLE_HOME";
 export          CPPFLAGS="-I${dstroot}/include \${CPPFLAGS:-} ";
-export           LDFLAGS="-L${dstroot}/lib \${LDFLAGS:-} ";
-export DYLD_LIBRARY_PATH="${dstroot}/lib:\${DYLD_LIBRARY_PATH:-}:\$ORACLE_HOME";
+export           LDFLAGS="-L${dstroot}/lib -L${dstroot}/lib64 \${LDFLAGS:-} ";
+export DYLD_LIBRARY_PATH="${dstroot}/lib:${dstroot}/lib64:\${DYLD_LIBRARY_PATH:-}:\$ORACLE_HOME";
 __EOF__
 }
 
@@ -656,10 +657,10 @@
 
     # Normally we depend on the system Python, but a bundle install should be as
     # self-contained as possible.
-    local pyfn="Python-2.7.1";
-    c_dependency -m "aa27bc25725137ba155910bd8e5ddc4f" \
+    local pyfn="Python-2.7.5";
+    c_dependency -m "6334b666b7ff2038c761d7b27ba699c1" \
         "Python" "${pyfn}" \
-        "http://www.python.org/ftp/python/2.7.1/${pyfn}.tar.bz2" \
+        "http://www.python.org/ftp/python/2.7.5/${pyfn}.tar.bz2" \
         --enable-shared;
     # Be sure to use the Python we just built.
     export PYTHON="$(type -p python)";
@@ -707,6 +708,14 @@
       --disable-bdb --disable-hdb;
   fi;
 
+  if find_header ffi/ffi.h; then
+    using_system "libffi";
+  else
+    c_dependency -m "45f3b6dbc9ee7c7dfbbbc5feba571529" \
+      "libffi" "libffi-3.0.13" \
+      "ftp://sourceware.org/pub/libffi/libffi-3.0.13.tar.gz"
+  fi;
+
   #
   # Python dependencies
   #
@@ -764,7 +773,7 @@
   local v="4.1.1";
   local n="PyGreSQL";
   local p="${n}-${v}";
-  py_dependency -v "${v}" -m "71d0b8c5a382f635572eb52fee47cd08" -o \
+  py_dependency -v "${v}" -m "71d0b8c5a382f635572eb52fee47cd08" \
     "${n}" "pgdb" "${p}" \
     "${pypi}/P/${n}/${p}.tgz";
 
@@ -811,7 +820,7 @@
   local v="0.1.2";
   local n="sqlparse";
   local p="${n}-${v}";
-  py_dependency -o -v "${v}" -s "978874e5ebbd78e6d419e8182ce4fb3c30379642" \
+  py_dependency -v "${v}" -s "978874e5ebbd78e6d419e8182ce4fb3c30379642" \
     "SQLParse" "${n}" "${p}" \
     "http://python-sqlparse.googlecode.com/files/${p}.tar.gz";
 
@@ -821,7 +830,7 @@
     local v="0.6.1";
     local n="pyflakes";
     local p="${n}-${v}";
-    py_dependency -o -v "${v}" -m "00debd2280b962e915dfee552a675915" \
+    py_dependency -v "${v}" -m "00debd2280b962e915dfee552a675915" \
       "Pyflakes" "${n}" "${p}" \
       "${pypi}/p/${n}/${p}.tar.gz";
   fi;
@@ -833,28 +842,28 @@
   # Can't add "-v 2011g" to args because the version check expects numbers.
   local n="pytz";
   local p="${n}-2011n";
-  py_dependency -o -m "75ffdc113a4bcca8096ab953df746391" \
+  py_dependency -m "75ffdc113a4bcca8096ab953df746391" \
     "${n}" "${n}" "${p}" \
     "${pypi}/p/${n}/${p}.tar.gz";
 
   local v="2.5";
   local n="pycrypto";
   local p="${n}-${v}";
-  py_dependency -o -v "${v}" -m "783e45d4a1a309e03ab378b00f97b291" \
+  py_dependency -v "${v}" -m "783e45d4a1a309e03ab378b00f97b291" \
     "PyCrypto" "${n}" "${p}" \
     "http://ftp.dlitz.net/pub/dlitz/crypto/${n}/${p}.tar.gz";
 
   local v="0.1.2";
   local n="pyasn1";
   local p="${n}-${v}";
-  py_dependency -o -v "${v}" -m "a7c67f5880a16a347a4d3ce445862a47" \
+  py_dependency -v "${v}" -m "a7c67f5880a16a347a4d3ce445862a47" \
     "${n}" "${n}" "${p}" \
     "${pypi}/p/${n}/${p}.tar.gz";
 
   local v="1.1.6";
   local n="setproctitle";
   local p="${n}-${v}";
-  py_dependency -o -v "1.0" -m "1e42e43b440214b971f4b33c21eac369" \
+  py_dependency -v "1.0" -m "1e42e43b440214b971f4b33c21eac369" \
     "${n}" "${n}" "${p}" \
     "${pypi}/s/${n}/${p}.tar.gz";
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/support/version.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/support/version.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/support/version.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -26,7 +26,7 @@
     # Compute the version number.
     #
 
-    base_version = "5.1"
+    base_version = "5.2"
 
     branches = tuple(
         branch.format(version=base_version)
@@ -36,7 +36,7 @@
             "trunk",
         )
     )
-    
+
     source_root = dirname(dirname(__file__))
 
     for branch in branches:

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/testserver
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/testserver	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/testserver	2013-11-01 22:25:30 UTC (rev 11871)
@@ -28,6 +28,8 @@
 printres="";
 subdir="";
 random="--random";
+seed="";
+ssl="";
 
 usage ()
 {
@@ -40,13 +42,15 @@
   echo "        -r  Print request and response";
   echo "        -s  Set the serverinfo.xml";
   echo "        -t  Set the CalDAVTester directory";
+  echo "        -x  Random seed to use.";
   echo "        -v  Verbose.";
+  echo "        -z  Use SSL.";
 
   if [ "${1-}" == "-" ]; then return 0; fi;
   exit 64;
 }
 
-while getopts 'hvrot:s:d:' option; do
+while getopts 'hvrozt:s:d:x:' option; do
   case "$option" in 
     '?') usage; ;;
     'h') usage -; exit 0; ;;
@@ -56,6 +60,8 @@
     'r') printres="--always-print-request --always-print-response"; ;;
     'v') verbose="v"; ;;
     'o') random=""; ;;
+    'x') seed="--random-seed ${OPTARG}"; ;;
+    'z') ssl="--ssl"; ;;
   esac;
 done;
 
@@ -71,5 +77,5 @@
 
 source "${wd}/support/shell.sh";
 
-cd "${cdt}" && "${python}" testcaldav.py ${random} --print-details-onfail ${printres} -s "${serverinfo}" ${subdir} "$@";
+cd "${cdt}" && "${python}" testcaldav.py ${random} ${seed} ${ssl} --print-details-onfail ${printres} -s "${serverinfo}" ${subdir} "$@";
 

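With the new options, -x supplies a fixed random seed so a randomized run is repeatable, and -z passes --ssl through to testcaldav.py; for example, ./testserver -z -x 12345 replays the same shuffled test order against an SSL server (the seed value is arbitrary), while the existing -o still disables random ordering altogether.
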
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/adbapi2.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/adbapi2.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/adbapi2.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -18,10 +18,10 @@
 """
 Asynchronous multi-process connection pool.
 
-This is similar to L{twisted.enterprise.adbapi}, but can hold a transaction (and
-thereby a thread) open across multiple asynchronous operations, rather than
-forcing the transaction to be completed entirely in a thread and/or entirely in
-a single SQL statement.
+This is similar to L{twisted.enterprise.adbapi}, but can hold a transaction
+(and thereby a thread) open across multiple asynchronous operations, rather
+than forcing the transaction to be completed entirely in a thread and/or
+entirely in a single SQL statement.
 
 Also, this module includes an AMP protocol for multiplexing connections through
 a single choke-point host.  This is not currently in use, however, as AMP needs
@@ -84,6 +84,15 @@
 
 
 
+def _destructively(aList):
+    """
+    Destructively iterate a list, popping elements from the beginning.
+    """
+    while aList:
+        yield aList.pop(0)
+
+
+
 def _deriveParameters(cursor, args):
     """
     Some DB-API extensions need to call special extension methods on
@@ -118,6 +127,7 @@
     return derived
 
 
+
 def _deriveQueryEnded(cursor, derived):
     """
     A query which involved some L{IDerivedParameter}s just ended.  Execute any
@@ -142,6 +152,8 @@
     """
     implements(IAsyncTransaction)
 
+    noisy = False
+
     def __init__(self, pool, threadHolder, connection, cursor):
         self._pool       = pool
         self._completed  = "idle"
@@ -169,33 +181,31 @@
         """
         Execute the given SQL on a thread, using a DB-API 2.0 cursor.
 
-        This method is invoked internally on a non-reactor thread, one dedicated
-        to and associated with the current cursor.  It executes the given SQL,
-        re-connecting first if necessary, re-cycling the old connection if
-        necessary, and then, if there are results from the statement (as
-        determined by the DB-API 2.0 'description' attribute) it will fetch all
-        the rows and return them, leaving them to be relayed to
+        This method is invoked internally on a non-reactor thread, one
+        dedicated to and associated with the current cursor.  It executes the
+        given SQL, re-connecting first if necessary, re-cycling the old
+        connection if necessary, and then, if there are results from the
+        statement (as determined by the DB-API 2.0 'description' attribute) it
+        will fetch all the rows and return them, leaving them to be relayed to
         L{_ConnectedTxn.execSQL} via the L{ThreadHolder}.
 
         The rules for possibly reconnecting automatically are: if this is the
         very first statement being executed in this transaction, and an error
         occurs in C{execute}, close the connection and try again.  We will
-        ignore any errors from C{close()} (or C{rollback()}) and log them during
-        this process.  This is OK because adbapi2 always enforces transaction
-        discipline: connections are never in autocommit mode, so if the first
-        statement in a transaction fails, nothing can have happened to the
-        database; as per the ADBAPI spec, a lost connection is a rolled-back
-        transaction.  In the cases where some databases fail to enforce
-        transaction atomicity (i.e. schema manipulations), re-executing the same
-        statement will result, at worst, in a spurious and harmless error (like
-        "table already exists"), not corruption.
+        ignore any errors from C{close()} (or C{rollback()}) and log them
+        during this process.  This is OK because adbapi2 always enforces
+        transaction discipline: connections are never in autocommit mode, so if
+        the first statement in a transaction fails, nothing can have happened
+        to the database; as per the ADBAPI spec, a lost connection is a
+        rolled-back transaction.  In the cases where some databases fail to
+        enforce transaction atomicity (i.e.  schema manipulations),
+        re-executing the same statement will result, at worst, in a spurious
+        and harmless error (like "table already exists"), not corruption.
 
         @param sql: The SQL string to execute.
-
         @type sql: C{str}
 
         @param args: The bind parameters to pass to adbapi, if any.
-
         @type args: C{list} or C{None}
 
         @param raiseOnZeroRowCount: If specified, an exception to raise when no
@@ -203,7 +213,6 @@
 
         @return: all the rows that resulted from execution of the given C{sql},
             or C{None}, if the statement is one which does not produce results.
-
         @rtype: C{list} of C{tuple}, or C{NoneType}
 
         @raise Exception: this function may raise any exception raised by the
@@ -234,9 +243,9 @@
             # happen in the transaction, then the connection has probably gone
             # bad in the meanwhile, and we should try again.
             if wasFirst:
-                # Report the error before doing anything else, since doing other
-                # things may cause the traceback stack to be eliminated if they
-                # raise exceptions (even internally).
+                # Report the error before doing anything else, since doing
+                # other things may cause the traceback stack to be eliminated
+                # if they raise exceptions (even internally).
                 log.err(
                     Failure(),
                     "Exception from execute() on first statement in "
@@ -292,11 +301,9 @@
             return None
 
 
-    noisy = False
-
     def execSQL(self, *args, **kw):
         result = self._holder.submit(
-            lambda : self._reallyExecSQL(*args, **kw)
+            lambda: self._reallyExecSQL(*args, **kw)
         )
         if self.noisy:
             def reportResult(results):
@@ -305,7 +312,7 @@
                     "SQL: %r %r" % (args, kw),
                     "Results: %r" % (results,),
                     "",
-                    ]))
+                ]))
                 return results
             result.addBoth(reportResult)
         return result
@@ -328,8 +335,8 @@
             self._completed = "ended"
             def reallySomething():
                 """
-                Do the database work and set appropriate flags.  Executed in the
-                cursor thread.
+                Do the database work and set appropriate flags.  Executed in
+                the cursor thread.
                 """
                 if self._cursor is None or self._first:
                     return
@@ -384,8 +391,8 @@
 class _NoTxn(object):
     """
     An L{IAsyncTransaction} that indicates a local failure before we could even
-    communicate any statements (or possibly even any connection attempts) to the
-    server.
+    communicate any statements (or possibly even any connection attempts) to
+    the server.
     """
     implements(IAsyncTransaction)
 
@@ -401,7 +408,6 @@
         """
         return fail(ConnectionError(self.reason))
 
-
     execSQL = _everything
     commit  = _everything
     abort   = _everything
@@ -411,9 +417,9 @@
 class _WaitingTxn(object):
     """
     A L{_WaitingTxn} is an implementation of L{IAsyncTransaction} which cannot
-    yet actually execute anything, so it waits and spools SQL requests for later
-    execution.  When a L{_ConnectedTxn} becomes available later, it can be
-    unspooled onto that.
+    yet actually execute anything, so it waits and spools SQL requests for
+    later execution.  When a L{_ConnectedTxn} becomes available later, it can
+    be unspooled onto that.
     """
 
     implements(IAsyncTransaction)
@@ -442,8 +448,7 @@
         a Deferred to not interfere with the originally submitted order of
         commands.
         """
-        while self._spool:
-            yield self._spool.pop(0)
+        return _destructively(self._spool)
 
 
     def _unspool(self, other):
@@ -492,8 +497,9 @@
         """
         Callback for C{commit} and C{abort} Deferreds.
         """
-        for operation in self._hooks:
+        for operation in _destructively(self._hooks):
             yield operation()
+        self.clear()
         returnValue(ignored)
 
 
@@ -501,10 +507,19 @@
         """
         Implement L{IAsyncTransaction.postCommit}.
         """
-        self._hooks.append(operation)
+        if self._hooks is not None:
+            self._hooks.append(operation)
 
 
+    def clear(self):
+        """
+        Remove all hooks from this operation.  Once this is called, no
+        more hooks can be added ever again.
+        """
+        self._hooks = None
 
+
+
 class _CommitAndAbortHooks(object):
     """
     Shared implementation of post-commit and post-abort hooks.
@@ -524,6 +539,7 @@
         """
         pre = self._preCommit.runHooks()
         def ok(ignored):
+            self._abort.clear()
             return doCommit().addCallback(self._commit.runHooks)
         def failed(why):
             return self.abort().addCallback(lambda ignored: why)
@@ -639,9 +655,9 @@
             d = self._currentBlock._startExecuting()
             d.addCallback(self._finishExecuting)
         elif self._blockedQueue is not None:
-            # If there aren't any pending blocks any more, and there are spooled
-            # statements that aren't part of a block, unspool all the statements
-            # that have been held up until this point.
+            # If there aren't any pending blocks any more, and there are
+            # spooled statements that aren't part of a block, unspool all the
+            # statements that have been held up until this point.
             bq = self._blockedQueue
             self._blockedQueue = None
             bq._unspool(self)
@@ -649,8 +665,8 @@
 
     def _finishExecuting(self, result):
         """
-        The active block just finished executing.  Clear it and see if there are
-        more blocks to execute, or if all the blocks are done and we should
+        The active block just finished executing.  Clear it and see if there
+        are more blocks to execute, or if all the blocks are done and we should
         execute any queued free statements.
         """
         self._currentBlock = None
@@ -659,8 +675,9 @@
 
     def commit(self):
         if self._blockedQueue is not None:
-            # We're in the process of executing a block of commands.  Wait until
-            # they're done.  (Commit will be repeated in _checkNextBlock.)
+            # We're in the process of executing a block of commands.  Wait
+            # until they're done.  (Commit will be repeated in
+            # _checkNextBlock.)
             return self._blockedQueue.commit()
         def reallyCommit():
             self._markComplete()
@@ -670,6 +687,8 @@
 
     def abort(self):
         self._markComplete()
+        self._commit.clear()
+        self._preCommit.clear()
         result = super(_SingleTxn, self).abort()
         if self in self._pool._waiting:
             self._stopWaiting()
@@ -785,9 +804,9 @@
 
         @param raiseOnZeroRowCount: see L{IAsyncTransaction.execSQL}
 
-        @param track: an internal parameter; was this called by application code
-            or as part of unspooling some previously-queued requests?  True if
-            application code, False if unspooling.
+        @param track: an internal parameter; was this called by application
+            code or as part of unspooling some previously-queued requests?
+            True if application code, False if unspooling.
         """
         if track and self._ended:
             raise AlreadyFinishedError()
@@ -970,8 +989,8 @@
         super(ConnectionPool, self).stopService()
         self._stopping = True
 
-        # Phase 1: Cancel any transactions that are waiting so they won't try to
-        # eagerly acquire new connections as they flow into the free-list.
+        # Phase 1: Cancel any transactions that are waiting so they won't try
+        # to eagerly acquire new connections as they flow into the free-list.
         while self._waiting:
             waiting = self._waiting[0]
             waiting._stopWaiting()
@@ -991,10 +1010,10 @@
         # ThreadHolders.
         while self._free:
             # Releasing a L{_ConnectedTxn} doesn't automatically recycle it /
-            # remove it the way aborting a _SingleTxn does, so we need to .pop()
-            # here.  L{_ConnectedTxn.stop} really shouldn't be able to fail, as
-            # it's just stopping the thread, and the holder's stop() is
-            # independently submitted from .abort() / .close().
+            # remove it the way aborting a _SingleTxn does, so we need to
+            # .pop() here.  L{_ConnectedTxn.stop} really shouldn't be able to
+            # fail, as it's just stopping the thread, and the holder's stop()
+            # is independently submitted from .abort() / .close().
             yield self._free.pop()._releaseConnection()
 
         tp = self.reactor.getThreadPool()
@@ -1011,8 +1030,8 @@
     def connection(self, label="<unlabeled>"):
         """
         Find and immediately return an L{IAsyncTransaction} object.  Execution
-        of statements, commit and abort on that transaction may be delayed until
-        a real underlying database connection is available.
+        of statements, commit and abort on that transaction may be delayed
+        until a real underlying database connection is available.
 
         @return: an L{IAsyncTransaction}
         """
@@ -1158,6 +1177,7 @@
     def toString(self, inObject):
         return dumps(inObject)
 
+
     def fromString(self, inString):
         return loads(inString)
 
@@ -1193,8 +1213,7 @@
                 if f.type in command.errors:
                     returnValue(f)
                 else:
-                    log.err(Failure(),
-                            "shared database connection pool encountered error")
+                    log.err(Failure(), "shared database connection pool error")
                     raise FailsafeException()
             else:
                 returnValue(val)
@@ -1286,6 +1305,7 @@
     """
 
 
+
 class ConnectionPoolConnection(AMP):
     """
     A L{ConnectionPoolConnection} is a single connection to a
@@ -1402,7 +1422,8 @@
     A client which can execute SQL.
     """
 
-    def __init__(self, dialect=POSTGRES_DIALECT, paramstyle=DEFAULT_PARAM_STYLE):
+    def __init__(self, dialect=POSTGRES_DIALECT,
+                 paramstyle=DEFAULT_PARAM_STYLE):
         # See DEFAULT_PARAM_STYLE FIXME above.
         super(ConnectionPoolClient, self).__init__()
         self._nextID    = count().next
@@ -1428,8 +1449,8 @@
         """
         Create a new networked provider of L{IAsyncTransaction}.
 
-        (This will ultimately call L{ConnectionPool.connection} on the other end
-        of the wire.)
+        (This will ultimately call L{ConnectionPool.connection} on the other
+        end of the wire.)
 
         @rtype: L{IAsyncTransaction}
         """
@@ -1478,12 +1499,12 @@
         @param derived: either C{None} or a C{list} of L{IDerivedParameter}
             providers initially passed into the C{execSQL} that started this
             query.  The values of these object swill mutate the original input
-            parameters to resemble them.  Although L{IDerivedParameter.preQuery}
-            and L{IDerivedParameter.postQuery} are invoked on the other end of
-            the wire, the local objects will be made to appear as though they
-            were called here.
+            parameters to resemble them.  Although
+            L{IDerivedParameter.preQuery} and L{IDerivedParameter.postQuery}
+            are invoked on the other end of the wire, the local objects will be
+            made to appear as though they were called here.
 
-        @param noneResult: should the result of the query be C{None} (i.e. did
+        @param noneResult: should the result of the query be C{None} (i.e.  did
             it not have a C{description} on the cursor).
         """
         if noneResult and not self.results:
@@ -1492,8 +1513,8 @@
             results = self.results
         if derived is not None:
             # 1) Bleecchh.
-            # 2) FIXME: add some direct tests in test_adbapi2, the unit test for
-            # this crosses some abstraction boundaries so it's a little
+            # 2) FIXME: add some direct tests in test_adbapi2, the unit test
+            # for this crosses some abstraction boundaries so it's a little
             # integration-y and in the tests for twext.enterprise.dal
             for remote, local in zip(derived, self._deriveDerived()):
                 local.__dict__ = remote.__dict__
@@ -1519,8 +1540,8 @@
 class _NetTransaction(_CommitAndAbortHooks):
     """
     A L{_NetTransaction} is an L{AMP}-protocol-based provider of the
-    L{IAsyncTransaction} interface.  It sends SQL statements, query results, and
-    commit/abort commands via an AMP socket to a pooling process.
+    L{IAsyncTransaction} interface.  It sends SQL statements, query results,
+    and commit/abort commands via an AMP socket to a pooling process.
     """
 
     implements(IAsyncTransaction)
@@ -1562,7 +1583,8 @@
             args = []
         client = self._client
         queryID = str(client._nextID())
-        query = client._queries[queryID] = _Query(sql, raiseOnZeroRowCount, args)
+        query = client._queries[queryID] = _Query(sql, raiseOnZeroRowCount,
+                                                  args)
         result = (
             client.callRemote(
                 ExecSQL, queryID=queryID, sql=sql, args=args,
@@ -1594,6 +1616,8 @@
 
 
     def abort(self):
+        self._commit.clear()
+        self._preCommit.clear()
         return self._complete(Abort).addCallback(self._abort.runHooks)
 
 
@@ -1617,6 +1641,7 @@
             self.abort().addErrback(shush)
 
 
+
 class _NetCommandBlock(object):
     """
     Net command block.
@@ -1650,10 +1675,10 @@
         """
         Execute some SQL on this command block.
         """
-        if  (self._ended or
-             self._transaction._completed and
-             not self._transaction._committing or
-             self._transaction._committed):
+        if (
+            self._ended or self._transaction._completed and
+            not self._transaction._committing or self._transaction._committed
+        ):
             raise AlreadyFinishedError()
         return self._transaction.execSQL(sql, args, raiseOnZeroRowCount,
                                          self._blockID)
@@ -1670,4 +1695,3 @@
             EndBlock, blockID=self._blockID,
             transactionID=self._transaction._transactionID
         )
-

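The new _destructively() helper and the clear() bookkeeping combine to make the hook lists one-shot: each hook is popped before it runs, so a failure cannot leave it queued to fire again, and clearing poisons the list against late registrations. A condensed sketch of that pattern in isolation (OneShotHooks is a name local to this sketch, not part of the module):

    from twisted.internet.defer import inlineCallbacks, returnValue, succeed

    def _destructively(aList):
        # Mirrors the helper above: consume from the front as we go.
        while aList:
            yield aList.pop(0)

    class OneShotHooks(object):
        def __init__(self):
            self._hooks = []

        def add(self, operation):
            # Silently ignored once clear() has run, as with postCommit above.
            if self._hooks is not None:
                self._hooks.append(operation)

        def clear(self):
            # No more hooks, ever.
            self._hooks = None

        @inlineCallbacks
        def run(self, passthrough=None):
            # Pop each hook before running it, then poison the list; a
            # second run() is a harmless no-op because None is falsy.
            for operation in _destructively(self._hooks):
                yield operation()
            self.clear()
            returnValue(passthrough)

    hooks = OneShotHooks()
    hooks.add(lambda: succeed("post-commit work"))
    d = hooks.run()  # fires the hook, then refuses new registrations
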
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/dal/syntax.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/dal/syntax.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/dal/syntax.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -1686,7 +1686,46 @@
             SQLFragment(' in %s mode' % (self.mode,)))
 
 
+class DatabaseLock(_LockingStatement):
+    """
+    An SQL exclusive session level advisory lock
+    """
 
+    def _toSQL(self, queryGenerator):
+        assert(queryGenerator.dialect == POSTGRES_DIALECT)
+        return SQLFragment('select pg_advisory_lock(1)')
+
+
+    def on(self, txn, *a, **kw):
+        """
+        Override on() to only execute on Postgres
+        """
+        if txn.dialect == POSTGRES_DIALECT:
+            return super(DatabaseLock, self).on(txn, *a, **kw)
+
+        return succeed(None)
+
+
+class DatabaseUnlock(_LockingStatement):
+    """
+    An SQL exclusive session level advisory unlock
+    """
+
+    def _toSQL(self, queryGenerator):
+        assert(queryGenerator.dialect == POSTGRES_DIALECT)
+        return SQLFragment('select pg_advisory_unlock(1)')
+
+
+    def on(self, txn, *a, **kw):
+        """
+        Override on() to only execute on Postgres
+        """
+        if txn.dialect == POSTGRES_DIALECT:
+            return super(DatabaseUnlock, self).on(txn, *a, **kw)
+
+        return succeed(None)
+
+
 class Savepoint(_LockingStatement):
     """
     An SQL 'savepoint' statement.

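A hedged usage sketch for the new statements (withAdvisoryLock is a local helper invented for illustration; txn is assumed to be an IAsyncTransaction from a ConnectionPool, and on non-Postgres dialects both calls are no-ops, so this degrades to plain execution):

    from twisted.internet.defer import inlineCallbacks, returnValue
    from twext.enterprise.dal.syntax import DatabaseLock, DatabaseUnlock

    @inlineCallbacks
    def withAdvisoryLock(txn, work):
        # Serialize a critical section across processes sharing the DB.
        yield DatabaseLock().on(txn)
        try:
            result = yield work(txn)
        finally:
            # Release even if the work failed.
            yield DatabaseUnlock().on(txn)
        returnValue(result)
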
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/dal/test/test_sqlsyntax.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/dal/test/test_sqlsyntax.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/dal/test/test_sqlsyntax.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -25,7 +25,8 @@
     TableMismatch, Parameter, Max, Len, NotEnoughValues,
     Savepoint, RollbackToSavepoint, ReleaseSavepoint, SavepointAction,
     Union, Intersect, Except, SetExpression, DALError,
-    ResultAliasSyntax, Count, QueryGenerator, ALL_COLUMNS)
+    ResultAliasSyntax, Count, QueryGenerator, ALL_COLUMNS,
+    DatabaseLock, DatabaseUnlock)
 from twext.enterprise.dal.syntax import FixedPlaceholder, NumericPlaceholder
 from twext.enterprise.dal.syntax import Function
 from twext.enterprise.dal.syntax import SchemaSyntax
@@ -1314,6 +1315,22 @@
                           SQLFragment("lock table FOO in exclusive mode"))
 
 
+    def test_databaseLock(self):
+        """
+        L{DatabaseLock} generates a ('pg_advisory_lock') statement.
+        """
+        self.assertEquals(DatabaseLock().toSQL(),
+                          SQLFragment("select pg_advisory_lock(1)"))
+
+
+    def test_databaseUnlock(self):
+        """
+        L{DatabaseUnlock} generates a ('pg_advisory_unlock') statement.
+        """
+        self.assertEquals(DatabaseUnlock().toSQL(),
+                          SQLFragment("select pg_advisory_unlock(1)"))
+
+
     def test_savepoint(self):
         """
         L{Savepoint} generates a ('savepoint') statement.

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/ienterprise.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/ienterprise.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/ienterprise.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -67,7 +67,6 @@
         A copy of the 'paramstyle' attribute from a DB-API 2.0 module.
         """)
 
-
     dialect = Attribute(
         """
         A copy of the 'dialect' attribute from the connection pool.  One of the
@@ -100,8 +99,8 @@
     """
     Asynchronous execution of SQL.
 
-    Note that there is no {begin()} method; if an L{IAsyncTransaction} exists at
-    all, it is assumed to have been started.
+    Note that there is no C{begin()} method; if an L{IAsyncTransaction} exists
+    at all, it is assumed to have been started.
     """
 
     def commit():
@@ -167,17 +166,18 @@
 
         This is useful when using database-specific features such as
         sub-transactions where order of execution is important, but where
-        application code may need to perform I/O to determine what SQL, exactly,
-        it wants to execute.  Consider this fairly contrived example for an
-        imaginary database::
+        application code may need to perform I/O to determine what SQL,
+        exactly, it wants to execute.  Consider this fairly contrived example
+        for an imaginary database::
 
             def storeWebPage(url, block):
                 block.execSQL("BEGIN SUB TRANSACTION")
                 got = getPage(url)
                 def gotPage(data):
-                    block.execSQL("INSERT INTO PAGES (TEXT) VALUES (?)", [data])
+                    block.execSQL("INSERT INTO PAGES (TEXT) VALUES (?)",
+                                  [data])
                     block.execSQL("INSERT INTO INDEX (TOKENS) VALUES (?)",
-                                 [tokenize(data)])
+                                  [tokenize(data)])
                     lastStmt = block.execSQL("END SUB TRANSACTION")
                     block.end()
                     return lastStmt
@@ -187,12 +187,12 @@
                             lambda x: txn.commit(), lambda f: txn.abort()
                           )
 
-        This fires off all the C{getPage} requests in parallel, and prepares all
-        the necessary SQL immediately as the results arrive, but executes those
-        statements in order.  In the above example, this makes sure to store the
-        page and its tokens together, another use for this might be to store a
-        computed aggregate (such as a sum) at a particular point in a
-        transaction, without sacrificing parallelism.
+        This fires off all the C{getPage} requests in parallel, and prepares
+        all the necessary SQL immediately as the results arrive, but executes
+        those statements in order.  In the above example, this makes sure to
+        store the page and its tokens together; another use for this might be
+        to store a computed aggregate (such as a sum) at a particular point in
+        a transaction, without sacrificing parallelism.
 
         @rtype: L{ICommandBlock}
         """
@@ -208,21 +208,21 @@
 
     def end():
         """
-        End this command block, allowing other commands queued on the underlying
-        transaction to end.
+        End this command block, allowing other commands queued on the
+        underlying transaction to end.
 
         @note: This is I{not} the same as either L{IAsyncTransaction.commit} or
             L{IAsyncTransaction.abort}, since it does not denote success or
             failure; merely that the command block has completed and other
             statements may now be executed.  Since sub-transactions are a
             database-specific feature, they must be implemented at a
-            higher-level than this facility provides (although this facility may
-            be useful in their implementation).  Also note that, unlike either
-            of those methods, this does I{not} return a Deferred: if you want to
-            know when the block has completed, simply add a callback to the last
-            L{ICommandBlock.execSQL} call executed on this L{ICommandBlock}.
-            (This may be changed in a future version for the sake of
-            convenience, however.)
+            higher level than this facility provides (although this facility
+            may be useful in their implementation).  Also note that, unlike
+            either of those methods, this does I{not} return a Deferred: if you
+            want to know when the block has completed, simply add a callback to
+            the last L{ICommandBlock.execSQL} call executed on this
+            L{ICommandBlock}.  (This may be changed in a future version for the
+            sake of convenience, however.)
         """
 
 
@@ -306,7 +306,8 @@
             L{WorkProposal}
         """
 
+
     def transferProposalCallbacks(self, newQueuer):
         """
         Transfer the registered callbacks to the new queuer.
-        """
\ No newline at end of file
+        """

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/test/test_adbapi2.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/test/test_adbapi2.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/enterprise/test/test_adbapi2.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -18,13 +18,15 @@
 Tests for L{twext.enterprise.adbapi2}.
 """
 
+import gc
+
 from zope.interface.verify import verifyObject
 
 from twisted.python.failure import Failure
 
 from twisted.trial.unittest import TestCase
 
-from twisted.internet.defer import Deferred, fail
+from twisted.internet.defer import Deferred, fail, succeed, inlineCallbacks
 
 from twisted.test.proto_helpers import StringTransport
 
@@ -43,7 +45,37 @@
 from twext.enterprise.fixtures import RollbackFail
 from twext.enterprise.fixtures import CommitFail
 from twext.enterprise.adbapi2 import Commit
+from twext.enterprise.adbapi2 import _HookableOperation
 
+
+class TrashCollector(object):
+    """
+    Test helper for monitoring gc.garbage.
+    """
+    def __init__(self, testCase):
+        self.testCase = testCase
+        testCase.addCleanup(self.checkTrash)
+        self.start()
+
+
+    def start(self):
+        gc.collect()
+        self.garbageStart = len(gc.garbage)
+
+
+    def checkTrash(self):
+        """
+        Ensure that the test has added no additional garbage.
+        """
+        gc.collect()
+        newGarbage = gc.garbage[self.garbageStart:]
+        if newGarbage:
+            # Don't clean up twice.
+            self.start()
+            self.testCase.fail("New garbage: " + repr(newGarbage))
+
+
+
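
On Python 2, C{gc.garbage} only grows when the collector finds a cycle it
refuses to break, most commonly one containing an object with C{__del__}.  A
standalone illustration of the kind of thing checkTrash() would flag, using
only the stdlib:

    import gc

    class Leaky(object):
        def __del__(self):
            pass  # a __del__ inside a cycle is uncollectable on Python 2

    gc.collect()
    before = len(gc.garbage)
    a = Leaky()
    b = Leaky()
    a.partner = b
    b.partner = a  # reference cycle
    del a, b
    gc.collect()
    print gc.garbage[before:]  # the cycle lands here
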
 class AssertResultHelper(object):
     """
     Mixin for asserting about synchronous Deferred results.
@@ -300,8 +332,8 @@
     def test_stopServiceWithSpooled(self):
         """
         When L{ConnectionPool.stopService} is called when spooled transactions
-        are outstanding, any pending L{Deferreds} returned by those transactions
-        will be failed with L{ConnectionError}.
+        are outstanding, any pending L{Deferreds} returned by those
+        transactions will be failed with L{ConnectionError}.
         """
         # Use up the free slots so we have to spool.
         hold = []
@@ -450,7 +482,8 @@
         stopResult = self.resultOf(self.pool.stopService())
         # Sanity check that we haven't actually stopped it yet
         self.assertEquals(abortResult, [])
-        # We haven't fired it yet, so the service had better not have stopped...
+        # We haven't fired it yet, so the service had better not have
+        # stopped...
         self.assertEquals(stopResult, [])
         d.callback(None)
         self.flushHolders()
@@ -465,7 +498,6 @@
         """
         t = self.createTransaction()
         self.resultOf(t.execSQL("echo", []))
-        import gc
         conns = self.factory.connections
         self.assertEquals(len(conns), 1)
         self.assertEquals(conns[0]._rollbackCount, 0)
@@ -477,6 +509,60 @@
         self.assertEquals(conns[0]._commitCount, 0)
 
 
+    def circularReferenceTest(self, finish, hook):
+        """
+        Collecting a completed (committed or aborted) L{IAsyncTransaction}
+        should not leak any circular references.
+        """
+        tc = TrashCollector(self)
+        commitExecuted = []
+        def carefullyManagedScope():
+            t = self.createTransaction()
+            def holdAReference():
+                """
+                This is a hook that holds a reference to 't'.
+                """
+                commitExecuted.append(True)
+                return t.execSQL("teardown", [])
+            hook(t, holdAReference)
+            finish(t)
+        self.failIf(commitExecuted, "Commit hook executed.")
+        carefullyManagedScope()
+        tc.checkTrash()
+
+
+    def test_noGarbageOnCommit(self):
+        """
+        Committing a transaction does not cause gc garbage.
+        """
+        self.circularReferenceTest(lambda txn: txn.commit(),
+                                   lambda txn, hook: txn.preCommit(hook))
+
+
+    def test_noGarbageOnCommitWithAbortHook(self):
+        """
+        Committing a transaction with a postAbort() hook attached does not
+        cause gc garbage.
+        """
+        self.circularReferenceTest(lambda txn: txn.commit(),
+                                   lambda txn, hook: txn.postAbort(hook))
+
+
+    def test_noGarbageOnAbort(self):
+        """
+        Aborting a transaction does not cause gc garbage.
+        """
+        self.circularReferenceTest(lambda txn: txn.abort(),
+                                   lambda txn, hook: txn.preCommit(hook))
+
+
+    def test_noGarbageOnAbortWithPostCommitHook(self):
+        """
+        Aborting a transaction with a postCommit() hook attached does not
+        cause gc garbage.
+        """
+        self.circularReferenceTest(lambda txn: txn.abort(),
+                                   lambda txn, hook: txn.postCommit(hook))
+
+
     def test_tooManyConnectionsWhileOthersFinish(self):
         """
         L{ConnectionPool.connection} will not spawn more than the maximum
@@ -553,10 +639,11 @@
 
     def test_reConnectWhenFirstExecFails(self):
         """
-        Generally speaking, DB-API 2.0 adapters do not provide information about
-        the cause of a failed 'execute' method; they definitely don't provide it
-        in a way which can be identified as related to the syntax of the query,
-        the state of the database itself, the state of the connection, etc.
+        Generally speaking, DB-API 2.0 adapters do not provide information
+        about the cause of a failed 'execute' method; they definitely don't
+        provide it in a way which can be identified as related to the syntax of
+        the query, the state of the database itself, the state of the
+        connection, etc.
 
         Therefore the best general heuristic for whether the connection to the
         database has been lost and needs to be re-established is to catch
@@ -564,8 +651,8 @@
         transaction.
         """
         # Allow 'connect' to succeed.  This should behave basically the same
-        # whether connect() happened to succeed in some previous transaction and
-        # it's recycling the underlying transaction, or connect() just
+        # whether connect() happened to succeed in some previous transaction
+        # and it's recycling the underlying transaction, or connect() just
         # succeeded.  Either way you just have a _SingleTxn wrapping a
         # _ConnectedTxn.
         txn = self.createTransaction()
@@ -636,8 +723,8 @@
         """
         class BindingSpecificException(Exception):
             """
-            Exception that's a placeholder for something that a database binding
-            might raise.
+            Exception that's a placeholder for something that a database
+            binding might raise.
             """
         def alsoFailClose(factory):
             factory.childCloseWillFail(BindingSpecificException())
@@ -738,8 +825,8 @@
         therefore pointless, and can be ignored.  Furthermore, actually
         executing the commit and propagating a possible connection-oriented
         error causes clients to see errors, when, if those clients had actually
-        executed any statements, the connection would have been recycled and the
-        statement transparently re-executed by the logic tested by
+        executed any statements, the connection would have been recycled and
+        the statement transparently re-executed by the logic tested by
         L{test_reConnectWhenFirstExecFails}.
         """
         txn = self.createTransaction()
@@ -758,12 +845,12 @@
 
     def test_reConnectWhenSecondExecFailsThenFirstExecFails(self):
         """
-        Other connection-oriented errors might raise exceptions if they occur in
-        the middle of a transaction, but that should cause the error to be
-        caught, the transaction to be aborted, and the (closed) connection to be
-        recycled, where the next transaction that attempts to do anything with
-        it will encounter the error immediately and discover it needs to be
-        recycled.
+        Other connection-oriented errors might raise exceptions if they occur
+        in the middle of a transaction, but that should cause the error to be
+        caught, the transaction to be aborted, and the (closed) connection to
+        be recycled, where the next transaction that attempts to do anything
+        with it will encounter the error immediately and discover it needs to
+        be recycled.
 
         It would be better if this behavior were invisible, but that could only
         be accomplished with more precise database exceptions.  We may come up
@@ -780,9 +867,9 @@
         self.assertEquals(self.factory.connections[0].executions, 2)
         # Reconnection should work exactly as before.
         self.assertEquals(self.factory.connections[0].closed, False)
-        # Application code has to roll back its transaction at this point, since
-        # it failed (and we don't necessarily know why it failed: not enough
-        # information).
+        # Application code has to roll back its transaction at this point,
+        # since it failed (and we don't necessarily know why it failed: not
+        # enough information).
         self.resultOf(txn.abort())
         self.factory.connections[0].executions = 0 # re-set for next test
         self.assertEquals(len(self.factory.connections), 1)
@@ -888,7 +975,7 @@
         self.assertEquals(len(e), 1)
 
 
-    def test_twoCommandBlocks(self, flush=lambda : None):
+    def test_twoCommandBlocks(self, flush=lambda: None):
         """
         When execution of one command block is complete, it will proceed to the
         next queued block, then to regular SQL executed on the transaction.
@@ -932,9 +1019,9 @@
     def test_commandBlockDelaysCommit(self):
         """
         Some command blocks need to run asynchronously, without the overall
-        transaction-managing code knowing how far they've progressed.  Therefore
-        when you call {IAsyncTransaction.commit}(), it should not actually take
-        effect if there are any pending command blocks.
+        transaction-managing code knowing how far they've progressed.
+        Therefore, when you call L{IAsyncTransaction.commit}(), it should not
+        actually take effect if there are any pending command blocks.
         """
         txn = self.createTransaction()
         block = txn.commandBlock()
@@ -1078,8 +1165,8 @@
 
     def pump(self):
         """
-        Deliver all input from the client to the server, then from the server to
-        the client.
+        Deliver all input from the client to the server, then from the server
+        to the client.
         """
         a = self.moveData(self.c2s)
         b = self.moveData(self.s2c)
@@ -1187,3 +1274,31 @@
         self.assertEquals(len(self.factory.connections), 1)
 
 
+class HookableOperationTests(TestCase):
+    """
+    Tests for L{_HookableOperation}.
+    """
+
+    @inlineCallbacks
+    def test_clearPreventsSubsequentAddHook(self):
+        """
+        After clear() or runHooks() has been called, subsequent calls to
+        addHook() are no-ops.
+        """
+        def hook():
+            return succeed(None)
+
+        hookOp = _HookableOperation()
+        hookOp.addHook(hook)
+        self.assertEquals(len(hookOp._hooks), 1)
+        hookOp.clear()
+        self.assertEquals(hookOp._hooks, None)
+
+        hookOp = _HookableOperation()
+        hookOp.addHook(hook)
+        yield hookOp.runHooks()
+        self.assertEquals(hookOp._hooks, None)
+        hookOp.addHook(hook)
+        self.assertEquals(hookOp._hooks, None)
+
+
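
The behaviour exercised here can be summarized by a tiny stand-alone model
(a sketch of the semantics only, not the real _HookableOperation, whose
runHooks() is Deferred-based):

    class MiniHookable(object):
        def __init__(self):
            self._hooks = []

        def addHook(self, hook):
            if self._hooks is not None:  # no-op after clear()/runHooks()
                self._hooks.append(hook)

        def clear(self):
            self._hooks = None

        def runHooks(self):
            for hook in self._hooks:
                hook()
            self._hooks = None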

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/internet/sendfdport.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/internet/sendfdport.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/internet/sendfdport.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -95,6 +95,7 @@
     used to transmit sockets to a subprocess.
 
     @ivar skt: the UNIX socket used as the sendmsg() transport.
+    @type skt: L{socket.socket}
 
     @ivar outgoingSocketQueue: an outgoing queue of sockets to send to the
         subprocess, along with their descriptions (strings describing their
@@ -107,7 +108,11 @@
         from the subprocess: this is an application-specific indication of how
         ready this subprocess is to receive more connections.  A typical usage
         would be to count the open connections: this is what is passed to
         L{IStatusWatcher.statusFromMessage}.
-    @type status: C{str}
+    @type status: See L{IStatusWatcher} for an explanation of which methods
+        determine this type.
+
+    @ivar dispatcher: The socket dispatcher that owns this L{_SubprocessSocket}
+    @type dispatcher: L{InheritedSocketDispatcher}
     """
 
     def __init__(self, dispatcher, skt, status):
@@ -117,6 +122,7 @@
         self.skt = skt          # XXX needs to be set non-blocking by somebody
         self.fileno = skt.fileno
         self.outgoingSocketQueue = []
+        self.pendingCloseSocketQueue = []
 
 
     def sendSocketToPeer(self, skt, description):
@@ -127,7 +133,7 @@
         self.startWriting()
 
 
-    def doRead(self):
+    def doRead(self, recvmsg=recvmsg):
         """
         Receive a status / health message and record it.
         """
@@ -137,10 +143,12 @@
             if se.errno not in (EAGAIN, ENOBUFS):
                 raise
         else:
-            self.dispatcher.statusMessage(self, data)
+            closeCount = self.dispatcher.statusMessage(self, data)
+            for ignored in xrange(closeCount):
+                self.pendingCloseSocketQueue.pop(0).close()
 
 
-    def doWrite(self):
+    def doWrite(self, sendfd=sendfd):
         """
         Transmit as many queued pending file descriptors as we can.
         """
@@ -153,6 +161,10 @@
                     self.outgoingSocketQueue.insert(0, (skt, desc))
                     return
                 raise
+
+            # Ready to close this socket; wait until it is acknowledged.
+            self.pendingCloseSocketQueue.append(skt)
+
         if not self.outgoingSocketQueue:
             self.stopWriting()
 
@@ -185,7 +197,7 @@
         than the somewhat more abstract language that would be accurate.
     """
 
-    def initialStatus():
+    def initialStatus(): #@NoSelf
         """
         A new socket was created and added to the dispatcher.  Compute an
         initial value for its status.
@@ -194,7 +206,7 @@
         """
 
 
-    def newConnectionStatus(previousStatus):
+    def newConnectionStatus(previousStatus): #@NoSelf
         """
         A new connection was sent to a given socket.  Compute its status based
         on the previous status of that socket.
@@ -206,7 +218,7 @@
         """
 
 
-    def statusFromMessage(previousStatus, message):
+    def statusFromMessage(previousStatus, message): #@NoSelf
         """
         A status message was received by a worker.  Convert the previous status
         value (returned from L{newConnectionStatus}, L{initialStatus}, or
@@ -220,7 +232,18 @@
         """
 
 
+    def closeCountFromStatus(previousStatus): #@NoSelf
+        """
+        Based on a status previously returned from a method on this
+        L{IStatusWatcher}, determine how many sockets may be closed and what
+        the new status is.
+
+        @return: a 2-tuple of C{number of sockets that may safely be closed},
+            C{new status}.
+        @rtype: 2-tuple of (C{int}, C{<opaque>})
+        """
+
+
+
 class InheritedSocketDispatcher(object):
     """
     Used by one or more L{InheritingProtocolFactory}s, this keeps track of a
@@ -260,10 +283,11 @@
         The status of a connection has changed; update all registered status
         change listeners.
         """
-        subsocket.status = self.statusWatcher.statusFromMessage(
-            subsocket.status, message
-        )
-        self.statusWatcher.statusesChanged(self.statuses)
+        watcher = self.statusWatcher
+        status = watcher.statusFromMessage(subsocket.status, message)
+        closeCount, subsocket.status = watcher.closeCountFromStatus(status)
+        watcher.statusesChanged(self.statuses)
+        return closeCount
 
 
     def sendFileDescriptor(self, skt, description):
@@ -291,7 +315,7 @@
         # XXX Maybe want to send along 'description' or 'skt' or some
         # properties thereof? -glyph
         selectedSocket.status = self.statusWatcher.newConnectionStatus(
-           selectedSocket.status
+            selectedSocket.status
         )
         self.statusWatcher.statusesChanged(self.statuses)
 
@@ -305,7 +329,7 @@
             subSocket.startReading()
 
 
-    def addSocket(self):
+    def addSocket(self, socketpair=lambda: socketpair(AF_UNIX, SOCK_DGRAM)):
         """
         Add a C{sendmsg()}-oriented AF_UNIX socket to the pool of sockets being
         used for transmitting file descriptors to child processes.
@@ -314,7 +338,7 @@
             C{fileno()} as part of the C{childFDs} argument to
             C{spawnProcess()}, then close it.
         """
-        i, o = socketpair(AF_UNIX, SOCK_DGRAM)
+        i, o = socketpair()
         i.setblocking(False)
         o.setblocking(False)
         a = _SubprocessSocket(self, o, self.statusWatcher.initialStatus())
@@ -412,4 +436,3 @@
         """
         self.statusQueue.append(statusMessage)
         self.startWriting()
-
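
A hypothetical IStatusWatcher showing the shape of the new
closeCountFromStatus() contract.  The status value used here, a
(connections, acknowledgedCloses) pair, is an invented convention; real
watchers may use any opaque type:

    from zope.interface import implementer
    from twext.internet.sendfdport import IStatusWatcher

    @implementer(IStatusWatcher)
    class CountingWatcher(object):
        def initialStatus(self):
            return (0, 0)

        def newConnectionStatus(self, previousStatus):
            count, closes = previousStatus
            return (count + 1, closes)

        def statusFromMessage(self, previousStatus, message):
            # Assumption: the worker sends "-" when it is done with a
            # connection it was handed.
            count, closes = previousStatus
            if message == "-":
                return (count - 1, closes + 1)
            return (count, closes)

        def closeCountFromStatus(self, previousStatus):
            # Everything the worker has acknowledged may now be closed.
            count, closes = previousStatus
            return (closes, (count, 0))

        def statusesChanged(self, statuses):
            pass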

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/internet/test/test_sendfdport.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/internet/test/test_sendfdport.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/internet/test/test_sendfdport.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -23,14 +23,25 @@
 import os
 import fcntl
 
+from zope.interface.verify import verifyClass
+from zope.interface import implementer
+
 from twext.internet.sendfdport import InheritedSocketDispatcher
 
 from twext.web2.metafd import ConnectionLimiter
 from twisted.internet.interfaces import IReactorFDSet
 from twisted.trial.unittest import TestCase
-from zope.interface import implementer
 
-@implementer(IReactorFDSet)
+def verifiedImplementer(interface):
+    def _(cls):
+        result = implementer(interface)(cls)
+        verifyClass(interface, result)
+        return result
+    return _
+
+
+
+@verifiedImplementer(IReactorFDSet)
 class ReaderAdder(object):
 
     def __init__(self):
@@ -50,7 +61,23 @@
         self.writers.append(writer)
 
 
+    def removeAll(self):
+        self.__init__()
 
+
+    def getWriters(self):
+        return self.writers[:]
+
+
+    def removeReader(self, reader):
+        self.readers.remove(reader)
+
+
+    def removeWriter(self, writer):
+        self.writers.remove(writer)
+
+
+
 def isNonBlocking(skt):
     """
     Determine if the given socket is blocking or not.
@@ -66,22 +93,11 @@
 
 
 
-from zope.interface.verify import verifyClass
-from zope.interface import implementer
-
-def verifiedImplementer(interface):
-    def _(cls):
-        result = implementer(interface)(cls)
-        verifyClass(interface, result)
-        return result
-    return _
-
-
-
 @verifiedImplementer(IStatusWatcher)
 class Watcher(object):
     def __init__(self, q):
         self.q = q
+        self._closeCounter = 1
 
 
     def newConnectionStatus(self, previous):
@@ -100,7 +116,13 @@
         return 0
 
 
+    def closeCountFromStatus(self, status):
+        result = (self._closeCounter, status)
+        self._closeCounter += 1
+        return result
 
+
+
 class InheritedSocketDispatcherTests(TestCase):
     """
     Inherited socket dispatcher tests.
@@ -110,6 +132,51 @@
         self.dispatcher.reactor = ReaderAdder()
 
 
+    def test_closeSomeSockets(self):
+        """
+        L{InheritedSocketDispatcher} determines how many sockets to close from
+        L{IStatusWatcher.closeCountFromStatus}.
+        """
+        self.dispatcher.statusWatcher = Watcher([])
+        class SocketForClosing(object):
+            blocking = True
+            closed = False
+            def setblocking(self, b):
+                self.blocking = b
+            def fileno(self):
+                return object()
+            def close(self):
+                self.closed = True
+
+        one = SocketForClosing()
+        two = SocketForClosing()
+        three = SocketForClosing()
+
+        self.dispatcher.addSocket(
+            lambda: (SocketForClosing(), SocketForClosing())
+        )
+
+        self.dispatcher.sendFileDescriptor(one, "one")
+        self.dispatcher.sendFileDescriptor(two, "two")
+        self.dispatcher.sendFileDescriptor(three, "three")
+        def sendfd(unixSocket, tcpSocket, description):
+            pass
+        # Put something into the socket-close queue.
+        self.dispatcher._subprocessSockets[0].doWrite(sendfd)
+        # Nothing closed yet.
+        self.assertEquals(one.closed, False)
+        self.assertEquals(two.closed, False)
+        self.assertEquals(three.closed, False)
+
+        def recvmsg(fileno):
+            return 'data', 0, 0
+        self.dispatcher._subprocessSockets[0].doRead(recvmsg)
+        # One socket closed.
+        self.assertEquals(one.closed, True)
+        self.assertEquals(two.closed, False)
+        self.assertEquals(three.closed, False)
+
+
     def test_nonBlocking(self):
         """
         Creating a L{_SubprocessSocket} via
@@ -165,6 +232,7 @@
         message = "whatever"
         # Need to have a socket that will accept the descriptors.
         dispatcher.addSocket()
-        dispatcher.statusMessage(dispatcher._subprocessSockets[0], message)
-        dispatcher.statusMessage(dispatcher._subprocessSockets[0], message)
+        subskt = dispatcher._subprocessSockets[0]
+        dispatcher.statusMessage(subskt, message)
+        dispatcher.statusMessage(subskt, message)
         self.assertEquals(q, [[-1], [-2]])
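
The verifiedImplementer() helper moves interface verification to class
definition time, so a missing method fails at import rather than in a later
test.  A toy demonstration (IQuacker and Duck are invented):

    from zope.interface import Interface, implementer
    from zope.interface.verify import verifyClass

    def verifiedImplementer(interface):
        def _(cls):
            result = implementer(interface)(cls)
            verifyClass(interface, result)  # raises if cls is incomplete
            return result
        return _

    class IQuacker(Interface):
        def quack():
            "Make duck noises."

    @verifiedImplementer(IQuacker)
    class Duck(object):
        def quack(self):
            return "quack"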

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/patches.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/patches.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/patches.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -26,6 +26,8 @@
 from twisted.python.versions import Version
 from twisted.python.modules import getModule
 
+
+
 def _hasIPv6ClientSupport():
     """
     Does the loaded version of Twisted have IPv6 client support?
@@ -34,8 +36,9 @@
     if version > lastVersionWithoutIPv6Clients:
         return True
     elif version == lastVersionWithoutIPv6Clients:
-        # It could be a snapshot of trunk or a branch with this bug fixed. Don't
-        # load the module, though, as that would be a bunch of unnecessary work.
+        # It could be a snapshot of trunk or a branch with this bug fixed.
+        # Don't load the module, though, as that would be a bunch of
+        # unnecessary work.
         return "_resolveIPv6" in (getModule("twisted.internet.tcp")
                                   .filePath.getContent())
     else:
@@ -45,8 +48,8 @@
 
 def _addBackports():
     """
-    We currently require 2 backported bugfixes from a future release of Twisted,
-    for IPv6 support:
+    We currently require 2 backported bugfixes from a future release of
+    Twisted, for IPv6 support:
 
         - U{IPv6 client support <http://tm.tl/5085>}
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/python/log.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/python/log.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/python/log.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -34,7 +34,8 @@
         log = Logger()
 
         def oops(self, data):
-            self.log.error("Oops! Invalid data from server: {data!r}", data=data)
+            self.log.error("Oops! Invalid data from server: {data!r}",
+                           data=data)
 
 C{Logger}s have namespaces, for which logging can be configured independently.
 Namespaces may be specified by passing in a C{namespace} argument to L{Logger}
@@ -76,14 +77,16 @@
 from zope.interface import Interface, implementer
 from twisted.python.constants import NamedConstant, Names
 from twisted.python.failure import Failure
-from twisted.python.reflect import safe_str
+from twisted.python.reflect import safe_str, safe_repr
 import twisted.python.log
 from twisted.python.log import msg as twistedLogMessage
 from twisted.python.log import addObserver, removeObserver
 from twisted.python.log import ILogObserver as ILegacyLogObserver
 
+OBSERVER_REMOVED = (
+    "Temporarily removing observer {observer} due to exception: {e}"
+)
 
-
 #
 # Log level definitions
 #
@@ -150,24 +153,27 @@
         """
         return cls._levelPriorities[constant]
 
-LogLevel._levelPriorities = dict((constant, idx)
-                                 for (idx, constant) in
-                                     (enumerate(LogLevel.iterconstants())))
 
+LogLevel._levelPriorities = dict(
+    (constant, idx) for (idx, constant) in
+    (enumerate(LogLevel.iterconstants()))
+)
 
 
+
 #
 # Mappings to Python's logging module
 #
 pythonLogLevelMapping = {
-    LogLevel.debug   : logging.DEBUG,
-    LogLevel.info    : logging.INFO,
-    LogLevel.warn    : logging.WARNING,
-    LogLevel.error   : logging.ERROR,
-   #LogLevel.critical: logging.CRITICAL,
+    LogLevel.debug: logging.DEBUG,
+    LogLevel.info:  logging.INFO,
+    LogLevel.warn:  logging.WARNING,
+    LogLevel.error: logging.ERROR,
+    # LogLevel.critical: logging.CRITICAL,
 }
 
 
+
 ##
 # Loggers
 ##
@@ -206,21 +212,20 @@
         return formatWithCall(format, event)
 
     except BaseException as e:
-        try:
-            return formatUnformattableEvent(event, e)
-        except:
-            return u"MESSAGE LOST"
+        return formatUnformattableEvent(event, e)
 
 
 
 def formatUnformattableEvent(event, error):
     """
-    Formats an event as a L{unicode} that describes the event
-    generically and a formatting error.
+    Formats an event as a L{unicode} that describes the event generically,
+    along with the error that prevented it from being formatted normally.
 
     @param event: a logging event
+    @type event: L{dict}
 
     @param error: the formatting error
+    @type error: L{Exception}
 
     @return: a L{unicode}
     """
@@ -229,35 +234,22 @@
             u"Unable to format event {event!r}: {error}"
             .format(event=event, error=error)
         )
-    except BaseException as error:
-        #
+    except BaseException:
         # Yikes, something really nasty happened.
         #
-        # Try to recover as much formattable data as possible;
-        # hopefully at least the namespace is sane, which will
-        # help you find the offending logger.
-        #
-        try:
-            items = []
+        # Try to recover as much formattable data as possible; hopefully at
+        # least the namespace is sane, which will help you find the offending
+        # logger.
+        failure = Failure()
 
-            for key, value in event.items():
-                try:
-                    items.append(u"{key!r} = ".format(key=key))
-                except:
-                    items.append(u"<UNFORMATTABLE KEY> = ")
-                try:
-                    items.append(u"{value!r}".format(value=value))
-                except:
-                    items.append(u"<UNFORMATTABLE VALUE>")
+        text = ", ".join(" = ".join((safe_repr(key), safe_repr(value)))
+                         for key, value in event.items())
 
-            text = ", ".join(items)
-        except:
-            text = ""
-
         return (
-            u"MESSAGE LOST: Unformattable object logged: {error}\n"
-            u"Recoverable data: {text}"
-            .format(text=text)
+            u"MESSAGE LOST: unformattable object logged: {error}\n"
+            u"Recoverable data: {text}\n"
+            u"Exception during formatting:\n{failure}"
+            .format(error=safe_repr(error), failure=failure, text=text)
         )
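
safe_repr() (from twisted.python.reflect) is what makes this recovery path
safe: it returns a best-effort string even when repr() itself raises.  For
example:

    from twisted.python.reflect import safe_repr

    class Hostile(object):
        def __repr__(self):
            raise RuntimeError("nope")

    print safe_repr(Hostile())  # a placeholder description; never raises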
 
 
@@ -344,28 +336,24 @@
         @param kwargs: additional keyword parameters to include with
             the event.
         """
-        if level not in LogLevel.iterconstants(): # FIXME: Updated Twisted supports 'in' on constants container
+        # FIXME: Updated Twisted supports 'in' on constants container
+        if level not in LogLevel.iterconstants():
             self.failure(
                 "Got invalid log level {invalidLevel!r} in {logger}.emit().",
                 Failure(InvalidLogLevelError(level)),
-                invalidLevel = level,
-                logger = self,
+                invalidLevel=level,
+                logger=self,
             )
             #level = LogLevel.error
             # FIXME: continue to emit?
             return
 
-        event = kwargs
-        event.update(
-            log_logger    = self,
-            log_level     = level,
-            log_namespace = self.namespace,
-            log_source    = self.source,
-            log_format    = format,
-            log_time      = time.time(),
+        kwargs.update(
+            log_logger=self, log_level=level, log_namespace=self.namespace,
+            log_source=self.source, log_format=format, log_time=time.time(),
         )
 
-        self.publisher(event)
+        self.publisher(kwargs)
 
 
     def failure(self, format, failure=None, level=LogLevel.error, **kwargs):
@@ -381,8 +369,9 @@
 
         or::
 
-            d = deferred_frob(knob)
-            d.addErrback(lambda f: log.failure, "While frobbing {knob}", f, knob=knob)
+            d = deferredFrob(knob)
+            d.addErrback(lambda f: log.failure("While frobbing {knob}",
+                                               f, knob=knob))
 
         @param format: a message format using new-style (PEP 3101)
             formatting.  The logging event (which is a L{dict}) is
@@ -397,7 +386,7 @@
             event.
         """
         if failure is None:
-            failure=Failure()
+            failure = Failure()
 
         self.emit(level, format, log_failure=failure, **kwargs)
 
@@ -410,10 +399,10 @@
     """
 
     def __init__(self, logger=None):
-        if logger is not None:
+        if logger is None:
+            self.newStyleLogger = Logger(Logger._namespaceFromCallingContext())
+        else:
             self.newStyleLogger = logger
-        else:
-            self.newStyleLogger = Logger(Logger._namespaceFromCallingContext())
 
 
     def __getattribute__(self, name):
@@ -446,10 +435,12 @@
             _stuff = Failure(_stuff)
 
         if isinstance(_stuff, Failure):
-            self.newStyleLogger.emit(LogLevel.error, failure=_stuff, why=_why, isError=1, **kwargs)
+            self.newStyleLogger.emit(LogLevel.error, failure=_stuff, why=_why,
+                                     isError=1, **kwargs)
         else:
             # We got called with an invalid _stuff.
-            self.newStyleLogger.emit(LogLevel.error, repr(_stuff), why=_why, isError=1, **kwargs)
+            self.newStyleLogger.emit(LogLevel.error, repr(_stuff), why=_why,
+                                     isError=1, **kwargs)
 
 
 
@@ -475,13 +466,15 @@
 
     setattr(Logger, level.name, log_emit)
 
-for level in LogLevel.iterconstants(): 
-    bindEmit(level)
 
-del level
 
+def _bindLevels():
+    for level in LogLevel.iterconstants():
+        bindEmit(level)
 
+_bindLevels()
 
+
 #
 # Observers
 #
@@ -545,11 +538,11 @@
             pass
 
 
-    def __call__(self, event): 
+    def __call__(self, event):
         for observer in self.observers:
             try:
                 observer(event)
-            except:
+            except BaseException as e:
                 #
                 # We have to remove the offending observer because
                 # we're going to badmouth it to all of its friends
@@ -558,8 +551,8 @@
                 #
                 self.removeObserver(observer)
                 try:
-                    self.log.failure("Observer {observer} raised an exception; removing.", observer=observer)
-                except:
+                    self.log.failure(OBSERVER_REMOVED, observer=observer, e=e)
+                except BaseException:
                     pass
                 finally:
                     self.addObserver(observer)
@@ -639,6 +632,8 @@
     """
     L{ILogFilterPredicate} that filters out events with a log level
     lower than the log level for the event's namespace.
+
+    Events that do not have a log level or namespace are also dropped.
     """
 
     def __init__(self):
@@ -701,11 +696,15 @@
 
 
     def __call__(self, event):
-        level     = event["log_level"]
-        namespace = event["log_namespace"]
+        level     = event.get("log_level", None)
+        namespace = event.get("log_namespace", None)
 
-        if (LogLevel._priorityForLevel(level) <
-            LogLevel._priorityForLevel(self.logLevelForNamespace(namespace))):
+        if (
+            level is None or
+            namespace is None or
+            LogLevel._priorityForLevel(level) <
+            LogLevel._priorityForLevel(self.logLevelForNamespace(namespace))
+        ):
             return PredicateResult.no
 
         return PredicateResult.maybe
@@ -725,8 +724,8 @@
         """
         self.legacyObserver = legacyObserver
 
-    
-    def __call__(self, event): 
+
+    def __call__(self, event):
         prefix = "[{log_namespace}#{log_level.name}] ".format(**event)
 
         level = event["log_level"]
@@ -756,7 +755,9 @@
         if "log_failure" in event:
             event["failure"] = event["log_failure"]
             event["isError"] = 1
-            event["why"] = "{prefix}{message}".format(prefix=prefix, message=formatEvent(event))
+            event["why"] = "{prefix}{message}".format(
+                prefix=prefix, message=formatEvent(event)
+            )
 
         self.legacyObserver(**event)
 
@@ -814,7 +815,8 @@
         self.legacyLogObserver = LegacyLogObserver(twistedLogMessage)
         self.filteredPublisher = LogPublisher(self.legacyLogObserver)
         self.levels            = LogLevelFilterPredicate()
-        self.filters           = FilteringLogObserver(self.filteredPublisher, (self.levels,))
+        self.filters           = FilteringLogObserver(self.filteredPublisher,
+                                                      (self.levels,))
         self.rootPublisher     = LogPublisher(self.filters)
 
 
@@ -862,6 +864,7 @@
     def __init__(self, submapping):
         self._submapping = submapping
 
+
     def __getitem__(self, key):
         callit = key.endswith(u"()")
         realKey = key[:-2] if callit else key
@@ -871,6 +874,7 @@
         return value
 
 
+
 def formatWithCall(formatString, mapping):
     """
     Format a string like L{unicode.format}, but:
@@ -930,16 +934,20 @@
             continue
 
         for name, obj in module.__dict__.iteritems():
-            legacyLogger = LegacyLogger(logger=Logger(namespace=module.__name__))
+            newLogger = Logger(namespace=module.__name__)
+            legacyLogger = LegacyLogger(logger=newLogger)
 
             if obj is twisted.python.log:
-                log.info("Replacing Twisted log module object {0} in {1}".format(name, module.__name__))
+                log.info("Replacing Twisted log module object {0} in {1}"
+                         .format(name, module.__name__))
                 setattr(module, name, legacyLogger)
             elif obj is twisted.python.log.msg:
-                log.info("Replacing Twisted log.msg object {0} in {1}".format(name, module.__name__))
+                log.info("Replacing Twisted log.msg object {0} in {1}"
+                         .format(name, module.__name__))
                 setattr(module, name, legacyLogger.msg)
             elif obj is twisted.python.log.err:
-                log.info("Replacing Twisted log.err object {0} in {1}".format(name, module.__name__))
+                log.info("Replacing Twisted log.err object {0} in {1}"
+                         .format(name, module.__name__))
                 setattr(module, name, legacyLogger.err)
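
The formatWithCall() behaviour relied on throughout this module, in one
line (the same semantics the tests below exercise):

    from twext.python.log import formatWithCall

    print formatWithCall(u"Hello, {world}. {callme()}.",
                         dict(world="earth", callme=lambda: "maybe"))
    # Hello, earth. maybe.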
 
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/python/test/test_log.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/python/test/test_log.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/python/test/test_log.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -23,11 +23,11 @@
 from twext.python.log import (
     LogLevel, InvalidLogLevelError,
     pythonLogLevelMapping,
-    formatEvent, formatWithCall,
+    formatEvent, formatUnformattableEvent, formatWithCall,
     Logger, LegacyLogger,
-    ILogObserver, LogPublisher,
+    ILogObserver, LogPublisher, DefaultLogPublisher,
     FilteringLogObserver, PredicateResult,
-    LogLevelFilterPredicate,
+    LogLevelFilterPredicate, OBSERVER_REMOVED
 )
 
 
@@ -59,7 +59,7 @@
             twistedLogging.removeObserver(observer)
 
         self.emitted = {
-            "level" : level,
+            "level":  level,
             "format": format,
             "kwargs": kwargs,
         }
@@ -67,8 +67,8 @@
 
 
 class TestLegacyLogger(LegacyLogger):
-    def __init__(self):
-        LegacyLogger.__init__(self, logger=TestLogger())
+    def __init__(self, logger=TestLogger()):
+        LegacyLogger.__init__(self, logger=logger)
 
 
 
@@ -131,7 +131,8 @@
         """
         self.failUnless(logLevelForNamespace(None), defaultLogLevel)
         self.failUnless(logLevelForNamespace(""), defaultLogLevel)
-        self.failUnless(logLevelForNamespace("rocker.cool.namespace"), defaultLogLevel)
+        self.failUnless(logLevelForNamespace("rocker.cool.namespace"),
+                        defaultLogLevel)
 
 
     def test_setLogLevel(self):
@@ -142,22 +143,30 @@
         setLogLevelForNamespace("twext.web2", LogLevel.debug)
         setLogLevelForNamespace("twext.web2.dav", LogLevel.warn)
 
-        self.assertEquals(logLevelForNamespace(None                        ), LogLevel.error)
-        self.assertEquals(logLevelForNamespace("twisted"                   ), LogLevel.error)
-        self.assertEquals(logLevelForNamespace("twext.web2"                ), LogLevel.debug)
-        self.assertEquals(logLevelForNamespace("twext.web2.dav"            ), LogLevel.warn)
-        self.assertEquals(logLevelForNamespace("twext.web2.dav.test"       ), LogLevel.warn)
-        self.assertEquals(logLevelForNamespace("twext.web2.dav.test1.test2"), LogLevel.warn)
+        self.assertEquals(logLevelForNamespace(None),
+                          LogLevel.error)
+        self.assertEquals(logLevelForNamespace("twisted"),
+                          LogLevel.error)
+        self.assertEquals(logLevelForNamespace("twext.web2"),
+                          LogLevel.debug)
+        self.assertEquals(logLevelForNamespace("twext.web2.dav"),
+                          LogLevel.warn)
+        self.assertEquals(logLevelForNamespace("twext.web2.dav.test"),
+                          LogLevel.warn)
+        self.assertEquals(logLevelForNamespace("twext.web2.dav.test1.test2"),
+                          LogLevel.warn)
 
 
     def test_setInvalidLogLevel(self):
         """
         Can't pass invalid log levels to setLogLevelForNamespace().
         """
-        self.assertRaises(InvalidLogLevelError, setLogLevelForNamespace, "twext.web2", object())
+        self.assertRaises(InvalidLogLevelError, setLogLevelForNamespace,
+                          "twext.web2", object())
 
         # Level must be a constant, not the name of a constant
-        self.assertRaises(InvalidLogLevelError, setLogLevelForNamespace, "twext.web2", "debug")
+        self.assertRaises(InvalidLogLevelError, setLogLevelForNamespace,
+                          "twext.web2", "debug")
 
 
     def test_clearLogLevels(self):
@@ -169,11 +178,14 @@
 
         clearLogLevels()
 
-        self.assertEquals(logLevelForNamespace("twisted"                   ), defaultLogLevel)
-        self.assertEquals(logLevelForNamespace("twext.web2"                ), defaultLogLevel)
-        self.assertEquals(logLevelForNamespace("twext.web2.dav"            ), defaultLogLevel)
-        self.assertEquals(logLevelForNamespace("twext.web2.dav.test"       ), defaultLogLevel)
-        self.assertEquals(logLevelForNamespace("twext.web2.dav.test1.test2"), defaultLogLevel)
+        self.assertEquals(logLevelForNamespace("twisted"), defaultLogLevel)
+        self.assertEquals(logLevelForNamespace("twext.web2"), defaultLogLevel)
+        self.assertEquals(logLevelForNamespace("twext.web2.dav"),
+                          defaultLogLevel)
+        self.assertEquals(logLevelForNamespace("twext.web2.dav.test"),
+                          defaultLogLevel)
+        self.assertEquals(logLevelForNamespace("twext.web2.dav.test1.test2"),
+                          defaultLogLevel)
 
 
     def test_namespace_default(self):
@@ -191,14 +203,17 @@
         mean that the format key ought to be I{called} rather than stringified.
         """
         self.assertEquals(
-            formatWithCall(u"Hello, {world}. {callme()}.",
-                           dict(world="earth",
-                                callme=lambda: "maybe")),
+            formatWithCall(
+                u"Hello, {world}. {callme()}.",
+                dict(world="earth", callme=lambda: "maybe")
+            ),
             "Hello, earth. maybe."
         )
         self.assertEquals(
-            formatWithCall(u"Hello, {repr()!r}.",
-                           dict(repr=lambda: 'repr')),
+            formatWithCall(
+                u"Hello, {repr()!r}.",
+                dict(repr=lambda: "repr")
+            ),
             "Hello, 'repr'."
         )
 
@@ -262,7 +277,7 @@
         self.assertIn(repr(event), result)
 
 
-    def test_formatEventYouSoNasty(self):
+    def test_formatUnformattableEvent(self):
         """
         Formatting an event that's just plain out to get us.
         """
@@ -273,24 +288,52 @@
         self.assertIn(repr(event), result)
 
 
-#     def test_formatEventYouSoNastyOMGMakeItStop(self):
-#         """
-#         Formatting an event that's just plain out to get us and is
-#         really determined.
-#         """
-#         badRepr = 
+    def test_formatUnformattableEventWithUnformattableKey(self):
+        """
+        Formatting an unformattable event that has an unformattable key.
+        """
+        event = {
+            "log_format": "{evil()}",
+            "evil": lambda: 1/0,
+            Unformattable(): "gurk",
+        }
+        result = formatEvent(event)
+        self.assertIn("MESSAGE LOST: unformattable object logged:", result)
+        self.assertIn("Recoverable data:", result)
+        self.assertIn("Exception during formatting:", result)
 
-#         event = dict(
-#             log_format="{evil()}",
-#             evil=lambda: 1/0,
-#         )
-#         result = formatEvent(event)
 
-#         self.assertIn("Unable to format event", result)
-#         self.assertIn(repr(event), result)
+    def test_formatUnformattableEventWithUnformattableValue(self):
+        """
+        Formatting an unformattable event that has an unformattable value.
+        """
+        event = dict(
+            log_format="{evil()}",
+            evil=lambda: 1/0,
+            gurk=Unformattable(),
+        )
+        result = formatEvent(event)
+        self.assertIn("MESSAGE LOST: unformattable object logged:", result)
+        self.assertIn("Recoverable data:", result)
+        self.assertIn("Exception during formatting:", result)
 
 
+    def test_formatUnformattableEventWithUnformattableErrorOMGWillItStop(self):
+        """
+        Formatting an unformattable event where the formatting error itself
+        is also unformattable.
+        """
+        event = dict(
+            log_format="{evil()}",
+            evil=lambda: 1/0,
+            recoverable="okay",
+        )
+        # Call formatUnformattableEvent() directly with a bogus exception.
+        result = formatUnformattableEvent(event, Unformattable())
+        self.assertIn("MESSAGE LOST: unformattable object logged:", result)
+        self.assertIn(repr("recoverable") + " = " + repr("okay"), result)
 
+
+
 class LoggerTests(SetUpTearDown, unittest.TestCase):
     """
     Tests for L{Logger}.
@@ -322,8 +365,8 @@
 
     def test_sourceAvailableForFormatting(self):
         """
-        On instances that have a L{Logger} class attribute, the C{log_source} key
-        is available to format strings.
+        On instances that have a L{Logger} class attribute, the C{log_source}
+        key is available to format strings.
         """
         obj = LogComposedObject("hello")
         log = obj.log
@@ -359,16 +402,19 @@
             self.assertEquals(log.emitted["kwargs"]["junk"], message)
 
             if level >= logLevelForNamespace(log.namespace):
+                self.assertTrue(hasattr(log, "event"), "No event observed.")
                 self.assertEquals(log.event["log_format"], format)
                 self.assertEquals(log.event["log_level"], level)
                 self.assertEquals(log.event["log_namespace"], __name__)
                 self.assertEquals(log.event["log_source"], None)
 
-                self.assertEquals(log.event["logLevel"], pythonLogLevelMapping[level])
+                self.assertEquals(log.event["logLevel"],
+                                  pythonLogLevelMapping[level])
 
                 self.assertEquals(log.event["junk"], message)
 
-                # FIXME: this checks the end of message because we do formatting in emit()
+                # FIXME: this checks the end of message because we do
+                # formatting in emit()
                 self.assertEquals(
                     formatEvent(log.event),
                     message
@@ -407,10 +453,10 @@
 
         log.warn(
             "*",
-            log_format = "#",
-            log_level = LogLevel.error,
-            log_namespace = "*namespace*",
-            log_source = "*source*",
+            log_format="#",
+            log_level=LogLevel.error,
+            log_namespace="*namespace*",
+            log_source="*source*",
         )
 
         # FIXME: Should conflicts log errors?
@@ -487,24 +533,232 @@
         self.assertEquals(set((o1, o3)), set(publisher.observers))
 
 
+    def test_removeObserverNotRegistered(self):
+        """
+        L{LogPublisher.removeObserver} removes an observer that is not
+        registered.
+        """
+        o1 = lambda e: None
+        o2 = lambda e: None
+        o3 = lambda e: None
+
+        publisher = LogPublisher(o1, o2)
+        publisher.removeObserver(o3)
+        self.assertEquals(set((o1, o2)), set(publisher.observers))
+
+
     def test_fanOut(self):
         """
         L{LogPublisher} calls its observers.
         """
-        e1 = []
-        e2 = []
-        e3 = []
+        event = dict(foo=1, bar=2)
 
-        o1 = lambda e: e1.append(e)
-        o2 = lambda e: e2.append(e)
-        o3 = lambda e: e3.append(e)
+        events1 = []
+        events2 = []
+        events3 = []
 
+        o1 = lambda e: events1.append(e)
+        o2 = lambda e: events2.append(e)
+        o3 = lambda e: events3.append(e)
+
         publisher = LogPublisher(o1, o2, o3)
+        publisher(event)
+        self.assertIn(event, events1)
+        self.assertIn(event, events2)
+        self.assertIn(event, events3)
+
+
+    def test_observerRaises(self):
+        nonTestEvents = []
+        Logger.publisher.addObserver(lambda e: nonTestEvents.append(e))
+
+        event = dict(foo=1, bar=2)
+        exception = RuntimeError("ARGH! EVIL DEATH!")
+
+        events = []
+
+        def observer(event):
+            events.append(event)
+            raise exception
+
+        publisher = LogPublisher(observer)
+        publisher(event)
+
+        # Verify that the observer saw my event
+        self.assertIn(event, events)
+
+        # Verify that the observer raised my exception
+        errors = self.flushLoggedErrors(exception.__class__)
+        self.assertEquals(len(errors), 1)
+        self.assertIdentical(errors[0].value, exception)
+
+        # Verify that the exception was logged
+        for event in nonTestEvents:
+            if (
+                event.get("log_format", None) == OBSERVER_REMOVED and
+                getattr(event.get("failure", None), "value") is exception
+            ):
+                break
+        else:
+            self.fail("Observer raised an exception "
+                      "and the exception was not logged.")
+
+
+    def test_observerRaisesAndLoggerHatesMe(self):
+        nonTestEvents = []
+        Logger.publisher.addObserver(lambda e: nonTestEvents.append(e))
+
+        event = dict(foo=1, bar=2)
+        exception = RuntimeError("ARGH! EVIL DEATH!")
+
+        def observer(event):
+            raise RuntimeError("Sad panda")
+
+        class GurkLogger(Logger):
+            def failure(self, *args, **kwargs):
+                raise exception
+
+        publisher = LogPublisher(observer)
+        publisher.log = GurkLogger()
+        publisher(event)
+
+        # Here, the lack of an exception thus far is a success, of sorts
+
+
+
+class DefaultLogPublisherTests(SetUpTearDown, unittest.TestCase):
+    def test_addObserver(self):
+        o1 = lambda e: None
+        o2 = lambda e: None
+        o3 = lambda e: None
+
+        publisher = DefaultLogPublisher()
+        publisher.addObserver(o1)
+        publisher.addObserver(o2, filtered=True)
+        publisher.addObserver(o3, filtered=False)
+
+        self.assertEquals(
+            set((o1, o2, publisher.legacyLogObserver)),
+            set(publisher.filteredPublisher.observers),
+            "Filtered observers do not match expected set"
+        )
+        self.assertEquals(
+            set((o3, publisher.filters)),
+            set(publisher.rootPublisher.observers),
+            "Root observers do not match expected set"
+        )
+
+
+    def test_addObserverAgain(self):
+        o1 = lambda e: None
+        o2 = lambda e: None
+        o3 = lambda e: None
+
+        publisher = DefaultLogPublisher()
+        publisher.addObserver(o1)
+        publisher.addObserver(o2, filtered=True)
+        publisher.addObserver(o3, filtered=False)
+
+        # Swap filtered-ness of o2 and o3
+        publisher.addObserver(o1)
+        publisher.addObserver(o2, filtered=False)
+        publisher.addObserver(o3, filtered=True)
+
+        self.assertEquals(
+            set((o1, o3, publisher.legacyLogObserver)),
+            set(publisher.filteredPublisher.observers),
+            "Filtered observers do not match expected set"
+        )
+        self.assertEquals(
+            set((o2, publisher.filters)),
+            set(publisher.rootPublisher.observers),
+            "Root observers do not match expected set"
+        )
+
+
+    def test_removeObserver(self):
+        o1 = lambda e: None
+        o2 = lambda e: None
+        o3 = lambda e: None
+
+        publisher = DefaultLogPublisher()
+        publisher.addObserver(o1)
+        publisher.addObserver(o2, filtered=True)
+        publisher.addObserver(o3, filtered=False)
         publisher.removeObserver(o2)
-        self.assertEquals(set((o1, o3)), set(publisher.observers))
+        publisher.removeObserver(o3)
 
+        self.assertEquals(
+            set((o1, publisher.legacyLogObserver)),
+            set(publisher.filteredPublisher.observers),
+            "Filtered observers do not match expected set"
+        )
+        self.assertEquals(
+            set((publisher.filters,)),
+            set(publisher.rootPublisher.observers),
+            "Root observers do not match expected set"
+        )
 
 
+    def test_filteredObserver(self):
+        namespace = __name__
+
+        event_debug = dict(log_namespace=namespace,
+                           log_level=LogLevel.debug, log_format="")
+        event_error = dict(log_namespace=namespace,
+                           log_level=LogLevel.error, log_format="")
+        events = []
+
+        observer = lambda e: events.append(e)
+
+        publisher = DefaultLogPublisher()
+
+        publisher.addObserver(observer, filtered=True)
+        publisher(event_debug)
+        publisher(event_error)
+        self.assertNotIn(event_debug, events)
+        self.assertIn(event_error, events)
+
+
+    def test_filteredObserverNoFilteringKeys(self):
+        event_debug = dict(log_level=LogLevel.debug)
+        event_error = dict(log_level=LogLevel.error)
+        event_none  = dict()
+        events = []
+
+        observer = lambda e: events.append(e)
+
+        publisher = DefaultLogPublisher()
+        publisher.addObserver(observer, filtered=True)
+        publisher(event_debug)
+        publisher(event_error)
+        publisher(event_none)
+        self.assertNotIn(event_debug, events)
+        self.assertNotIn(event_error, events)
+        self.assertNotIn(event_none, events)
+
+
+    def test_unfilteredObserver(self):
+        namespace = __name__
+
+        event_debug = dict(log_namespace=namespace, log_level=LogLevel.debug,
+                           log_format="")
+        event_error = dict(log_namespace=namespace, log_level=LogLevel.error,
+                           log_format="")
+        events = []
+
+        observer = lambda e: events.append(e)
+
+        publisher = DefaultLogPublisher()
+
+        publisher.addObserver(observer, filtered=False)
+        publisher(event_debug)
+        publisher(event_error)
+        self.assertIn(event_debug, events)
+        self.assertIn(event_error, events)
+
+
+
 class FilteringLogObserverTests(SetUpTearDown, unittest.TestCase):
     """
     Tests for L{FilteringLogObserver}.
@@ -552,11 +806,16 @@
             def no(event):
                 return PredicateResult.no
 
+            @staticmethod
+            def bogus(event):
+                return None
+
         predicates = (getattr(Filters, f) for f in filters)
         eventsSeen = []
         trackingObserver = lambda e: eventsSeen.append(e)
         filteringObserver = FilteringLogObserver(trackingObserver, predicates)
-        for e in events: filteringObserver(e)
+        for e in events:
+            filteringObserver(e)
 
         return [e["count"] for e in eventsSeen]
 
@@ -564,25 +823,35 @@
     def test_shouldLogEvent_noFilters(self):
         self.assertEquals(self.filterWith(), [0, 1, 2, 3])
 
+
     def test_shouldLogEvent_noFilter(self):
         self.assertEquals(self.filterWith("notTwo"), [0, 1, 3])
 
+
     def test_shouldLogEvent_yesFilter(self):
         self.assertEquals(self.filterWith("twoPlus"), [0, 1, 2, 3])
 
+
     def test_shouldLogEvent_yesNoFilter(self):
         self.assertEquals(self.filterWith("twoPlus", "no"), [2, 3])
 
+
     def test_shouldLogEvent_yesYesNoFilter(self):
-        self.assertEquals(self.filterWith("twoPlus", "twoMinus", "no"), [0, 1, 2, 3])
+        self.assertEquals(self.filterWith("twoPlus", "twoMinus", "no"),
+                          [0, 1, 2, 3])
 
 
+    def test_shouldLogEvent_badPredicateResult(self):
+        self.assertRaises(TypeError, self.filterWith, "bogus")
+
+
     def test_call(self):
         e = dict(obj=object())
 
         def callWithPredicateResult(result):
             seen = []
-            observer = FilteringLogObserver(lambda e: seen.append(e), (lambda e: result,))
+            observer = FilteringLogObserver(lambda e: seen.append(e),
+                                            (lambda e: result,))
             observer(e)
             return seen
 
@@ -597,6 +866,14 @@
     Tests for L{LegacyLogger}.
     """
 
+    def test_namespace_default(self):
+        """
+        Default namespace is module name.
+        """
+        log = TestLegacyLogger(logger=None)
+        self.assertEquals(log.newStyleLogger.namespace, __name__)
+
+
     def test_passThroughAttributes(self):
         """
         C{__getattribute__} on L{LegacyLogger} is passing through to Twisted's
@@ -619,19 +896,22 @@
         log = TestLegacyLogger()
 
         message = "Hi, there."
-        kwargs = { "foo": "bar", "obj": object() }
+        kwargs = {"foo": "bar", "obj": object()}
 
         log.msg(message, **kwargs)
 
-        self.assertIdentical(log.newStyleLogger.emitted["level"], LogLevel.info)
+        self.assertIdentical(log.newStyleLogger.emitted["level"],
+                             LogLevel.info)
         self.assertEquals(log.newStyleLogger.emitted["format"], message)
 
         for key, value in kwargs.items():
-            self.assertIdentical(log.newStyleLogger.emitted["kwargs"][key], value)
+            self.assertIdentical(log.newStyleLogger.emitted["kwargs"][key],
+                                 value)
 
         log.msg(foo="")
 
-        self.assertIdentical(log.newStyleLogger.emitted["level"], LogLevel.info)
+        self.assertIdentical(log.newStyleLogger.emitted["level"],
+                             LogLevel.info)
         self.assertIdentical(log.newStyleLogger.emitted["format"], None)
 
 
@@ -642,7 +922,7 @@
         log = TestLegacyLogger()
 
         exception = RuntimeError("Oh me, oh my.")
-        kwargs = { "foo": "bar", "obj": object() }
+        kwargs = {"foo": "bar", "obj": object()}
 
         try:
             raise exception
@@ -659,7 +939,7 @@
         log = TestLegacyLogger()
 
         exception = RuntimeError("Oh me, oh my.")
-        kwargs = { "foo": "bar", "obj": object() }
+        kwargs = {"foo": "bar", "obj": object()}
         why = "Because I said so."
 
         try:
@@ -677,7 +957,7 @@
         log = TestLegacyLogger()
 
         exception = RuntimeError("Oh me, oh my.")
-        kwargs = { "foo": "bar", "obj": object() }
+        kwargs = {"foo": "bar", "obj": object()}
         why = "Because I said so."
 
         try:
@@ -695,7 +975,7 @@
         log = TestLegacyLogger()
 
         exception = RuntimeError("Oh me, oh my.")
-        kwargs = { "foo": "bar", "obj": object() }
+        kwargs = {"foo": "bar", "obj": object()}
         why = "Because I said so."
         bogus = object()
 
@@ -707,12 +987,14 @@
         errors = self.flushLoggedErrors(exception.__class__)
         self.assertEquals(len(errors), 0)
 
-        self.assertIdentical(log.newStyleLogger.emitted["level"], LogLevel.error)
+        self.assertIdentical(log.newStyleLogger.emitted["level"],
+                             LogLevel.error)
         self.assertEquals(log.newStyleLogger.emitted["format"], repr(bogus))
         self.assertIdentical(log.newStyleLogger.emitted["kwargs"]["why"], why)
 
         for key, value in kwargs.items():
-            self.assertIdentical(log.newStyleLogger.emitted["kwargs"][key], value)
+            self.assertIdentical(log.newStyleLogger.emitted["kwargs"][key],
+                                 value)
 
 
     def legacy_err(self, log, kwargs, why, exception):
@@ -724,11 +1006,24 @@
         errors = self.flushLoggedErrors(exception.__class__)
         self.assertEquals(len(errors), 1)
 
-        self.assertIdentical(log.newStyleLogger.emitted["level"], LogLevel.error)
+        self.assertIdentical(log.newStyleLogger.emitted["level"],
+                             LogLevel.error)
         self.assertEquals(log.newStyleLogger.emitted["format"], None)
-        self.assertIdentical(log.newStyleLogger.emitted["kwargs"]["failure"].__class__, Failure)
-        self.assertIdentical(log.newStyleLogger.emitted["kwargs"]["failure"].value, exception)
-        self.assertIdentical(log.newStyleLogger.emitted["kwargs"]["why"], why)
+        emittedKwargs = log.newStyleLogger.emitted["kwargs"]
+        self.assertIdentical(emittedKwargs["failure"].__class__, Failure)
+        self.assertIdentical(emittedKwargs["failure"].value, exception)
+        self.assertIdentical(emittedKwargs["why"], why)
 
         for key, value in kwargs.items():
-            self.assertIdentical(log.newStyleLogger.emitted["kwargs"][key], value)
+            self.assertIdentical(log.newStyleLogger.emitted["kwargs"][key],
+                                 value)
+
+
+
+class Unformattable(object):
+    """
+    An object that raises an exception from C{__repr__}.
+    """
+
+    def __repr__(self):
+        return str(1/0)
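
For readers tracing the tests above, here is a minimal, self-contained sketch of the yes/no/maybe predicate semantics that FilteringLogObserverTests exercises, including the TypeError raised for a bogus predicate result; the names are illustrative stand-ins, not the twext.python.log API:

class PredicateResult(object):
    yes, no, maybe = "yes", "no", "maybe"

def shouldLogEvent(predicates, event):
    # First predicate to answer yes or no wins; anything else is an error.
    for predicate in predicates:
        result = predicate(event)
        if result == PredicateResult.yes:
            return True
        if result == PredicateResult.no:
            return False
        if result != PredicateResult.maybe:
            raise TypeError("Invalid predicate result: %r" % (result,))
    return True

events = [dict(count=n) for n in range(4)]
twoPlus = lambda e: (PredicateResult.yes if e["count"] >= 2
                     else PredicateResult.maybe)
no = lambda e: PredicateResult.no
# Mirrors test_shouldLogEvent_yesNoFilter: only counts 2 and 3 pass.
print([e["count"] for e in events if shouldLogEvent((twoPlus, no), e)])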

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/channel/http.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/channel/http.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/channel/http.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -726,6 +726,10 @@
     betweenRequestsTimeOut = 15
     # Timeout between lines or bytes while reading a request
     inputTimeOut = 60 * 4
+    # Timeout between end of request read and end of response write
+    idleTimeOut = 60 * 5
+    # Timeout when closing non-persistent connection
+    closeTimeOut = 20
 
     # maximum length of headers (10KiB)
     maxHeaderLength = 10240
@@ -744,7 +748,7 @@
     _readLost = False
     _writeLost = False
     
-    _lingerTimer = None
+    _abortTimer = None
     chanRequest = None
 
     def _callLater(self, secs, fun):
@@ -823,10 +827,10 @@
         self.chanRequest = None
         self.setLineMode()
         
-        # Disable the idle timeout, in case this request takes a long
+        # Set an idle timeout, in case this request takes a long
         # time to finish generating output.
         if len(self.requests) > 0:
-            self.setTimeout(None)
+            self.setTimeout(self.idleTimeOut)
         
     def _startNextRequest(self):
         # notify next request, if present, it can start writing
@@ -881,57 +885,29 @@
             # incoming requests.
             self._callLater(0, self._startNextRequest)
         else:
-            self.lingeringClose()
+            # Set an abort timer in case an orderly close hangs
+            self.setTimeout(None)
+            self._abortTimer = reactor.callLater(self.closeTimeOut, self._abortTimeout)
+            self.transport.loseConnection()
 
     def timeoutConnection(self):
         #log.info("Timing out client: %s" % str(self.transport.getPeer()))
+        # Set an abort timer in case an orderly close hangs
+        self._abortTimer = reactor.callLater(self.closeTimeOut, self._abortTimeout)
         policies.TimeoutMixin.timeoutConnection(self)
 
-    def lingeringClose(self):
-        """
-        This is a bit complicated. This process is necessary to ensure proper
-        workingness when HTTP pipelining is in use.
+    def _abortTimeout(self):
+        log.error("Connection aborted - took too long to close: {c}", c=str(self.transport.getPeer()))
+        self._abortTimer = None
+        self.transport.abortConnection()
 
-        Here is what it wants to do:
-
-            1.  Finish writing any buffered data, then close our write side.
-                While doing so, read and discard any incoming data.
-
-            2.  When that happens (writeConnectionLost called), wait up to 20
-                seconds for the remote end to close their write side (our read
-                side).
-
-            3.
-                - If they do (readConnectionLost called), close the socket,
-                  and cancel the timeout.
-
-                - If that doesn't happen, the timer fires, and makes the
-                  socket close anyways.
-        """
-        
-        # Close write half
-        self.transport.loseWriteConnection()
-        
-        # Throw out any incoming data
-        self.dataReceived = self.lineReceived = lambda *args: None
-        self.transport.resumeProducing()
-
-    def writeConnectionLost(self):
-        # Okay, all data has been written
-        # In 20 seconds, actually close the socket
-        self._lingerTimer = reactor.callLater(20, self._lingerClose)
-        self._writeLost = True
-        
-    def _lingerClose(self):
-        self._lingerTimer = None
-        self.transport.loseConnection()
-        
     def readConnectionLost(self):
         """Read connection lost"""
         # If in the lingering-close state, lose the socket.
-        if self._lingerTimer:
-            self._lingerTimer.cancel()
-            self._lingerTimer = None
+        if self._abortTimer:
+            self._abortTimer.cancel()
+            self._abortTimer = None
             self.transport.loseConnection()
             return
         

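The hunk above replaces the old lingering-close dance with a simpler pattern: start an abort timer when an orderly close begins, cancel it if the peer closes first, and hard-abort the socket if the timer fires. A condensed sketch of that pattern (the surrounding class is hypothetical; reactor.callLater, loseConnection and abortConnection are the real Twisted APIs):

from twisted.internet import reactor

class AbortingCloser(object):
    closeTimeOut = 20  # seconds to wait before hard-aborting

    def __init__(self, transport):
        self.transport = transport
        self._abortTimer = None

    def startClose(self):
        # Begin an orderly close, with a hard abort as the fallback.
        self._abortTimer = reactor.callLater(
            self.closeTimeOut, self._abortTimeout)
        self.transport.loseConnection()

    def peerClosed(self):
        # Peer closed first: the pending abort is no longer needed.
        if self._abortTimer is not None:
            self._abortTimer.cancel()
            self._abortTimer = None

    def _abortTimeout(self):
        # Orderly close hung; drop the connection outright.
        self._abortTimer = None
        self.transport.abortConnection()
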
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/dav/test/test_util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/dav/test/test_util.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/dav/test/test_util.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -7,10 +7,10 @@
 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 # copies of the Software, and to permit persons to whom the Software is
 # furnished to do so, subject to the following conditions:
-# 
+#
 # The above copyright notice and this permission notice shall be included in all
 # copies or substantial portions of the Software.
-# 
+#
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
@@ -42,6 +42,7 @@
         self.assertEquals(util.normalizeURL("///../"), "/")
         self.assertEquals(util.normalizeURL("/.."), "/")
 
+
     def test_joinURL(self):
         """
         joinURL()
@@ -67,6 +68,7 @@
         self.assertEquals(util.joinURL("/foo", "/../"), "/")
         self.assertEquals(util.joinURL("/foo", "/./"), "/foo/")
 
+
     def test_parentForURL(self):
         """
         parentForURL()
@@ -83,6 +85,8 @@
         self.assertEquals(util.parentForURL("http://server/foo/bar/."), "http://server/foo/")
         self.assertEquals(util.parentForURL("http://server/foo/bar"), "http://server/foo/")
         self.assertEquals(util.parentForURL("http://server/foo/bar/"), "http://server/foo/")
+        self.assertEquals(util.parentForURL("http://server/foo/bar?x=1&y=2"), "http://server/foo/")
+        self.assertEquals(util.parentForURL("http://server/foo/bar/?x=1&y=2"), "http://server/foo/")
         self.assertEquals(util.parentForURL("/"), None)
         self.assertEquals(util.parentForURL("/foo/.."), None)
         self.assertEquals(util.parentForURL("/foo/../"), None)
@@ -94,3 +98,5 @@
         self.assertEquals(util.parentForURL("/foo/bar/."), "/foo/")
         self.assertEquals(util.parentForURL("/foo/bar"), "/foo/")
         self.assertEquals(util.parentForURL("/foo/bar/"), "/foo/")
+        self.assertEquals(util.parentForURL("/foo/bar?x=1&y=2"), "/foo/")
+        self.assertEquals(util.parentForURL("/foo/bar/?x=1&y=2"), "/foo/")

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/dav/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/dav/util.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/dav/util.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -8,10 +8,10 @@
 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 # copies of the Software, and to permit persons to whom the Software is
 # furnished to do so, subject to the following conditions:
-# 
+#
 # The above copyright notice and this permission notice shall be included in all
 # copies or substantial portions of the Software.
-# 
+#
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
@@ -61,7 +61,8 @@
 def allDataFromStream(stream, filter=None):
     data = []
     def gotAllData(_):
-        if not data: return None
+        if not data:
+            return None
         result = "".join([str(x) for x in data])
         if filter is None:
             return result
@@ -69,6 +70,8 @@
             return filter(result)
     return readStream(stream, data.append).addCallback(gotAllData)
 
+
+
 def davXMLFromStream(stream):
     # FIXME:
     #   This reads the request body into a string and then parses it.
@@ -77,6 +80,7 @@
     if stream is None:
         return succeed(None)
 
+
     def parse(xml):
         try:
             doc = WebDAVDocument.fromString(xml)
@@ -87,11 +91,16 @@
             raise
     return allDataFromStream(stream, parse)
 
+
+
 def noDataFromStream(stream):
     def gotData(data):
-        if data: raise ValueError("Stream contains unexpected data.")
+        if data:
+            raise ValueError("Stream contains unexpected data.")
     return readStream(stream, gotData)
 
+
+
 ##
 # URLs
 ##
@@ -111,9 +120,10 @@
         if path[0] == "/":
             count = 0
             for char in path:
-                if char != "/": break
+                if char != "/":
+                    break
                 count += 1
-            path = path[count-1:]
+            path = path[count - 1:]
 
         return path
 
@@ -123,6 +133,8 @@
 
     return urlunsplit((scheme, host, urllib.quote(path), query, fragment))
 
+
+
 def joinURL(*urls):
     """
     Appends URLs in series.
@@ -142,16 +154,19 @@
     else:
         return url + trailing
 
+
+
 def parentForURL(url):
     """
     Extracts the URL of the containing collection resource for the resource
-    corresponding to a given URL.
+    corresponding to a given URL. This removes any query or fragment pieces.
+
     @param url: an absolute (server-relative is OK) URL.
     @return: the normalized URL of the collection resource containing the
         resource corresponding to C{url}.  The returned URL will always contain
         a trailing C{"/"}.
     """
-    (scheme, host, path, query, fragment) = urlsplit(normalizeURL(url))
+    (scheme, host, path, _ignore_query, _ignore_fragment) = urlsplit(normalizeURL(url))
 
     index = path.rfind("/")
     if index == 0:
@@ -165,8 +180,10 @@
         else:
             path = path[:index] + "/"
 
-    return urlunsplit((scheme, host, path, query, fragment))
+    return urlunsplit((scheme, host, path, None, None))
 
+
+
 ##
 # Python magic
 ##
@@ -180,6 +197,8 @@
     caller = inspect.getouterframes(inspect.currentframe())[1][3]
     raise NotImplementedError("Method %s is unimplemented in subclass %s" % (caller, obj.__class__))
 
+
+
 def bindMethods(module, clazz, prefixes=("preconditions_", "http_", "report_")):
     """
     Binds all functions in the given module (as defined by that module's

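The parentForURL() change tested above can be pictured with a stripped-down re-derivation that keeps only the new query/fragment behavior (illustrative only; it omits the "."/".." normalization the real function gets from normalizeURL):

from urlparse import urlsplit, urlunsplit

def parent(url):
    # Discard query and fragment, then walk up one path segment.
    scheme, host, path, _query, _fragment = urlsplit(url)
    if path in ("", "/"):
        return None
    index = path.rstrip("/").rfind("/")
    return urlunsplit((scheme, host, path[:index] + "/", None, None))

print(parent("http://server/foo/bar?x=1&y=2"))   # http://server/foo/
print(parent("http://server/foo/bar/?x=1&y=2"))  # http://server/foo/
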
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/metafd.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/metafd.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/metafd.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -23,6 +23,8 @@
 
 from functools import total_ordering
 
+from zope.interface import implementer
+
 from twext.internet.sendfdport import (
     InheritedPort, InheritedSocketDispatcher, InheritingProtocolFactory)
 from twext.internet.tcp import MaxAcceptTCPServer
@@ -30,7 +32,9 @@
 from twext.web2.channel.http import HTTPFactory
 from twisted.application.service import MultiService, Service
 from twisted.internet import reactor
+from twisted.python.util import FancyStrMixin
 from twisted.internet.tcp import Server
+from twext.internet.sendfdport import IStatusWatcher
 
 log = Logger()
 
@@ -161,12 +165,16 @@
 
 
 @total_ordering
-class WorkerStatus(object):
+class WorkerStatus(FancyStrMixin, object):
     """
     The status of a worker process.
     """
 
-    def __init__(self, acknowledged=0, unacknowledged=0, started=0):
+    showAttributes = ("acknowledged unacknowledged started abandoned unclosed"
+                      .split())
+
+    def __init__(self, acknowledged=0, unacknowledged=0, started=0,
+                 abandoned=0, unclosed=0):
         """
         Create a L{WorkerStatus} with a number of sent connections and a
         number of un-acknowledged connections.
@@ -179,29 +187,45 @@
             the subprocess which have never received a status response (a
             "C{+}" status message).
 
+        @param abandoned: The number of connections which have been sent to
+            this worker, but were not acknowledged at the moment that the
+            worker restarted.
+
         @param started: The number of times this worker has been started.
+
+        @param unclosed: The number of sockets which have been sent to the
+            subprocess but not yet closed.
         """
         self.acknowledged = acknowledged
         self.unacknowledged = unacknowledged
         self.started = started
+        self.abandoned = abandoned
+        self.unclosed = unclosed
 
 
+    def effective(self):
+        """
+        The current effective load.
+        """
+        return self.acknowledged + self.unacknowledged
+
+
     def restarted(self):
         """
         The L{WorkerStatus} derived from the current status of a process and
         the fact that it just restarted.
         """
-        return self.__class__(0, self.unacknowledged, self.started + 1)
+        return self.__class__(0, 0, self.started + 1, self.unacknowledged)
 
 
     def _tuplify(self):
-        return (self.acknowledged, self.unacknowledged, self.started)
+        return tuple(getattr(self, attr) for attr in self.showAttributes)
 
 
     def __lt__(self, other):
         if not isinstance(other, WorkerStatus):
             return NotImplemented
-        return self._tuplify() < other._tuplify()
+        return self.effective() < other.effective()
 
 
     def __eq__(self, other):
@@ -213,20 +237,20 @@
     def __add__(self, other):
         if not isinstance(other, WorkerStatus):
             return NotImplemented
-        return self.__class__(self.acknowledged + other.acknowledged,
-                              self.unacknowledged + other.unacknowledged,
-                              self.started + other.started)
+        a = self._tuplify()
+        b = other._tuplify()
+        c = [a1 + b1 for (a1, b1) in zip(a, b)]
+        return self.__class__(*c)
 
 
     def __sub__(self, other):
         if not isinstance(other, WorkerStatus):
             return NotImplemented
-        return self + self.__class__(-other.acknowledged,
-                                     -other.unacknowledged,
-                                     -other.started)
+        return self + self.__class__(*[-x for x in other._tuplify()])
 
 
 
+@implementer(IStatusWatcher)
 class ConnectionLimiter(MultiService, object):
     """
     Connection limiter for use with L{InheritedSocketDispatcher}.
@@ -234,6 +258,8 @@
     This depends on statuses being reported by L{ReportingHTTPFactory}
     """
 
+    _outstandingRequests = 0
+
     def __init__(self, maxAccepts, maxRequests):
         """
         Create a L{ConnectionLimiter} with an associated dispatcher and
@@ -300,9 +326,18 @@
         else:
             # '+' acknowledges that the subprocess has taken on the work.
             return previousStatus + WorkerStatus(acknowledged=1,
-                                                 unacknowledged=-1)
+                                                 unacknowledged=-1,
+                                                 unclosed=1)
 
 
+    def closeCountFromStatus(self, status):
+        """
+        Determine the number of sockets to close from the current status.
+        """
+        toClose = status.unclosed
+        return (toClose, status - WorkerStatus(unclosed=toClose))
+
+
     def newConnectionStatus(self, previousStatus):
         """
         Determine the effect of a new connection being sent on a subprocess
@@ -320,20 +355,18 @@
         C{self.dispatcher.statuses} attribute, which is what
         C{self.outstandingRequests} uses to compute it.)
         """
-        current = sum(status.acknowledged
+        current = sum(status.effective()
                       for status in self.dispatcher.statuses)
         self._outstandingRequests = current # preserve for or= field in log
         maximum = self.maxRequests
         overloaded = (current >= maximum)
-        if overloaded:
-            for f in self.factories:
-                f.myServer.myPort.stopReading()
-        else:
-            for f in self.factories:
-                f.myServer.myPort.startReading()
+        for f in self.factories:
+            if overloaded:
+                f.loadAboveMaximum()
+            else:
+                f.loadNominal()
 
 
-    _outstandingRequests = 0
     @property # make read-only
     def outstandingRequests(self):
         return self._outstandingRequests
@@ -367,6 +400,20 @@
         self.maxRequests = limiter.maxRequests
 
 
+    def loadAboveMaximum(self):
+        """
+        The current server load has exceeded the maximum allowable.
+        """
+        self.myServer.myPort.stopReading()
+
+
+    def loadNominal(self):
+        """
+        The current server load is nominal; proceed with reading requests.
+        """
+        self.myServer.myPort.startReading()
+
+
     @property
     def outstandingRequests(self):
         return self.limiter.outstandingRequests
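
The WorkerStatus changes above make statuses add and subtract field-wise while ordering compares only the effective load (acknowledged plus unacknowledged). A toy model of that arithmetic, offered as an assumption-level sketch rather than the metafd class itself:

class Status(object):
    fields = ("acknowledged", "unacknowledged", "started",
              "abandoned", "unclosed")

    def __init__(self, **kw):
        for field in self.fields:
            setattr(self, field, kw.get(field, 0))

    def effective(self):
        # The real WorkerStatus orders workers by this value alone.
        return self.acknowledged + self.unacknowledged

    def __add__(self, other):
        return Status(**dict(
            (f, getattr(self, f) + getattr(other, f)) for f in self.fields))

sent = Status(unacknowledged=1)   # file descriptor dispatched to a worker
# A "+" status message: acknowledged up, unacknowledged down, and one
# socket now awaiting close in the master.
acked = sent + Status(acknowledged=1, unacknowledged=-1, unclosed=1)
print(acked.effective())          # 1: the connection still counts as load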

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/test/test_http.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/test/test_http.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/test/test_http.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -14,7 +14,7 @@
 from twisted.internet.defer import waitForDeferred, deferredGenerator
 from twisted.protocols import loopback
 from twisted.python import util, runtime
-from twext.web2.channel.http import SSLRedirectRequest, HTTPFactory
+from twext.web2.channel.http import SSLRedirectRequest, HTTPFactory, HTTPChannel
 from twisted.internet.task import deferLater
 
 
@@ -319,6 +319,10 @@
         self.loseConnection()
 
 
+    def abortConnection(self):
+        self.aborted = True
+
+
     def getHost(self):
         """
         Synthesize a slightly more realistic 'host' thing.
@@ -409,6 +413,13 @@
 
     requestClass = TestRequest
 
+    def setUp(self):
+        super(HTTPTests, self).setUp()
+
+        # We always need this set to True - previous tests may have changed it
+        HTTPChannel.allowPersistentConnections = True
+
+
     def connect(self, logFile=None, **protocol_kwargs):
         cxn = TestConnection()
 
@@ -850,6 +861,42 @@
         self.compareResult(cxn, cmds, data)
         return deferLater(reactor, 0.5, self.assertDone, cxn) # Wait for timeout
 
+    def testTimeout_idleRequest(self):
+        cxn = self.connect(idleTimeOut=0.3)
+        cmds = [[]]
+        data = ""
+
+        cxn.client.write("GET / HTTP/1.1\r\n\r\n")
+        cmds[0] += [('init', 'GET', '/', (1, 1), 0, ()),
+                    ('contentComplete',)]
+        self.compareResult(cxn, cmds, data)
+
+        return deferLater(reactor, 0.5, self.assertDone, cxn) # Wait for timeout
+
+    def testTimeout_abortRequest(self):
+        cxn = self.connect(allowPersistentConnections=False, closeTimeOut=0.3)
+        cxn.client.transport.loseConnection = lambda : None
+        cmds = [[]]
+        data = ""
+
+        cxn.client.write("GET / HTTP/1.1\r\n\r\n")
+        cmds[0] += [('init', 'GET', '/', (1, 1), 0, ()),
+                    ('contentComplete',)]
+        self.compareResult(cxn, cmds, data)
+
+        response = TestResponse()
+        response.headers.setRawHeaders("Content-Length", ("0",))
+        cxn.requests[0].writeResponse(response)
+        response.finish()
+
+        data += "HTTP/1.1 200 OK\r\nContent-Length: 0\r\nConnection: close\r\n\r\n"
+
+        self.compareResult(cxn, cmds, data)
+        def _check(cxn):
+            self.assertDone(cxn)
+            self.assertTrue(cxn.serverToClient.aborted)
+        return deferLater(reactor, 0.5, _check, cxn) # Wait for timeout
+
     def testConnectionCloseRequested(self):
         cxn = self.connect()
         cmds = [[]]
@@ -883,6 +930,26 @@
         self.compareResult(cxn, cmds, data)
         self.assertDone(cxn)
 
+    def testConnectionKeepAliveOff(self):
+        cxn = self.connect(allowPersistentConnections=False)
+        cmds = [[]]
+        data = ""
+
+        cxn.client.write("GET / HTTP/1.1\r\n\r\n")
+        cmds[0] += [('init', 'GET', '/', (1, 1), 0, ()),
+                    ('contentComplete',)]
+        self.compareResult(cxn, cmds, data)
+
+        response = TestResponse()
+        response.headers.setRawHeaders("Content-Length", ("0",))
+        cxn.requests[0].writeResponse(response)
+        response.finish()
+
+        data += "HTTP/1.1 200 OK\r\nContent-Length: 0\r\nConnection: close\r\n\r\n"
+
+        self.compareResult(cxn, cmds, data)
+        self.assertDone(cxn)
+
     def testExtraCRLFs(self):
         cxn = self.connect()
         cmds = [[]]

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/test/test_metafd.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/test/test_metafd.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/web2/test/test_metafd.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -30,6 +30,7 @@
 from twisted.application.service import Service
 
 from twext.internet.test.test_sendfdport import ReaderAdder
+from twext.web2.metafd import WorkerStatus
 from twisted.trial.unittest import TestCase
 
 
@@ -60,6 +61,7 @@
         return ("4.3.2.1", 4321)
 
 
+
 class InheritedPortForTesting(sendfdport.InheritedPort):
     """
     L{sendfdport.InheritedPort} subclass that prevents certain I/O operations
@@ -91,15 +93,19 @@
     def startReading(self):
         "Do nothing."
 
+
     def stopReading(self):
         "Do nothing."
 
+
     def startWriting(self):
         "Do nothing."
 
+
     def stopWriting(self):
         "Do nothing."
 
+
     def __init__(self, *a, **kw):
         super(ServerTransportForTesting, self).__init__(*a, **kw)
         self.reactor = None
@@ -163,6 +169,7 @@
         builder = LimiterBuilder(self)
         builder.fillUp()
         self.assertEquals(builder.port.reading, False) # sanity check
+        self.assertEquals(builder.highestLoad(), builder.requestsPerSocket)
         builder.loadDown()
         self.assertEquals(builder.port.reading, True)
 
@@ -176,30 +183,87 @@
         builder = LimiterBuilder(self)
         builder.fillUp()
         self.assertEquals(builder.port.reading, False)
+        self.assertEquals(builder.highestLoad(), builder.requestsPerSocket)
         builder.processRestart()
         self.assertEquals(builder.port.reading, True)
 
 
+    def test_unevenLoadDistribution(self):
+        """
+        Subprocess sockets should be selected for subsequent socket sends by
+        ascending status.  Status should sum sent and successfully subsumed
+        sockets.
+        """
+        builder = LimiterBuilder(self)
+        # Give one simulated worker a higher acknowledged load than the other.
+        builder.fillUp(True, 1)
+        # There should still be plenty of spare capacity.
+        self.assertEquals(builder.port.reading, True)
+        # Then slam it with a bunch of incoming requests.
+        builder.fillUp(False, builder.limiter.maxRequests - 1)
+        # Now capacity is full.
+        self.assertEquals(builder.port.reading, False)
+        # And everyone should have an even amount of work.
+        self.assertEquals(builder.highestLoad(), builder.requestsPerSocket)
 
+
+    def test_processStopsReadingEvenWhenConnectionsAreNotAcknowledged(self):
+        """
+        L{ConnectionLimiter.statusesChanged} determines whether the current
+        number of outstanding requests is above the limit.
+        """
+        builder = LimiterBuilder(self)
+        builder.fillUp(acknowledged=False)
+        self.assertEquals(builder.highestLoad(), builder.requestsPerSocket)
+        self.assertEquals(builder.port.reading, False)
+        builder.processRestart()
+        self.assertEquals(builder.port.reading, True)
+
+
+    def test_workerStatusRepr(self):
+        """
+        L{WorkerStatus.__repr__} will show all the values associated with the
+        status of the worker.
+        """
+        self.assertEquals(repr(WorkerStatus(1, 2, 3, 4, 5)),
+                          "<WorkerStatus acknowledged=1 unacknowledged=2 "
+                          "started=3 abandoned=4 unclosed=5>")
+
+
+
 class LimiterBuilder(object):
     """
     A L{LimiterBuilder} can build a L{ConnectionLimiter} and associated objects
     for a given unit test.
     """
 
-    def __init__(self, test, maxReq=3):
-        self.limiter = ConnectionLimiter(2, maxRequests=maxReq)
+    def __init__(self, test, requestsPerSocket=3, socketCount=2):
+        # Similar to MaxRequests in the configuration.
+        self.requestsPerSocket = requestsPerSocket
+        # Similar to ProcessCount in the configuration.
+        self.socketCount = socketCount
+        self.limiter = ConnectionLimiter(
+            2, maxRequests=requestsPerSocket * socketCount
+        )
         self.dispatcher = self.limiter.dispatcher
         self.dispatcher.reactor = ReaderAdder()
         self.service = Service()
         self.limiter.addPortService("TCP", 4321, "127.0.0.1", 5,
                                     self.serverServiceMakerMaker(self.service))
-        self.dispatcher.addSocket()
+        for ignored in xrange(socketCount):
+            self.dispatcher.addSocket()
         # Has to be running in order to add stuff.
         self.limiter.startService()
         self.port = self.service.myPort
 
 
+    def highestLoad(self):
+        return max(
+            skt.status.effective()
+            for skt in self.limiter.dispatcher._subprocessSockets
+        )
+
+
     def serverServiceMakerMaker(self, s):
         """
         Make a serverServiceMaker for use with
@@ -214,21 +278,30 @@
         def serverServiceMaker(port, factory, *a, **k):
             s.factory = factory
             s.myPort = NotAPort()
-            s.myPort.startReading() # TODO: technically, should wait for startService
+            # TODO: technically, the following should wait for startService
+            s.myPort.startReading()
             factory.myServer = s
             return s
         return serverServiceMaker
 
 
-    def fillUp(self):
+    def fillUp(self, acknowledged=True, count=0):
         """
         Fill up all the slots on the connection limiter.
+
+        @param acknowledged: Should the virtual connections created by this
+            method send a message back to the dispatcher indicating that the
+            subprocess has acknowledged receipt of the file descriptor?
+
+        @param count: Amount of load to add; defaults to the maximum that
+            the limiter allows.
         """
-        for x in range(self.limiter.maxRequests):
+        for x in range(count or self.limiter.maxRequests):
             self.dispatcher.sendFileDescriptor(None, "SSL")
-            self.dispatcher.statusMessage(
-                self.dispatcher._subprocessSockets[0], "+"
-            )
+            if acknowledged:
+                self.dispatcher.statusMessage(
+                    self.dispatcher._subprocessSockets[0], "+"
+                )
 
 
     def processRestart(self):

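The uneven-load test above depends on the dispatcher always handing the next inbound socket to the worker with the smallest effective status. That policy reduces to picking a minimum, as in this sketch (the real selection orders InheritedSocketDispatcher's sockets by their WorkerStatus):

def pickWorker(loads):
    # Index of the least-loaded worker socket.
    return min(range(len(loads)), key=lambda i: loads[i])

print(pickWorker([3, 1, 2]))  # 1: the lightest worker gets the next socket
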
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/aggregate.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/aggregate.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/aggregate.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -45,13 +45,16 @@
 
         for service in services:
             if not IDirectoryService.implementedBy(service.__class__):
-                raise ValueError("Not a directory service: %s" % (service,))
+                raise ValueError(
+                    "Not a directory service: {0}".format(service)
+                )
 
             for recordType in service.recordTypes():
                 if recordType in recordTypes:
                     raise DirectoryConfigurationError(
-                        "Aggregated services may not vend the same record type: %s"
-                        % (recordType,)
+                        "Aggregated services may not vend "
+                        "the same record type: {0}"
+                        .format(recordType)
                     )
                 recordTypes.add(recordType)
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/directory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/directory.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/directory.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -47,7 +47,7 @@
     fieldName  = FieldName
 
     normalizedFields = {
-        FieldName.guid:           lambda g: UUID(g).hex,
+        FieldName.guid: lambda g: UUID(g).hex,
         FieldName.emailAddresses: lambda e: e.lower(),
     }
 
@@ -57,9 +57,9 @@
 
 
     def __repr__(self):
-        return "<%s %r>" % (
-            self.__class__.__name__,
-            self.realmName,
+        return (
+            "<{self.__class__.__name__} {self.realmName!r}>"
+            .format(self=self)
         )
 
 
@@ -76,7 +76,9 @@
             the whole directory should be searched.
         @type records: L{set} or L{frozenset}
         """
-        return fail(QueryNotSupportedError("Unknown expression: %s" % (expression,)))
+        return fail(QueryNotSupportedError(
+            "Unknown expression: {0}".format(expression)
+        ))
 
 
     @inlineCallbacks
@@ -109,7 +111,9 @@
             elif operand == Operand.OR:
                 results |= recordsMatchingExpression
             else:
-                raise QueryNotSupportedError("Unknown operand: %s" % (operand,))
+                raise QueryNotSupportedError(
+                    "Unknown operand: {0}".format(operand)
+                )
 
         returnValue(results)
 
@@ -120,12 +124,16 @@
 
     @inlineCallbacks
     def recordWithUID(self, uid):
-        returnValue(uniqueResult((yield self.recordsWithFieldValue(FieldName.uid, uid))))
-               
+        returnValue(uniqueResult(
+            (yield self.recordsWithFieldValue(FieldName.uid, uid))
+        ))
 
+
     @inlineCallbacks
     def recordWithGUID(self, guid):
-        returnValue(uniqueResult((yield self.recordsWithFieldValue(FieldName.guid, guid))))
+        returnValue(uniqueResult(
+            (yield self.recordsWithFieldValue(FieldName.guid, guid))
+        ))
 
 
     def recordsWithRecordType(self, recordType):
@@ -136,12 +144,15 @@
     def recordWithShortName(self, recordType, shortName):
         returnValue(uniqueResult((yield self.recordsFromQuery((
             MatchExpression(FieldName.recordType, recordType),
-            MatchExpression(FieldName.shortNames, shortName ),
+            MatchExpression(FieldName.shortNames, shortName),
         )))))
 
 
     def recordsWithEmailAddress(self, emailAddress):
-        return self.recordsWithFieldValue(FieldName.emailAddresses, emailAddress)
+        return self.recordsWithFieldValue(
+            FieldName.emailAddresses,
+            emailAddress,
+        )
 
 
     def updateRecords(self, records, create=False):
@@ -168,21 +179,31 @@
     def __init__(self, service, fields):
         for fieldName in self.requiredFields:
             if fieldName not in fields or not fields[fieldName]:
-                raise ValueError("%s field is required." % (fieldName,))
+                raise ValueError("{0} field is required.".format(fieldName))
 
             if FieldName.isMultiValue(fieldName):
                 values = fields[fieldName]
                 if len(values) == 0:
-                    raise ValueError("%s field must have at least one value." % (fieldName,))
+                    raise ValueError(
+                        "{0} field must have at least one value."
+                        .format(fieldName)
+                    )
                 for value in values:
                     if not value:
-                        raise ValueError("%s field must not be empty." % (fieldName,))
+                        raise ValueError(
+                            "{0} field must not be empty.".format(fieldName)
+                        )
 
-        if fields[FieldName.recordType] not in service.recordType.iterconstants():
-            raise ValueError("Record type must be one of %r, not %r." % (
-                tuple(service.recordType.iterconstants()),
-                fields[FieldName.recordType]
-            ))
+        if (
+            fields[FieldName.recordType] not in
+            service.recordType.iterconstants()
+        ):
+            raise ValueError(
+                "Record type must be one of {0!r}, not {1!r}.".format(
+                    tuple(service.recordType.iterconstants()),
+                    fields[FieldName.recordType],
+                )
+            )
 
         # Normalize fields
         normalizedFields = {}
@@ -197,16 +218,18 @@
                 normalizedFields[name] = tuple((normalize(v) for v in value))
             else:
                 normalizedFields[name] = normalize(value)
-        
+
         self.service = service
         self.fields  = normalizedFields
 
 
     def __repr__(self):
-        return "<%s (%s)%s>" % (
-            self.__class__.__name__,
-            describe(self.recordType),
-            self.shortNames[0],
+        return (
+            "<{self.__class__.__name__} ({recordType}){shortName}>".format(
+                self=self,
+                recordType=describe(self.recordType),
+                shortName=self.shortNames[0],
+            )
         )
 
 
@@ -262,9 +285,9 @@
 
     def members(self):
         if self.recordType == RecordType.group:
-            raise NotImplementedError()
+            raise NotImplementedError("Subclasses must implement members()")
         return succeed(())
 
 
     def groups(self):
-        raise NotImplementedError()
+        raise NotImplementedError("Subclasses must implement groups()")
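
For reference, the AND/OR composition in recordsFromQuery() boils down to set intersection and union over per-expression results. A synchronous sketch of that composition (illustrative; the real method is an inlineCallbacks generator working with Deferreds):

def recordsFromQuerySketch(expressions, match, operand="AND"):
    results = None
    for expression in expressions:
        matched = set(match(expression))
        if results is None:
            results = matched
        elif operand == "AND":
            results &= matched
        elif operand == "OR":
            results |= matched
        else:
            raise ValueError("Unknown operand: %s" % (operand,))
    return results if results is not None else set()

match = lambda e: set(["a", "b"]) if e == 1 else set(["b", "c"])
print(recordsFromQuerySketch([1, 2], match, "AND"))  # only "b" matches both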

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/expression.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/expression.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/expression.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -72,7 +72,11 @@
     @ivar flags: L{NamedConstant} specifying additional options
     """
 
-    def __init__(self, fieldName, fieldValue, matchType=MatchType.equals, flags=None):
+    def __init__(
+        self,
+        fieldName, fieldValue,
+        matchType=MatchType.equals, flags=None
+    ):
         self.fieldName  = fieldName
         self.fieldValue = fieldValue
         self.matchType  = matchType
@@ -85,12 +89,16 @@
         if self.flags is None:
             flags = ""
         else:
-            flags = " (%s)" % (describe(self.flags),)
+            flags = " ({0})".format(describe(self.flags))
 
-        return "<%s: %r %s %r%s>" % (
-            self.__class__.__name__,
-            describe(self.fieldName),
-            describe(self.matchType),
-            describe(self.fieldValue),
-            flags
+        return (
+            "<{self.__class__.__name__}: {fieldName!r} "
+            "{matchType} {fieldValue!r}{flags}>"
+            .format(
+                self=self,
+                fieldName=describe(self.fieldName),
+                matchType=describe(self.matchType),
+                fieldValue=describe(self.fieldValue),
+                flags=flags,
+            )
         )

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/idirectory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/idirectory.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/idirectory.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -51,16 +51,22 @@
     Directory service generic error.
     """
 
+
+
 class DirectoryConfigurationError(DirectoryServiceError):
     """
     Directory configuration error.
     """
 
+
+
 class DirectoryAvailabilityError(DirectoryServiceError):
     """
     Directory not available.
     """
 
+
+
 class UnknownRecordTypeError(DirectoryServiceError):
     """
     Unknown record type.
@@ -69,16 +75,22 @@
         DirectoryServiceError.__init__(self, token)
         self.token = token
 
+
+
 class QueryNotSupportedError(DirectoryServiceError):
     """
     Query not supported.
     """
 
+
+
 class NoSuchRecordError(DirectoryServiceError):
     """
     Record does not exist.
     """
 
+
+
 class NotAllowedError(DirectoryServiceError):
     """
     Apparently, you can't do that.
@@ -123,6 +135,7 @@
     fullNames.multiValue      = True
     emailAddresses.multiValue = True
 
+
     @staticmethod
     def isMultiValue(name):
         return getattr(name, "multiValue", False)
@@ -157,106 +170,143 @@
     A directory service may support the editing, removal and
     addition of records.
     """
-    realmName = Attribute("The name of the authentication realm this service represents.")
+    realmName = Attribute(
+        "The name of the authentication realm this service represents."
+    )
 
+
     def recordTypes():
         """
         @return: an iterable of L{NamedConstant}s denoting the record
             types that are kept in this directory.
         """
 
+
     def recordsFromExpression(self, expression):
         """
         Find records matching an expression.
+
         @param expression: an expression to apply
         @type expression: L{object}
+
         @return: a deferred iterable of matching L{IDirectoryRecord}s.
+
         @raises: L{QueryNotSupportedError} if the expression is not
             supported by this directory service.
         """
 
+
     def recordsFromQuery(expressions, operand=Operand.AND):
         """
         Find records by composing a query consisting of an iterable of
         expressions and an operand.
+
         @param expressions: expressions to query against
         @type expressions: iterable of L{object}s
+
         @param operand: an operand
         @type operand: a L{NamedConstant}
+
         @return: a deferred iterable of matching L{IDirectoryRecord}s.
+
         @raises: L{QueryNotSupportedError} if the query is not
             supported by this directory service.
         """
 
+
     def recordsWithFieldValue(fieldName, value):
         """
         Find records that have the given field name with the given
         value.
+
         @param fieldName: a field name
         @type fieldName: L{NamedConstant}
+
         @param value: a value to match
         @type value: L{bytes}
+
         @return: a deferred iterable of L{IDirectoryRecord}s.
         """
 
+
     def recordWithUID(uid):
         """
         Find the record that has the given UID.
+
         @param uid: a UID
         @type uid: L{bytes}
+
         @return: a deferred iterable of L{IDirectoryRecord}s, or
             C{None} if there is no such record.
         """
-               
+
+
     def recordWithGUID(guid):
         """
         Find the record that has the given GUID.
+
         @param guid: a GUID
         @type guid: L{bytes}
+
         @return: a deferred iterable of L{IDirectoryRecord}s, or
             C{None} if there is no such record.
         """
 
+
     def recordsWithRecordType(recordType):
         """
         Find the records that have the given record type.
+
         @param recordType: a record type
         @type recordType: L{NamedConstant}
+
         @return: a deferred iterable of L{IDirectoryRecord}s.
         """
 
+
     def recordWithShortName(recordType, shortName):
         """
         Find the record that has the given record type and short name.
+
         @param recordType: a record type
         @type recordType: L{NamedConstant}
+
         @param shortName: a short name
         @type shortName: L{bytes}
+
         @return: a deferred iterable of L{IDirectoryRecord}s, or
             C{None} if there is no such record.
         """
 
+
     def recordsWithEmailAddress(emailAddress):
         """
         Find the records that have the given email address.
+
         @param emailAddress: an email address
         @type emailAddress: L{bytes}
+
         @return: a deferred iterable of L{IDirectoryRecord}s, or
             C{None} if there is no such record.
         """
 
+
     def updateRecords(records, create=False):
         """
         Updates existing directory records.
+
         @param records: the records to update
         @type records: iterable of L{IDirectoryRecord}s
+
         @param create: if true, create records if necessary
         @type create: boolean
         """
 
+
     def removeRecords(uids):
         """
         Removes the records with the given UIDs.
+
         @param uids: the UIDs of the records to remove
         @type uids: iterable of L{bytes}
         """
@@ -294,6 +344,7 @@
     service = Attribute("The L{IDirectoryService} this record exists in.")
     fields  = Attribute("A mapping with L{NamedConstant} keys.")
 
+
     def members():
         """
         Find the records that are members of this group.  Only direct
@@ -302,6 +353,7 @@
             direct members of this group.
         """
 
+
     def groups():
         """
         Find the group records that this record is a member of.  Only

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/index.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/index.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/index.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -29,7 +29,8 @@
 from twisted.python.constants import Names, NamedConstant
 from twisted.internet.defer import succeed, inlineCallbacks, returnValue
 
-from twext.who.util import ConstantsContainer, describe, uniqueResult, iterFlags
+from twext.who.util import ConstantsContainer
+from twext.who.util import describe, uniqueResult, iterFlags
 from twext.who.idirectory import FieldName as BaseFieldName
 from twext.who.expression import MatchExpression, MatchType, MatchFlags
 from twext.who.directory import DirectoryService as BaseDirectoryService
@@ -57,7 +58,10 @@
     XML directory service.
     """
 
-    fieldName = ConstantsContainer(chain(BaseDirectoryService.fieldName.iterconstants(), FieldName.iterconstants()))
+    fieldName = ConstantsContainer(chain(
+        BaseDirectoryService.fieldName.iterconstants(),
+        FieldName.iterconstants()
+    ))
 
     indexedFields = (
         BaseFieldName.recordType,
@@ -90,7 +94,7 @@
         """
         Load records.
         """
-        raise NotImplementedError("Subclasses should implement loadRecords().")
+        raise NotImplementedError("Subclasses must implement loadRecords().")
 
 
     def flush(self):
@@ -112,7 +116,9 @@
                 elif flag == MatchFlags.caseInsensitive:
                     normalize = lambda x: x.lower()
                 else:
-                    raise NotImplementedError("Unknown query flag: %s" % (describe(flag),))
+                    raise NotImplementedError(
+                        "Unknown query flag: {0}".format(describe(flag))
+                    )
 
         return predicate, normalize
 
@@ -131,16 +137,27 @@
         matchType  = expression.matchType
 
         if matchType == MatchType.startsWith:
-            indexKeys = (key for key in fieldIndex if predicate(normalize(key).startswith(matchValue)))
+            indexKeys = (
+                key for key in fieldIndex
+                if predicate(normalize(key).startswith(matchValue))
+            )
         elif matchType == MatchType.contains:
-            indexKeys = (key for key in fieldIndex if predicate(matchValue in normalize(key)))
+            indexKeys = (
+                key for key in fieldIndex
+                if predicate(matchValue in normalize(key))
+            )
         elif matchType == MatchType.equals:
             if predicate(True):
                 indexKeys = (matchValue,)
             else:
-                indexKeys = (key for key in fieldIndex if normalize(key) != matchValue)
+                indexKeys = (
+                    key for key in fieldIndex
+                    if normalize(key) != matchValue
+                )
         else:
-            raise NotImplementedError("Unknown match type: %s" % (describe(matchType),))
+            raise NotImplementedError(
+                "Unknown match type: {0}".format(describe(matchType))
+            )
 
         matchingRecords = set()
         for key in indexKeys:
@@ -165,18 +182,25 @@
         matchType  = expression.matchType
 
         if matchType == MatchType.startsWith:
-            match = lambda fieldValue: predicate(fieldValue.startswith(matchValue))
+            match = lambda fieldValue: predicate(
+                fieldValue.startswith(matchValue)
+            )
         elif matchType == MatchType.contains:
             match = lambda fieldValue: predicate(matchValue in fieldValue)
         elif matchType == MatchType.equals:
             match = lambda fieldValue: predicate(fieldValue == matchValue)
         else:
-            raise NotImplementedError("Unknown match type: %s" % (describe(matchType),))
+            raise NotImplementedError(
+                "Unknown match type: {0}".format(describe(matchType))
+            )
 
         result = set()
 
         if records is None:
-            records = (uniqueResult(values) for values in self.index[self.fieldName.uid].itervalues())
+            records = (
+                uniqueResult(values) for values
+                in self.index[self.fieldName.uid].itervalues()
+            )
 
         for record in records:
             fieldValues = record.fields.get(expression.fieldName, None)
@@ -194,11 +218,17 @@
     def recordsFromExpression(self, expression, records=None):
         if isinstance(expression, MatchExpression):
             if expression.fieldName in self.indexedFields:
-                return self.indexedRecordsFromMatchExpression(expression, records=records)
+                return self.indexedRecordsFromMatchExpression(
+                    expression, records=records
+                )
             else:
-                return self.unIndexedRecordsFromMatchExpression(expression, records=records)
+                return self.unIndexedRecordsFromMatchExpression(
+                    expression, records=records
+                )
         else:
-            return BaseDirectoryService.recordsFromExpression(self, expression, records=records)
+            return BaseDirectoryService.recordsFromExpression(
+                self, expression, records=records
+            )
 
 
 
@@ -206,6 +236,7 @@
     """
     XML directory record
     """
+
     @inlineCallbacks
     def members(self):
         members = set()
@@ -215,4 +246,6 @@
 
 
     def groups(self):
-        return self.service.recordsWithFieldValue(FieldName.memberUIDs, self.uid)
+        return self.service.recordsWithFieldValue(
+            FieldName.memberUIDs, self.uid
+        )
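
The indexed-match logic reformatted above selects index keys by match type, then unions the record sets behind those keys. A simplified model, where the dict-of-sets index shape is an assumption made for illustration:

def matchingRecords(fieldIndex, matchValue, matchType,
                    normalize=lambda x: x):
    if matchType == "startsWith":
        keys = (k for k in fieldIndex if normalize(k).startswith(matchValue))
    elif matchType == "contains":
        keys = (k for k in fieldIndex if matchValue in normalize(k))
    elif matchType == "equals":
        keys = (matchValue,)
    else:
        raise NotImplementedError("Unknown match type: %s" % (matchType,))
    records = set()
    for key in keys:
        records |= fieldIndex.get(key, set())
    return records

index = {"wsanchez": set(["r1"]), "wilfredo": set(["r2"])}
print(matchingRecords(index, "w", "startsWith"))  # both records match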

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/util.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/util.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -40,20 +40,23 @@
         myConstants = {}
         for constant in constants:
             if constant.name in myConstants:
-                raise ValueError("Name conflict: %r" % (constant.name,))
+                raise ValueError("Name conflict: {0}".format(constant.name))
             myConstants[constant.name] = constant
 
         self._constants = myConstants
 
+
     def __getattr__(self, name):
         try:
             return self._constants[name]
         except KeyError:
             raise AttributeError(name)
 
+
     def iterconstants(self):
         return self._constants.itervalues()
 
+
     def lookupByName(self, name):
         try:
             return self._constants[name]
@@ -61,16 +64,20 @@
             raise ValueError(name)
 
 
+
 def uniqueResult(values):
     result = None
     for value in values:
         if result is None:
             result = value
         else:
-            raise DirectoryServiceError("Multiple values found where one expected.")
+            raise DirectoryServiceError(
+                "Multiple values found where one expected."
+            )
     return result
 
 
+
 def describe(constant):
     if isinstance(constant, FlagConstant):
         parts = []
@@ -81,6 +88,7 @@
         return getattr(constant, "description", constant.name)
 
 
+
 def iterFlags(flags):
     if hasattr(flags, "__iter__"):
         return flags
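
uniqueResult() above enforces at-most-one result from a record query; its core contract fits in a few lines (sketch only; the real function raises DirectoryServiceError rather than ValueError):

def uniqueResultSketch(values):
    result = None
    for value in values:
        if result is not None:
            raise ValueError("Multiple values found where one expected.")
        result = value
    return result

print(uniqueResultSketch(["only"]))  # prints: only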

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/xml.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/xml.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twext/who/xml.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -144,9 +144,11 @@
         else:
             realmName = repr(realmName)
 
-        return "<%s %s>" % (
-            self.__class__.__name__,
-            realmName,
+        return (
+            "<{self.__class__.__name__} {realmName}>".format(
+                self=self,
+                realmName=realmName,
+            )
         )
 
 
@@ -201,7 +203,10 @@
         #
         if stat:
             self.filePath.restat()
-            cacheTag = (self.filePath.getModificationTime(), self.filePath.getsize())
+            cacheTag = (
+                self.filePath.getModificationTime(),
+                self.filePath.getsize()
+            )
             if cacheTag == self._cacheTag:
                 return
         else:
@@ -225,9 +230,13 @@
         #
         directoryNode = etree.getroot()
         if directoryNode.tag != self.element.directory.value:
-            raise ParseError("Incorrect root element: %s" % (directoryNode.tag,))
+            raise ParseError(
+                "Incorrect root element: {0}".format(directoryNode.tag)
+            )
 
-        realmName = directoryNode.get(self.attribute.realm.value, "").encode("utf-8")
+        realmName = directoryNode.get(
+            self.attribute.realm.value, ""
+        ).encode("utf-8")
 
         if not realmName:
             raise ParseError("No realm name.")
@@ -239,7 +248,9 @@
 
         for recordNode in directoryNode:
             try:
-                records.add(self.parseRecordNode(recordNode, unknownFieldElements))
+                records.add(
+                    self.parseRecordNode(recordNode, unknownFieldElements)
+                )
             except UnknownRecordTypeError as e:
                 unknownRecordTypes.add(e.token)
 
@@ -277,10 +288,14 @@
 
 
     def parseRecordNode(self, recordNode, unknownFieldElements=None):
-        recordTypeAttribute = recordNode.get(self.attribute.recordType.value, "").encode("utf-8")
+        recordTypeAttribute = recordNode.get(
+            self.attribute.recordType.value, ""
+        ).encode("utf-8")
         if recordTypeAttribute:
             try:
-                recordType = self.value.lookupByValue(recordTypeAttribute).recordType
+                recordType = (
+                    self.value.lookupByValue(recordTypeAttribute).recordType
+                )
             except (ValueError, AttributeError):
                 raise UnknownRecordTypeError(recordTypeAttribute)
         else:
@@ -357,9 +372,14 @@
             for (name, value) in record.fields.items():
                 if name == self.fieldName.recordType:
                     if value in recordTypes:
-                        recordNode.set(self.attribute.recordType.value, recordTypes[value])
+                        recordNode.set(
+                            self.attribute.recordType.value,
+                            recordTypes[value]
+                        )
                     else:
-                        raise AssertionError("Unknown record type: %r" % (value,))
+                        raise AssertionError(
+                            "Unknown record type: {0}".format(value)
+                        )
 
                 else:
                     if name in fieldNames:
@@ -376,7 +396,9 @@
                             recordNode.append(subNode)
 
                     else:
-                        raise AssertionError("Unknown field name: %r" % (name,))
+                        raise AssertionError(
+                            "Unknown field name: {0!r}".format(name)
+                        )
 
         # Walk through the record nodes in the XML tree and apply
         # updates.
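
The stat-based guard near the top of this file's diff skips reparsing when
the XML file is unchanged on disk; the cache tag is simply the
(modification time, size) pair. Condensed into one helper, using the same
FilePath calls the hunk shows:

    # Sketch: returns (changed, newTag); filePath is a Twisted FilePath.
    def xmlChanged(filePath, lastCacheTag):
        filePath.restat()
        cacheTag = (filePath.getModificationTime(), filePath.getsize())
        return cacheTag != lastCacheTag, cacheTag

Note that mtime-plus-size is a heuristic: a same-size rewrite within the
filesystem's timestamp granularity would go undetected.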

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/caldavxml.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/caldavxml.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/caldavxml.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -65,7 +65,11 @@
     "calendar-query-extended",
 )
 
+caldav_timezones_by_reference_compliance = (
+    "calendar-no-timezone",
+)
 
+
 class CalDAVElement (WebDAVElement):
     """
     CalDAV XML element.

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/appleopendirectory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/appleopendirectory.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/appleopendirectory.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -1378,7 +1378,8 @@
 def buildNestedQueryFromTokens(tokens, mapping):
     """
     Build a DS query expression such that all the tokens must appear in either
-    the fullName (anywhere) or emailAddresses (at the beginning).
+    the fullName (anywhere), emailAddresses (at the beginning) or record name
+    (at the beginning).
     
     @param tokens: The tokens to search on
     @type tokens: C{list} of C{str}
@@ -1394,6 +1395,7 @@
     fields = [
         ("fullName", dsattributes.eDSContains),
         ("emailAddresses", dsattributes.eDSStartsWith),
+        ("recordName", dsattributes.eDSStartsWith),
     ]
 
     outer = []
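
To see the shape the new recordName clause produces, here is a hedged
sketch that builds the same nested filter as a flat string; the real code
composes dsquery expression objects instead, but the attribute names match
the dsAttrTypeStandard constants the tests below expect:

    # Sketch: one OR per token across the three fields, ANDed together.
    def nestedFilter(tokens):
        fields = [
            ("dsAttrTypeStandard:RealName",     "*%s*"),  # contains
            ("dsAttrTypeStandard:EMailAddress", "%s*"),   # starts with
            ("dsAttrTypeStandard:RecordName",   "%s*"),   # starts with
        ]
        perToken = [
            "(|%s)" % "".join(
                "(%s=%s)" % (attr, pat % token) for attr, pat in fields
            )
            for token in tokens
        ]
        return perToken[0] if len(perToken) == 1 else "(&%s)" % "".join(perToken)

nestedFilter(["foo", "bar"]) reproduces the two-token expectation in
test_buildquery.py below.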

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/directory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/directory.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/directory.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -533,10 +533,11 @@
         )
         for record in resources:
             guid = record.guid
-            assignments.append(("%s#calendar-proxy-write" % (guid,),
-                               record.externalProxies()))
-            assignments.append(("%s#calendar-proxy-read" % (guid,),
-                               record.externalReadOnlyProxies()))
+            if record.enabledForCalendaring:
+                assignments.append(("%s#calendar-proxy-write" % (guid,),
+                                   record.externalProxies()))
+                assignments.append(("%s#calendar-proxy-read" % (guid,),
+                                   record.externalReadOnlyProxies()))
 
         return assignments
 
@@ -813,7 +814,7 @@
             # populated the membership cache, and if so, return immediately
             if isPopulated:
                 self.log.info("Group membership cache is already populated")
-                returnValue((fast, 0))
+                returnValue((fast, 0, 0))
 
             # We don't care what others are doing right now, we need to update
             useLock = False
@@ -832,15 +833,21 @@
         else:
             self.log.info("Group membership snapshot file exists: %s" %
                 (membershipsCacheFile.path,))
-            previousMembers = pickle.loads(membershipsCacheFile.getContent())
             callGroupsChanged = True
+            try:
+                previousMembers = pickle.loads(membershipsCacheFile.getContent())
+            except:
+                self.log.warn("Could not parse snapshot; will regenerate cache")
+                fast = False
+                previousMembers = {}
+                callGroupsChanged = False
 
         if useLock:
             self.log.info("Attempting to acquire group membership cache lock")
             acquiredLock = (yield self.cache.acquireLock())
             if not acquiredLock:
                 self.log.info("Group membership cache lock held by another process")
-                returnValue((fast, 0))
+                returnValue((fast, 0, 0))
             self.log.info("Acquired lock")
 
         if not fast and self.useExternalProxies:
@@ -850,7 +857,11 @@
             if extProxyCacheFile.exists():
                 self.log.info("External proxies snapshot file exists: %s" %
                     (extProxyCacheFile.path,))
-                previousAssignments = pickle.loads(extProxyCacheFile.getContent())
+                try:
+                    previousAssignments = pickle.loads(extProxyCacheFile.getContent())
+                except:
+                    self.log.warn("Could not parse external proxies snapshot")
+                    previousAssignments = []
 
             if useLock:
                 yield self.cache.extendLock()
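
The two try/except blocks added above apply one policy: a snapshot that
fails to unpickle triggers a full regeneration instead of crashing the
updater. Reduced to a sketch (loadSnapshot is an illustrative name, not an
actual method):

    import pickle

    def loadSnapshot(snapshotFile, default):
        # Returns (data, usable); a corrupt or unreadable snapshot
        # falls back to the default and forces a slow rebuild.
        try:
            return pickle.loads(snapshotFile.getContent()), True
        except Exception:
            return default, False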

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/ldapdirectory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/ldapdirectory.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/ldapdirectory.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -391,6 +391,12 @@
 
         # Build filter
         filterstr = "(|(%s=*)(%s=*))" % (readAttr, writeAttr)
+        # ...taking into account only calendar-enabled records
+        enabledAttr = self.rdnSchema["locations"]["calendarEnabledAttr"]
+        enabledValue = self.rdnSchema["locations"]["calendarEnabledValue"]
+        if enabledAttr and enabledValue:
+            filterstr = "(&(%s=%s)%s)" % (enabledAttr, enabledValue, filterstr)
+
         attrlist = [guidAttr, readAttr, writeAttr]
 
         # Query the LDAP server
@@ -1046,7 +1052,7 @@
 
                 try:
                     record = self._ldapResultToRecord(dn, attrs, recordType)
-                    self.log.debug("Got LDAP record %s" % (record,))
+                    self.log.debug("Got LDAP record {rec}", rec=record)
 
                     if not unrestricted:
                         self.log.debug("%s is not enabled because it's not a member of group: %s" % (dn, self.restrictToGroup))

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/test/test_buildquery.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/test/test_buildquery.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/test/test_buildquery.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -140,17 +140,17 @@
         query = buildNestedQueryFromTokens(["foo"], OpenDirectoryService._ODFields)
         self.assertEquals(
             query.generate(),
-            "(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*))"
+            "(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*)(dsAttrTypeStandard:RecordName=foo*))"
         )
 
         query = buildNestedQueryFromTokens(["foo", "bar"], OpenDirectoryService._ODFields)
         self.assertEquals(
             query.generate(),
-            "(&(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*))(|(dsAttrTypeStandard:RealName=*bar*)(dsAttrTypeStandard:EMailAddress=bar*)))"
+            "(&(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*)(dsAttrTypeStandard:RecordName=foo*))(|(dsAttrTypeStandard:RealName=*bar*)(dsAttrTypeStandard:EMailAddress=bar*)(dsAttrTypeStandard:RecordName=bar*)))"
         )
 
         query = buildNestedQueryFromTokens(["foo", "bar", "baz"], OpenDirectoryService._ODFields)
         self.assertEquals(
             query.generate(),
-            "(&(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*))(|(dsAttrTypeStandard:RealName=*bar*)(dsAttrTypeStandard:EMailAddress=bar*))(|(dsAttrTypeStandard:RealName=*baz*)(dsAttrTypeStandard:EMailAddress=baz*)))"
+            "(&(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*)(dsAttrTypeStandard:RecordName=foo*))(|(dsAttrTypeStandard:RealName=*bar*)(dsAttrTypeStandard:EMailAddress=bar*)(dsAttrTypeStandard:RecordName=bar*))(|(dsAttrTypeStandard:RealName=*baz*)(dsAttrTypeStandard:EMailAddress=baz*)(dsAttrTypeStandard:RecordName=baz*)))"
         )

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/test/test_directory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/test/test_directory.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/directory/test/test_directory.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -243,7 +243,7 @@
         # Prevent an update by locking the cache
         acquiredLock = (yield cache.acquireLock())
         self.assertTrue(acquiredLock)
-        self.assertEquals((False, 0), (yield updater.updateCache()))
+        self.assertEquals((False, 0, 0), (yield updater.updateCache()))
 
         # You can't lock when already locked:
         acquiredLockAgain = (yield cache.acquireLock())
@@ -540,7 +540,167 @@
                 groups,
             )
 
+        #
+        # Now remove all external assignments; the removals should take effect.
+        #
+        def fakeExternalProxiesEmpty():
+            return []
 
+        updater = GroupMembershipCacheUpdater(
+            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30, 30,
+            cache=cache, useExternalProxies=True,
+            externalProxiesSource=fakeExternalProxiesEmpty)
+
+        yield updater.updateCache()
+
+        delegates = (
+
+            # record name
+            # read-write delegators
+            # read-only delegators
+            # groups the delegate is in (restricted to only those groups
+            #   participating in delegation)
+
+            # Note: "transporter" is now gone for everyone
+
+            ("wsanchez",
+             set(["mercury", "apollo", "orion", "gemini"]),
+             set(["non_calendar_proxy"]),
+             set(['left_coast',
+                  'both_coasts',
+                  'recursive1_coasts',
+                  'recursive2_coasts',
+                  'gemini#calendar-proxy-write',
+                ]),
+            ),
+            ("cdaboo",
+             set(["apollo", "orion", "non_calendar_proxy"]),
+             set(["non_calendar_proxy"]),
+             set(['both_coasts',
+                  'non_calendar_group',
+                  'recursive1_coasts',
+                  'recursive2_coasts',
+                ]),
+            ),
+            ("lecroy",
+             set(["apollo", "mercury", "non_calendar_proxy"]),
+             set(),
+             set(['both_coasts',
+                  'left_coast',
+                  'non_calendar_group',
+                ]),
+            ),
+        )
+
+        for name, write, read, groups in delegates:
+            delegate = self._getPrincipalByShortName(DirectoryService.recordType_users, name)
+
+            proxyFor = (yield delegate.proxyFor(True))
+            self.assertEquals(
+                set([p.record.guid for p in proxyFor]),
+                write,
+            )
+            proxyFor = (yield delegate.proxyFor(False))
+            self.assertEquals(
+                set([p.record.guid for p in proxyFor]),
+                read,
+            )
+            groupsIn = (yield delegate.groupMemberships())
+            uids = set()
+            for group in groupsIn:
+                try:
+                    uid = group.uid # a sub-principal
+                except AttributeError:
+                    uid = group.record.guid # a regular group
+                uids.add(uid)
+            self.assertEquals(
+                set(uids),
+                groups,
+            )
+
+        #
+        # Now add back an external assignment; it should take effect.
+        #
+        def fakeExternalProxiesAdded():
+            return [
+                (
+                    "transporter#calendar-proxy-write",
+                    set(["8B4288F6-CC82-491D-8EF9-642EF4F3E7D0"])
+                ),
+            ]
+
+        updater = GroupMembershipCacheUpdater(
+            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30, 30,
+            cache=cache, useExternalProxies=True,
+            externalProxiesSource=fakeExternalProxiesAdded)
+
+        yield updater.updateCache()
+
+        delegates = (
+
+            # record name
+            # read-write delegators
+            # read-only delegators
+            # groups the delegate is in (restricted to only those groups
+            #   participating in delegation)
+
+            ("wsanchez",
+             set(["mercury", "apollo", "orion", "gemini"]),
+             set(["non_calendar_proxy"]),
+             set(['left_coast',
+                  'both_coasts',
+                  'recursive1_coasts',
+                  'recursive2_coasts',
+                  'gemini#calendar-proxy-write',
+                ]),
+            ),
+            ("cdaboo",
+             set(["apollo", "orion", "non_calendar_proxy"]),
+             set(["non_calendar_proxy"]),
+             set(['both_coasts',
+                  'non_calendar_group',
+                  'recursive1_coasts',
+                  'recursive2_coasts',
+                ]),
+            ),
+            ("lecroy",
+             set(["apollo", "mercury", "non_calendar_proxy", "transporter"]),
+             set(),
+             set(['both_coasts',
+                  'left_coast',
+                  'non_calendar_group',
+                  'transporter#calendar-proxy-write',
+                ]),
+            ),
+        )
+
+        for name, write, read, groups in delegates:
+            delegate = self._getPrincipalByShortName(DirectoryService.recordType_users, name)
+
+            proxyFor = (yield delegate.proxyFor(True))
+            self.assertEquals(
+                set([p.record.guid for p in proxyFor]),
+                write,
+            )
+            proxyFor = (yield delegate.proxyFor(False))
+            self.assertEquals(
+                set([p.record.guid for p in proxyFor]),
+                read,
+            )
+            groupsIn = (yield delegate.groupMemberships())
+            uids = set()
+            for group in groupsIn:
+                try:
+                    uid = group.uid # a sub-principal
+                except AttributeError:
+                    uid = group.record.guid # a regular group
+                uids.add(uid)
+            self.assertEquals(
+                set(uids),
+                groups,
+            )
+
+
     def test_diffAssignments(self):
         """
         Ensure external proxy assignment diffing works
@@ -667,7 +827,7 @@
         # as indicated by the return value for "fast".  Note that the cache
         # is already populated so updateCache( ) in fast mode will not do
         # anything, and numMembers will be 0.
-        fast, numMembers = (yield updater.updateCache(fast=True))
+        fast, numMembers, numChanged = (yield updater.updateCache(fast=True))
         self.assertEquals(fast, True)
         self.assertEquals(numMembers, 0)
 
@@ -678,61 +838,70 @@
         self.assertEquals(numChanged, 0)
 
         # Verify the snapshot contains the pickled dictionary we expect
+        expected = {
+            "46D9D716-CBEE-490F-907A-66FA6C3767FF":
+                set([
+                    u"00599DAF-3E75-42DD-9DB7-52617E79943F",
+                ]),
+            "5A985493-EE2C-4665-94CF-4DFEA3A89500":
+                set([
+                    u"non_calendar_group",
+                    u"recursive1_coasts",
+                    u"recursive2_coasts",
+                    u"both_coasts"
+                ]),
+            "6423F94A-6B76-4A3A-815B-D52CFD77935D":
+                set([
+                    u"left_coast",
+                    u"recursive1_coasts",
+                    u"recursive2_coasts",
+                    u"both_coasts"
+                ]),
+            "5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1":
+                set([
+                    u"left_coast",
+                    u"both_coasts"
+                ]),
+            "8B4288F6-CC82-491D-8EF9-642EF4F3E7D0":
+                set([
+                    u"non_calendar_group",
+                    u"left_coast",
+                    u"both_coasts"
+                ]),
+            "left_coast":
+                 set([
+                     u"both_coasts"
+                 ]),
+            "recursive1_coasts":
+                 set([
+                     u"recursive1_coasts",
+                     u"recursive2_coasts"
+                 ]),
+            "recursive2_coasts":
+                set([
+                    u"recursive1_coasts",
+                    u"recursive2_coasts"
+                ]),
+            "right_coast":
+                set([
+                    u"both_coasts"
+                ])
+        }
         members = pickle.loads(snapshotFile.getContent())
-        self.assertEquals(
-            members,
-            {
-                "46D9D716-CBEE-490F-907A-66FA6C3767FF":
-                    set([
-                        u"00599DAF-3E75-42DD-9DB7-52617E79943F",
-                    ]),
-                "5A985493-EE2C-4665-94CF-4DFEA3A89500":
-                    set([
-                        u"non_calendar_group",
-                        u"recursive1_coasts",
-                        u"recursive2_coasts",
-                        u"both_coasts"
-                    ]),
-                "6423F94A-6B76-4A3A-815B-D52CFD77935D":
-                    set([
-                        u"left_coast",
-                        u"recursive1_coasts",
-                        u"recursive2_coasts",
-                        u"both_coasts"
-                    ]),
-                "5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1":
-                    set([
-                        u"left_coast",
-                        u"both_coasts"
-                    ]),
-                "8B4288F6-CC82-491D-8EF9-642EF4F3E7D0":
-                    set([
-                        u"non_calendar_group",
-                        u"left_coast",
-                        u"both_coasts"
-                    ]),
-                "left_coast":
-                     set([
-                         u"both_coasts"
-                     ]),
-                "recursive1_coasts":
-                     set([
-                         u"recursive1_coasts",
-                         u"recursive2_coasts"
-                     ]),
-                "recursive2_coasts":
-                    set([
-                        u"recursive1_coasts",
-                        u"recursive2_coasts"
-                    ]),
-                "right_coast":
-                    set([
-                        u"both_coasts"
-                    ])
-            }
-        )
+        self.assertEquals(members, expected)
+        
+        # "Corrupt" the snapshot and verify it is regenerated properly
+        snapshotFile.setContent("xyzzy")
+        cache.delete("group-cacher-populated")
+        fast, numMembers, numChanged = (yield updater.updateCache(fast=True))
+        self.assertEquals(fast, False)
+        self.assertEquals(numMembers, 9)
+        self.assertEquals(numChanged, 9)
+        self.assertTrue(snapshotFile.exists())
+        members = pickle.loads(snapshotFile.getContent())
+        self.assertEquals(members, expected)
+        
 
-
     def test_autoAcceptMembers(self):
         """
         autoAcceptMembers( ) returns an empty list if no autoAcceptGroup is

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/method/report_sync_collection.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/method/report_sync_collection.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/method/report_sync_collection.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -58,6 +58,14 @@
 
     responses = []
 
+    # Do not support limit
+    if sync_collection.sync_limit is not None:
+        raise HTTPError(ErrorResponse(
+            responsecode.INSUFFICIENT_STORAGE_SPACE,
+            element.NumberOfMatchesWithinLimits(),
+            "Report limit not supported",
+        ))
+
     # Process Depth and sync-level for backwards compatibility
     # Use sync-level if present and ignore Depth, else use Depth
     if sync_collection.sync_level:
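
Clients that include a DAV:limit in a sync-collection REPORT now get a hard
failure rather than a silently truncated result. The response is roughly
the following (body shape assumed from twext's ErrorResponse; not captured
from a live server):

    HTTP/1.1 507 Insufficient Storage
    Content-Type: text/xml

    <?xml version="1.0" encoding="utf-8"?>
    <error xmlns="DAV:">
      <number-of-matches-within-limits/>
    </error>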

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/resource.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/resource.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/resource.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -324,7 +324,7 @@
         @param transaction: optional transaction to use instead of associated transaction
         @type transaction: L{txdav.caldav.idav.ITransaction}
         """
-        result = yield super(CalDAVResource, self).renderHTTP(request)
+        response = yield super(CalDAVResource, self).renderHTTP(request)
         if transaction is None:
             transaction = self._associatedTransaction
         if transaction is not None:
@@ -332,9 +332,19 @@
                 yield transaction.abort()
             else:
                 yield transaction.commit()
-        returnValue(result)
 
+                # Log extended item
+                if transaction.logItems:
+                    if not hasattr(request, "extendedLogItems"):
+                        request.extendedLogItems = {}
+                    request.extendedLogItems.update(transaction.logItems)
 
+                # May need to reset the last-modified header in the response as txn.commit() can change it due to pre-commit hooks
+                if response.headers.hasHeader("last-modified"):
+                    response.headers.setHeader("last-modified", self.lastModified())
+        returnValue(response)
+
+
     # Begin transitional new-store resource interface:
 
     def copyDeadPropertiesTo(self, other):
@@ -2547,15 +2557,6 @@
         return self._newStoreHome.hasCalendarResourceUIDSomewhereElse(uid, ok_object._newStoreObject, mode)
 
 
-    def getCalendarResourcesForUID(self, uid, allow_shared=False):
-        """
-        Return all child object resources with the specified UID.
-
-        Pass through direct to store.
-        """
-        return self._newStoreHome.getCalendarResourcesForUID(uid, allow_shared)
-
-
     def defaultAccessControlList(self):
         myPrincipal = self.principalForRecord()
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/scheduling_store/caldav/resource.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/scheduling_store/caldav/resource.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/scheduling_store/caldav/resource.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -422,8 +422,12 @@
                 authz = (yield request.locateResource(principalURL))
                 self._associatedTransaction._authz_uid = authz.record.guid
 
+        # Log extended item
+        if not hasattr(request, "extendedLogItems"):
+            request.extendedLogItems = {}
+
         # This is a local CALDAV scheduling operation.
-        scheduler = CalDAVScheduler(self._associatedTransaction, self.parent._newStoreHome.uid())
+        scheduler = CalDAVScheduler(self._associatedTransaction, self.parent._newStoreHome.uid(), logItems=request.extendedLogItems)
 
         # Do the POST processing treating
         result = (yield scheduler.doSchedulingViaPOST(originator, recipients, calendar))

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/stdconfig.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/stdconfig.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/stdconfig.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -54,7 +54,7 @@
     },
     "twistedcaldav.directory.appleopendirectory.OpenDirectoryService": {
         "node": "/Search",
-        "cacheTimeout": 10, # Minutes
+        "cacheTimeout": 1, # Minutes
         "batchSize": 100, # for splitting up large queries
         "negativeCaching": False,
         "restrictEnabledRecords": False,
@@ -62,7 +62,7 @@
         "recordTypes": ("users", "groups"),
     },
     "twistedcaldav.directory.ldapdirectory.LdapDirectoryService": {
-        "cacheTimeout": 10, # Minutes
+        "cacheTimeout": 1, # Minutes
         "negativeCaching": False,
         "warningThresholdSeconds": 3,
         "batchSize": 500, # for splitting up large queries
@@ -307,9 +307,15 @@
     "FailIfUpgradeNeeded"  : True, # Set to True to prevent the server or utility tools
                                    # tools from running if the database needs a schema
                                    # upgrade.
-    "StopAfterUpgradeTriggerFile" : "stop_after_upgrade", # if this file exists
-        # in ConfigRoot, stop the service after finishing upgrade phase
+    "StopAfterUpgradeTriggerFile" : "stop_after_upgrade",   # if this file exists in ConfigRoot, stop
+                                                            # the service after finishing upgrade phase
 
+    "UpgradeHomePrefix"    : "",    # When upgrading, only upgrade homes where the owner UID starts with
+                                    # with the specified prefix. The upgrade will only be partial and only
+                                    # apply to upgrade pieces that affect entire homes. The upgrade will
+                                    # need to be run again without this prefix set to complete the overall
+                                    # upgrade.
+
     #
     # Types of service provided
     #
@@ -449,6 +455,7 @@
     #
     "AccessLogFile"  : "access.log", # Apache-style access log
     "ErrorLogFile"   : "error.log", # Server activity log
+    "AgentLogFile"   : "agent.log", # Agent activity log
     "ErrorLogEnabled"   : True, # True = use log file, False = stdout
     "ErrorLogRotateMB"  : 10, # Rotate error log after so many megabytes
     "ErrorLogMaxRotatedFiles"  : 5, # Retain this many error log files
@@ -563,8 +570,8 @@
         }
     },
 
-    "EnableTimezonesByReference" : False, # Strip out VTIMEZONES that are known
-    "UsePackageTimezones" : False, # Use timezone data from twistedcaldav.zoneinfo - don't copy to Data directory
+    "EnableTimezonesByReference" : True, # Strip out VTIMEZONES that are known
+    "UsePackageTimezones"        : False, # Use timezone data from twistedcaldav.zoneinfo - don't copy to Data directory
 
     "EnableBatchUpload"       : True, # POST batch uploads
     "MaxResourcesBatchUpload" : 100, # Maximum number of resources in a batch POST
@@ -825,7 +832,12 @@
                                    # connections used per worker process.
 
     "ListenBacklog": 2024,
-    "IdleConnectionTimeOut": 15,
+
+    "IncomingDataTimeOut": 60,          # Max. time between request lines
+    "PipelineIdleTimeOut": 15,          # Max. time between pipelined requests
+    "IdleConnectionTimeOut": 60 * 6,    # Max. time for response processing
+    "CloseConnectionTimeOut": 15,       # Max. time for client close
+
     "UIDReservationTimeOut": 30 * 60,
 
     "MaxMultigetWithDataHrefs": 5000,
@@ -996,6 +1008,10 @@
     # America/Los_Angeles.
     "DefaultTimezone" : "",
 
+    # After this many seconds of no admin requests, shut down the agent.  Zero
+    # means no automatic shutdown.
+    "AgentInactivityTimeoutSeconds"  : 4 * 60 * 60,
+
     # These two aren't relative to ConfigRoot:
     "Includes": [], # Other plists to parse after this one
     "WritableConfigFile" : "", # which config file calendarserver_config should
@@ -1082,6 +1098,7 @@
     ("ConfigRoot", ("Scheduling", "iSchedule", "DKIM", "PrivateExchanges",)),
     ("LogRoot", "AccessLogFile"),
     ("LogRoot", "ErrorLogFile"),
+    ("LogRoot", "AgentLogFile"),
     ("LogRoot", ("Postgres", "LogFile",)),
     ("LogRoot", ("LogDatabase", "StatisticsLogFile",)),
     ("LogRoot", "AccountingLogRoot"),
@@ -1537,6 +1554,8 @@
             compliance += caldavxml.caldav_managed_attachments_compliance
         if configDict.Scheduling.Options.TimestampAttendeePartStatChanges:
             compliance += customxml.calendarserver_partstat_changes_compliance
+        if configDict.EnableTimezonesByReference:
+            compliance += caldavxml.caldav_timezones_by_reference_compliance
     else:
         compliance = ()
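
Deployments that tune the new connection timeouts would override them in a
caldavd plist along these lines (values shown are the defaults above; the
enclosing <dict> is elided):

    <key>IncomingDataTimeOut</key>
    <integer>60</integer>
    <key>PipelineIdleTimeOut</key>
    <integer>15</integer>
    <key>IdleConnectionTimeOut</key>
    <integer>360</integer>
    <key>CloseConnectionTimeOut</key>
    <integer>15</integer>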
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/storebridge.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/storebridge.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/storebridge.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -77,7 +77,7 @@
 import hashlib
 import time
 import uuid
-from twext.web2 import responsecode
+from twext.web2 import responsecode, http_headers, http
 from twext.web2.iweb import IResponse
 from twistedcaldav.customxml import calendarserver_namespace
 from twistedcaldav.instance import InvalidOverriddenInstanceError, \
@@ -2222,6 +2222,41 @@
         response.headers.setHeader("content-type", self.contentType())
         returnValue(response)
 
+
+    @inlineCallbacks
+    def checkPreconditions(self, request):
+        """
+        We override the base class to trap the failure case and process any Prefer header.
+        """
+
+        try:
+            response = yield super(_CommonObjectResource, self).checkPreconditions(request)
+        except HTTPError as e:
+            if e.response.code == responsecode.PRECONDITION_FAILED:
+                response = yield self._processPrefer(request, e.response)
+                raise HTTPError(response)
+            else:
+                raise
+
+        returnValue(response)
+
+
+    @inlineCallbacks
+    def _processPrefer(self, request, response):
+        # Look for Prefer header
+        prefer = request.headers.getHeader("prefer", {})
+        returnRepresentation = any([key == "return" and value == "representation" for key, value, _ignore_args in prefer])
+
+        if returnRepresentation and (response.code / 100 == 2 or response.code == responsecode.PRECONDITION_FAILED):
+            oldcode = response.code
+            response = (yield self.http_GET(request))
+            if oldcode in (responsecode.CREATED, responsecode.PRECONDITION_FAILED):
+                response.code = oldcode
+            response.headers.removeHeader("content-location")
+            response.headers.setHeader("content-location", self.url())
+
+        returnValue(response)
+
     # The following are used to map store exceptions into HTTP error responses
     StoreExceptionsStatusErrors = set()
     StoreExceptionsErrors = {}
@@ -2601,7 +2636,76 @@
         AttachmentRemoveFailed: (caldav_namespace, "valid-attachment-remove",),
     }
 
+
     @inlineCallbacks
+    def _checkPreconditions(self, request):
+        """
+        We override the base class to handle the special implicit scheduling weak ETag behavior
+        for compatibility with old clients using If-Match.
+        """
+
+        if config.Scheduling.CalDAV.ScheduleTagCompatibility:
+
+            if self.exists():
+                etags = self.scheduleEtags
+                if len(etags) > 1:
+                    # This is almost verbatim from twext.web2.static.checkPreconditions
+                    if request.method not in ("GET", "HEAD"):
+
+                        # Always test against the current etag first just in case schedule-etags is out of sync
+                        etag = (yield self.etag())
+                        etags = (etag,) + tuple([http_headers.ETag(schedule_etag) for schedule_etag in etags])
+
+                        # Loop over each tag and succeed if any one matches, else re-raise last exception
+                        exists = self.exists()
+                        last_modified = self.lastModified()
+                        last_exception = None
+                        for etag in etags:
+                            try:
+                                http.checkPreconditions(
+                                    request,
+                                    entityExists=exists,
+                                    etag=etag,
+                                    lastModified=last_modified,
+                                )
+                            except HTTPError as e:
+                                last_exception = e
+                            else:
+                                break
+                        else:
+                            if last_exception:
+                                raise last_exception
+
+                    # Check per-method preconditions
+                    method = getattr(self, "preconditions_" + request.method, None)
+                    if method:
+                        returnValue((yield method(request)))
+                    else:
+                        returnValue(None)
+
+        result = (yield super(CalendarObjectResource, self).checkPreconditions(request))
+        returnValue(result)
+
+
+    @inlineCallbacks
+    def checkPreconditions(self, request):
+        """
+        We override the base class to do special schedule tag processing.
+        """
+
+        try:
+            response = yield self._checkPreconditions(request)
+        except HTTPError as e:
+            if e.response.code == responsecode.PRECONDITION_FAILED:
+                response = yield self._processPrefer(request, e.response)
+                raise HTTPError(response)
+            else:
+                raise
+
+        returnValue(response)
+
+
+    @inlineCallbacks
     def http_PUT(self, request):
 
         # Content-type check
@@ -2615,7 +2719,14 @@
             ))
 
         # Do schedule tag check
-        schedule_tag_match = self.validIfScheduleMatch(request)
+        try:
+            schedule_tag_match = self.validIfScheduleMatch(request)
+        except HTTPError as e:
+            if e.response.code == responsecode.PRECONDITION_FAILED:
+                response = yield self._processPrefer(request, e.response)
+                raise HTTPError(response)
+            else:
+                raise
 
         # Read the calendar component from the stream
         try:
@@ -2681,18 +2792,9 @@
 
                 request.addResponseFilter(_removeEtag, atEnd=True)
 
-            # Look for Prefer header
-            prefer = request.headers.getHeader("prefer", {})
-            returnRepresentation = any([key == "return" and value == "representation" for key, value, _ignore_args in prefer])
+            # Handle Prefer header
+            response = yield self._processPrefer(request, response)
 
-            if returnRepresentation and response.code / 100 == 2:
-                oldcode = response.code
-                response = (yield self.http_GET(request))
-                if oldcode == responsecode.CREATED:
-                    response.code = responsecode.CREATED
-                response.headers.removeHeader("content-location")
-                response.headers.setHeader("content-location", self.url())
-
             returnValue(response)
 
         # Handle the various store errors
@@ -2871,18 +2973,12 @@
                 raise
 
         # Look for Prefer header
-        prefer = request.headers.getHeader("prefer", {})
-        returnRepresentation = any([key == "return" and value == "representation" for key, value, _ignore_args in prefer])
-        if returnRepresentation:
-            result = (yield self.render(request))
-            result.code = OK
-            result.headers.removeHeader("content-location")
-            result.headers.setHeader("content-location", request.path)
-        else:
-            result = post_result
+        result = yield self._processPrefer(request, post_result)
+
         if action in ("attachment-add", "attachment-update",):
             result.headers.setHeader("location", location)
             result.headers.addRawHeader("Cal-Managed-ID", attachment.managedID())
+
         returnValue(result)
 
 
@@ -3313,17 +3409,8 @@
                 request.addResponseFilter(_removeEtag, atEnd=True)
 
             # Look for Prefer header
-            prefer = request.headers.getHeader("prefer", {})
-            returnRepresentation = any([key == "return" and value == "representation" for key, value, _ignore_args in prefer])
+            response = yield self._processPrefer(request, response)
 
-            if returnRepresentation and response.code / 100 == 2:
-                oldcode = response.code
-                response = (yield self.http_GET(request))
-                if oldcode == responsecode.CREATED:
-                    response.code = responsecode.CREATED
-                response.headers.removeHeader("content-location")
-                response.headers.setHeader("content-location", self.url())
-
             returnValue(response)
 
         # Handle the various store errors

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Africa/Juba.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Africa/Juba.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Africa/Juba.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -9,7 +9,7 @@
 DTSTART:19310101T000000
 RDATE:19310101T000000
 TZNAME:CAST
-TZOFFSETFROM:+020624
+TZOFFSETFROM:+021008
 TZOFFSETTO:+0200
 END:STANDARD
 BEGIN:DAYLIGHT

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Anguilla.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Anguilla.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Anguilla.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -9,7 +9,7 @@
 DTSTART:19120302T000000
 RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-041216
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Araguaina.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Araguaina.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Araguaina.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -30,6 +30,7 @@
 RDATE:19981011T000000
 RDATE:19991003T000000
 RDATE:20021103T000000
+RDATE:20121021T000000
 TZNAME:BRST
 TZOFFSETFROM:-0300
 TZOFFSETTO:-0200
@@ -64,7 +65,7 @@
 RDATE:19980301T000000
 RDATE:19990221T000000
 RDATE:20000227T000000
-RDATE:20150222T000000
+RDATE:20130217T000000
 TZNAME:BRT
 TZOFFSETFROM:-0200
 TZOFFSETTO:-0300
@@ -94,6 +95,7 @@
 DTSTART:19900917T000000
 RDATE:19900917T000000
 RDATE:20030924T000000
+RDATE:20130901T000000
 TZNAME:BRT
 TZOFFSETFROM:-0300
 TZOFFSETTO:-0300
@@ -119,26 +121,5 @@
 TZOFFSETFROM:-0200
 TZOFFSETTO:-0300
 END:STANDARD
-BEGIN:DAYLIGHT
-DTSTART:20121021T000000
-RRULE:FREQ=YEARLY;BYDAY=3SU;BYMONTH=10
-TZNAME:BRST
-TZOFFSETFROM:-0300
-TZOFFSETTO:-0200
-END:DAYLIGHT
-BEGIN:STANDARD
-DTSTART:20130217T000000
-RRULE:FREQ=YEARLY;UNTIL=20140216T020000Z;BYDAY=3SU;BYMONTH=2
-TZNAME:BRT
-TZOFFSETFROM:-0200
-TZOFFSETTO:-0300
-END:STANDARD
-BEGIN:STANDARD
-DTSTART:20160221T000000
-RRULE:FREQ=YEARLY;UNTIL=20220220T020000Z;BYDAY=3SU;BYMONTH=2
-TZNAME:BRT
-TZOFFSETFROM:-0200
-TZOFFSETTO:-0300
-END:STANDARD
 END:VTIMEZONE
 END:VCALENDAR

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Argentina/San_Luis.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Argentina/San_Luis.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Argentina/San_Luis.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -144,6 +144,7 @@
 BEGIN:STANDARD
 DTSTART:19910601T000000
 RDATE:19910601T000000
+RDATE:20091011T000000
 TZNAME:ART
 TZOFFSETFROM:-0400
 TZOFFSETTO:-0300
@@ -178,7 +179,7 @@
 END:STANDARD
 BEGIN:DAYLIGHT
 DTSTART:20081012T000000
-RRULE:FREQ=YEARLY;UNTIL=20091011T040000Z;BYDAY=2SU;BYMONTH=10
+RDATE:20081012T000000
 TZNAME:WARST
 TZOFFSETFROM:-0400
 TZOFFSETTO:-0300

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Aruba.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Aruba.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Aruba.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -9,7 +9,7 @@
 DTSTART:19120212T000000
 RDATE:19120212T000000
 TZNAME:ANT
-TZOFFSETFROM:-044024
+TZOFFSETFROM:-043547
 TZOFFSETTO:-0430
 END:STANDARD
 BEGIN:STANDARD

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Cayman.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Cayman.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Cayman.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -10,13 +10,13 @@
 RDATE:18900101T000000
 TZNAME:KMT
 TZOFFSETFROM:-052532
-TZOFFSETTO:-050712
+TZOFFSETTO:-050711
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19120201T000000
 RDATE:19120201T000000
 TZNAME:EST
-TZOFFSETFROM:-050712
+TZOFFSETFROM:-050711
 TZOFFSETTO:-0500
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Dominica.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Dominica.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Dominica.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,10 +6,10 @@
 TZID:America/Dominica
 X-LIC-LOCATION:America/Dominica
 BEGIN:STANDARD
-DTSTART:19110701T000100
-RDATE:19110701T000100
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-040536
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Grand_Turk.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Grand_Turk.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Grand_Turk.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -10,13 +10,13 @@
 RDATE:18900101T000000
 TZNAME:KMT
 TZOFFSETFROM:-044432
-TZOFFSETTO:-050712
+TZOFFSETTO:-050711
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19120201T000000
 RDATE:19120201T000000
 TZNAME:EST
-TZOFFSETFROM:-050712
+TZOFFSETFROM:-050711
 TZOFFSETTO:-0500
 END:STANDARD
 BEGIN:DAYLIGHT

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Grenada.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Grenada.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Grenada.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,10 +6,10 @@
 TZID:America/Grenada
 X-LIC-LOCATION:America/Grenada
 BEGIN:STANDARD
-DTSTART:19110701T000000
-RDATE:19110701T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-0407
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Guadeloupe.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Guadeloupe.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Guadeloupe.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,10 +6,10 @@
 TZID:America/Guadeloupe
 X-LIC-LOCATION:America/Guadeloupe
 BEGIN:STANDARD
-DTSTART:19110608T000000
-RDATE:19110608T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-040608
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Jamaica.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Jamaica.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Jamaica.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -9,14 +9,14 @@
 DTSTART:18900101T000000
 RDATE:18900101T000000
 TZNAME:KMT
-TZOFFSETFROM:-050712
-TZOFFSETTO:-050712
+TZOFFSETFROM:-050711
+TZOFFSETTO:-050711
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19120201T000000
 RDATE:19120201T000000
 TZNAME:EST
-TZOFFSETFROM:-050712
+TZOFFSETFROM:-050711
 TZOFFSETTO:-0500
 END:STANDARD
 BEGIN:STANDARD

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Marigot.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Marigot.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Marigot.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,10 +6,10 @@
 TZID:America/Marigot
 X-LIC-LOCATION:America/Marigot
 BEGIN:STANDARD
-DTSTART:19110608T000000
-RDATE:19110608T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-040608
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Montserrat.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Montserrat.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Montserrat.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,10 +6,10 @@
 TZID:America/Montserrat
 X-LIC-LOCATION:America/Montserrat
 BEGIN:STANDARD
-DTSTART:19110701T000100
-RDATE:19110701T000100
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-040852
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Barthelemy.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Barthelemy.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Barthelemy.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,10 +6,10 @@
 TZID:America/St_Barthelemy
 X-LIC-LOCATION:America/St_Barthelemy
 BEGIN:STANDARD
-DTSTART:19110608T000000
-RDATE:19110608T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-040608
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Kitts.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Kitts.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Kitts.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -9,7 +9,7 @@
 DTSTART:19120302T000000
 RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-041052
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Lucia.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Lucia.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Lucia.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,17 +6,10 @@
 TZID:America/St_Lucia
 X-LIC-LOCATION:America/St_Lucia
 BEGIN:STANDARD
-DTSTART:18900101T000000
-RDATE:18900101T000000
-TZNAME:CMT
-TZOFFSETFROM:-0404
-TZOFFSETTO:-0404
-END:STANDARD
-BEGIN:STANDARD
-DTSTART:19120101T000000
-RDATE:19120101T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-0404
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Thomas.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Thomas.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Thomas.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,10 +6,10 @@
 TZID:America/St_Thomas
 X-LIC-LOCATION:America/St_Thomas
 BEGIN:STANDARD
-DTSTART:19110701T000000
-RDATE:19110701T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-041944
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Vincent.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Vincent.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/St_Vincent.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,17 +6,10 @@
 TZID:America/St_Vincent
 X-LIC-LOCATION:America/St_Vincent
 BEGIN:STANDARD
-DTSTART:18900101T000000
-RDATE:18900101T000000
-TZNAME:KMT
-TZOFFSETFROM:-040456
-TZOFFSETTO:-040456
-END:STANDARD
-BEGIN:STANDARD
-DTSTART:19120101T000000
-RDATE:19120101T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-040456
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Tortola.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Tortola.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Tortola.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,10 +6,10 @@
 TZID:America/Tortola
 X-LIC-LOCATION:America/Tortola
 BEGIN:STANDARD
-DTSTART:19110701T000000
-RDATE:19110701T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-041828
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Virgin.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Virgin.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/America/Virgin.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,10 +6,10 @@
 TZID:America/Virgin
 X-LIC-LOCATION:America/Virgin
 BEGIN:STANDARD
-DTSTART:19110701T000000
-RDATE:19110701T000000
+DTSTART:19120302T000000
+RDATE:19120302T000000
 TZNAME:AST
-TZOFFSETFROM:-041944
+TZOFFSETFROM:-040604
 TZOFFSETTO:-0400
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Antarctica/McMurdo.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Antarctica/McMurdo.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Antarctica/McMurdo.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,13 +6,62 @@
 TZID:Antarctica/McMurdo
 X-LIC-LOCATION:Antarctica/McMurdo
 BEGIN:STANDARD
-DTSTART:19560101T000000
-RDATE:19560101T000000
+DTSTART:18681102T000000
+RDATE:18681102T000000
 TZNAME:NZST
-TZOFFSETFROM:+0000
+TZOFFSETFROM:+113904
+TZOFFSETTO:+1130
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19271106T020000
+RDATE:19271106T020000
+TZNAME:NZST
+TZOFFSETFROM:+1130
+TZOFFSETTO:+1230
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19280304T020000
+RDATE:19280304T020000
+TZNAME:NZMT
+TZOFFSETFROM:+1230
+TZOFFSETTO:+1130
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19281014T020000
+RRULE:FREQ=YEARLY;UNTIL=19331007T143000Z;BYDAY=2SU;BYMONTH=10
+TZNAME:NZST
+TZOFFSETFROM:+1130
 TZOFFSETTO:+1200
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19290317T020000
+RRULE:FREQ=YEARLY;UNTIL=19330318T140000Z;BYDAY=3SU;BYMONTH=3
+TZNAME:NZMT
+TZOFFSETFROM:+1200
+TZOFFSETTO:+1130
 END:STANDARD
+BEGIN:STANDARD
+DTSTART:19340429T020000
+RRULE:FREQ=YEARLY;UNTIL=19400427T140000Z;BYDAY=-1SU;BYMONTH=4
+TZNAME:NZMT
+TZOFFSETFROM:+1200
+TZOFFSETTO:+1130
+END:STANDARD
 BEGIN:DAYLIGHT
+DTSTART:19340930T020000
+RRULE:FREQ=YEARLY;UNTIL=19400928T143000Z;BYDAY=-1SU;BYMONTH=9
+TZNAME:NZST
+TZOFFSETFROM:+1130
+TZOFFSETTO:+1200
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19460101T000000
+RDATE:19460101T000000
+TZNAME:NZST
+TZOFFSETFROM:+1200
+TZOFFSETTO:+1200
+END:STANDARD
+BEGIN:DAYLIGHT
 DTSTART:19741103T020000
 RDATE:19741103T020000
 RDATE:19891008T020000
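
Antarctica/McMurdo (and Antarctica/South_Pole below) now inherits New Zealand's full NZMT/NZST history instead of a single 1956 transition, so the recurring pre-war DST rules are expressed as RRULEs. A minimal sketch of expanding one of the RRULEs above into concrete onset dates, using python-dateutil (an assumption; any RFC 5545 recurrence engine would do):

    # Expand RRULE:FREQ=YEARLY;UNTIL=19331007T143000Z;BYDAY=2SU;BYMONTH=10
    # from the 1928 DAYLIGHT component above into its occurrences.
    from datetime import datetime
    from dateutil.rrule import rrule, YEARLY, SU

    onsets = rrule(
        YEARLY,
        dtstart=datetime(1928, 10, 14, 2, 0),  # DTSTART:19281014T020000 (local)
        until=datetime(1933, 10, 8, 2, 0),     # UNTIL, shifted from UTC to +1130 local
        byweekday=SU(2),                       # BYDAY=2SU
        bymonth=10,                            # BYMONTH=10
    )
    for dt in onsets:
        print(dt)  # second Sunday of October, 1928 through 1933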

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Antarctica/South_Pole.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Antarctica/South_Pole.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Antarctica/South_Pole.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,13 +6,62 @@
 TZID:Antarctica/South_Pole
 X-LIC-LOCATION:Antarctica/South_Pole
 BEGIN:STANDARD
-DTSTART:19560101T000000
-RDATE:19560101T000000
+DTSTART:18681102T000000
+RDATE:18681102T000000
 TZNAME:NZST
-TZOFFSETFROM:+0000
+TZOFFSETFROM:+113904
+TZOFFSETTO:+1130
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19271106T020000
+RDATE:19271106T020000
+TZNAME:NZST
+TZOFFSETFROM:+1130
+TZOFFSETTO:+1230
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19280304T020000
+RDATE:19280304T020000
+TZNAME:NZMT
+TZOFFSETFROM:+1230
+TZOFFSETTO:+1130
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19281014T020000
+RRULE:FREQ=YEARLY;UNTIL=19331007T143000Z;BYDAY=2SU;BYMONTH=10
+TZNAME:NZST
+TZOFFSETFROM:+1130
 TZOFFSETTO:+1200
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19290317T020000
+RRULE:FREQ=YEARLY;UNTIL=19330318T140000Z;BYDAY=3SU;BYMONTH=3
+TZNAME:NZMT
+TZOFFSETFROM:+1200
+TZOFFSETTO:+1130
 END:STANDARD
+BEGIN:STANDARD
+DTSTART:19340429T020000
+RRULE:FREQ=YEARLY;UNTIL=19400427T140000Z;BYDAY=-1SU;BYMONTH=4
+TZNAME:NZMT
+TZOFFSETFROM:+1200
+TZOFFSETTO:+1130
+END:STANDARD
 BEGIN:DAYLIGHT
+DTSTART:19340930T020000
+RRULE:FREQ=YEARLY;UNTIL=19400928T143000Z;BYDAY=-1SU;BYMONTH=9
+TZNAME:NZST
+TZOFFSETFROM:+1130
+TZOFFSETTO:+1200
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19460101T000000
+RDATE:19460101T000000
+TZNAME:NZST
+TZOFFSETFROM:+1200
+TZOFFSETTO:+1200
+END:STANDARD
+BEGIN:DAYLIGHT
 DTSTART:19741103T020000
 RDATE:19741103T020000
 RDATE:19891008T020000

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Amman.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Amman.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Amman.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -106,7 +106,7 @@
 END:DAYLIGHT
 BEGIN:DAYLIGHT
 DTSTART:20020328T235959
-RRULE:FREQ=YEARLY;BYDAY=-1TH;BYMONTH=3
+RRULE:FREQ=YEARLY;UNTIL=20120329T215959Z;BYDAY=-1TH;BYMONTH=3
 TZNAME:EEST
 TZOFFSETFROM:+0200
 TZOFFSETTO:+0300
@@ -118,26 +118,12 @@
 TZOFFSETFROM:+0300
 TZOFFSETTO:+0200
 END:STANDARD
-BEGIN:DAYLIGHT
-DTSTART:20130328T235959
-RDATE:20130328T235959
-TZNAME:EEST
-TZOFFSETFROM:+0300
-TZOFFSETTO:+0300
-END:DAYLIGHT
 BEGIN:STANDARD
-DTSTART:20131025T010000
-RRULE:FREQ=YEARLY;BYDAY=-1FR;BYMONTH=10
-TZNAME:EET
+DTSTART:20121026T010000
+RDATE:20121026T010000
+TZNAME:AST
 TZOFFSETFROM:+0300
-TZOFFSETTO:+0200
-END:STANDARD
-BEGIN:DAYLIGHT
-DTSTART:20140327T235959
-RRULE:FREQ=YEARLY;BYDAY=-1TH;BYMONTH=3
-TZNAME:EEST
-TZOFFSETFROM:+0200
 TZOFFSETTO:+0300
-END:DAYLIGHT
+END:STANDARD
 END:VTIMEZONE
 END:VCALENDAR

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Dili.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Dili.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Dili.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -29,7 +29,7 @@
 BEGIN:STANDARD
 DTSTART:19760503T000000
 RDATE:19760503T000000
-TZNAME:CIT
+TZNAME:WITA
 TZOFFSETFROM:+0900
 TZOFFSETTO:+0800
 END:STANDARD

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Gaza.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Gaza.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Gaza.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -43,6 +43,7 @@
 RDATE:20090904T010000
 RDATE:20100811T000000
 RDATE:20110801T000000
+RDATE:20120921T010000
 TZNAME:EET
 TZOFFSETFROM:+0300
 TZOFFSETTO:+0200
@@ -186,7 +187,7 @@
 TZOFFSETTO:+0300
 END:DAYLIGHT
 BEGIN:STANDARD
-DTSTART:20120921T010000
+DTSTART:20130927T000000
 RRULE:FREQ=YEARLY;BYDAY=FR;BYMONTHDAY=21,22,23,24,25,26,27;BYMONTH=9
 TZNAME:EET
 TZOFFSETFROM:+0300

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Hebron.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Hebron.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Hebron.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -44,6 +44,7 @@
 RDATE:20100811T000000
 RDATE:20110801T000000
 RDATE:20110930T000000
+RDATE:20120921T010000
 TZNAME:EET
 TZOFFSETFROM:+0300
 TZOFFSETTO:+0200
@@ -178,7 +179,7 @@
 TZOFFSETTO:+0300
 END:DAYLIGHT
 BEGIN:STANDARD
-DTSTART:20120921T010000
+DTSTART:20130927T000000
 RRULE:FREQ=YEARLY;BYDAY=FR;BYMONTHDAY=21,22,23,24,25,26,27;BYMONTH=9
 TZNAME:EET
 TZOFFSETFROM:+0300

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Jakarta.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Jakarta.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Jakarta.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -8,7 +8,7 @@
 BEGIN:STANDARD
 DTSTART:18670810T000000
 RDATE:18670810T000000
-TZNAME:JMT
+TZNAME:BMT
 TZOFFSETFROM:+070712
 TZOFFSETTO:+070712
 END:STANDARD
@@ -22,7 +22,7 @@
 BEGIN:STANDARD
 DTSTART:19321101T000000
 RDATE:19321101T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0720
 TZOFFSETTO:+0730
 END:STANDARD
@@ -36,28 +36,28 @@
 BEGIN:STANDARD
 DTSTART:19450923T000000
 RDATE:19450923T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0900
 TZOFFSETTO:+0730
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19480501T000000
 RDATE:19480501T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0730
 TZOFFSETTO:+0800
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19500501T000000
 RDATE:19500501T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0800
 TZOFFSETTO:+0730
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19640101T000000
 RDATE:19640101T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0730
 TZOFFSETTO:+0700
 END:STANDARD

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Jayapura.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Jayapura.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Jayapura.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -8,7 +8,7 @@
 BEGIN:STANDARD
 DTSTART:19321101T000000
 RDATE:19321101T000000
-TZNAME:EIT
+TZNAME:WIT
 TZOFFSETFROM:+092248
 TZOFFSETTO:+0900
 END:STANDARD
@@ -22,7 +22,7 @@
 BEGIN:STANDARD
 DTSTART:19640101T000000
 RDATE:19640101T000000
-TZNAME:EIT
+TZNAME:WIT
 TZOFFSETFROM:+0930
 TZOFFSETTO:+0900
 END:STANDARD

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Makassar.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Makassar.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Makassar.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -15,7 +15,7 @@
 BEGIN:STANDARD
 DTSTART:19321101T000000
 RDATE:19321101T000000
-TZNAME:CIT
+TZNAME:WITA
 TZOFFSETFROM:+075736
 TZOFFSETTO:+0800
 END:STANDARD
@@ -29,7 +29,7 @@
 BEGIN:STANDARD
 DTSTART:19450923T000000
 RDATE:19450923T000000
-TZNAME:CIT
+TZNAME:WITA
 TZOFFSETFROM:+0900
 TZOFFSETTO:+0800
 END:STANDARD

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Pontianak.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Pontianak.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Pontianak.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -15,7 +15,7 @@
 BEGIN:STANDARD
 DTSTART:19321101T000000
 RDATE:19321101T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+071720
 TZOFFSETTO:+0730
 END:STANDARD
@@ -29,35 +29,35 @@
 BEGIN:STANDARD
 DTSTART:19450923T000000
 RDATE:19450923T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0900
 TZOFFSETTO:+0730
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19480501T000000
 RDATE:19480501T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0730
 TZOFFSETTO:+0800
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19500501T000000
 RDATE:19500501T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0800
 TZOFFSETTO:+0730
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19640101T000000
 RDATE:19640101T000000
-TZNAME:CIT
+TZNAME:WITA
 TZOFFSETFROM:+0730
 TZOFFSETTO:+0800
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19880101T000000
 RDATE:19880101T000000
-TZNAME:WIT
+TZNAME:WIB
 TZOFFSETFROM:+0800
 TZOFFSETTO:+0700
 END:STANDARD

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Ujung_Pandang.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Ujung_Pandang.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Asia/Ujung_Pandang.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -15,7 +15,7 @@
 BEGIN:STANDARD
 DTSTART:19321101T000000
 RDATE:19321101T000000
-TZNAME:CIT
+TZNAME:WITA
 TZOFFSETFROM:+075736
 TZOFFSETTO:+0800
 END:STANDARD
@@ -29,7 +29,7 @@
 BEGIN:STANDARD
 DTSTART:19450923T000000
 RDATE:19450923T000000
-TZNAME:CIT
+TZNAME:WITA
 TZOFFSETFROM:+0900
 TZOFFSETTO:+0800
 END:STANDARD

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Europe/Busingen.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Europe/Busingen.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Europe/Busingen.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,17 +6,17 @@
 TZID:Europe/Busingen
 X-LIC-LOCATION:Europe/Busingen
 BEGIN:STANDARD
-DTSTART:18480912T000000
-RDATE:18480912T000000
+DTSTART:18530716T000000
+RDATE:18530716T000000
 TZNAME:BMT
 TZOFFSETFROM:+003408
-TZOFFSETTO:+002944
+TZOFFSETTO:+002946
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:18940601T000000
 RDATE:18940601T000000
 TZNAME:CEST
-TZOFFSETFROM:+002944
+TZOFFSETFROM:+002946
 TZOFFSETTO:+0100
 END:STANDARD
 BEGIN:DAYLIGHT

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Europe/Vaduz.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Europe/Vaduz.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Europe/Vaduz.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,10 +6,31 @@
 TZID:Europe/Vaduz
 X-LIC-LOCATION:Europe/Vaduz
 BEGIN:STANDARD
+DTSTART:18530716T000000
+RDATE:18530716T000000
+TZNAME:BMT
+TZOFFSETFROM:+003408
+TZOFFSETTO:+002946
+END:STANDARD
+BEGIN:STANDARD
 DTSTART:18940601T000000
 RDATE:18940601T000000
+TZNAME:CEST
+TZOFFSETFROM:+002946
+TZOFFSETTO:+0100
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19410505T010000
+RRULE:FREQ=YEARLY;UNTIL=19420504T000000Z;BYDAY=1MO;BYMONTH=5
+TZNAME:CEST
+TZOFFSETFROM:+0100
+TZOFFSETTO:+0200
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:19411006T020000
+RRULE:FREQ=YEARLY;UNTIL=19421005T000000Z;BYDAY=1MO;BYMONTH=10
 TZNAME:CET
-TZOFFSETFROM:+003804
+TZOFFSETFROM:+0200
 TZOFFSETTO:+0100
 END:STANDARD
 BEGIN:STANDARD

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Europe/Zurich.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Europe/Zurich.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Europe/Zurich.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,17 +6,17 @@
 TZID:Europe/Zurich
 X-LIC-LOCATION:Europe/Zurich
 BEGIN:STANDARD
-DTSTART:18480912T000000
-RDATE:18480912T000000
+DTSTART:18530716T000000
+RDATE:18530716T000000
 TZNAME:BMT
 TZOFFSETFROM:+003408
-TZOFFSETTO:+002944
+TZOFFSETTO:+002946
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:18940601T000000
 RDATE:18940601T000000
 TZNAME:CEST
-TZOFFSETFROM:+002944
+TZOFFSETFROM:+002946
 TZOFFSETTO:+0100
 END:STANDARD
 BEGIN:DAYLIGHT

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Jamaica.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Jamaica.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Jamaica.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -9,14 +9,14 @@
 DTSTART:18900101T000000
 RDATE:18900101T000000
 TZNAME:KMT
-TZOFFSETFROM:-050712
-TZOFFSETTO:-050712
+TZOFFSETFROM:-050711
+TZOFFSETTO:-050711
 END:STANDARD
 BEGIN:STANDARD
 DTSTART:19120201T000000
 RDATE:19120201T000000
 TZNAME:EST
-TZOFFSETFROM:-050712
+TZOFFSETFROM:-050711
 TZOFFSETTO:-0500
 END:STANDARD
 BEGIN:STANDARD

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Pacific/Fiji.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Pacific/Fiji.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Pacific/Fiji.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -43,7 +43,7 @@
 END:STANDARD
 BEGIN:DAYLIGHT
 DTSTART:20101024T020000
-RRULE:FREQ=YEARLY;BYDAY=-2SU;BYMONTH=10
+RRULE:FREQ=YEARLY;BYDAY=SU;BYMONTHDAY=21,22,23,24,25,26,27;BYMONTH=10
 TZNAME:FJST
 TZOFFSETFROM:+1200
 TZOFFSETTO:+1300

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Pacific/Johnston.ics
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Pacific/Johnston.ics	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/Pacific/Johnston.ics	2013-11-01 22:25:30 UTC (rev 11871)
@@ -6,10 +6,33 @@
 TZID:Pacific/Johnston
 X-LIC-LOCATION:Pacific/Johnston
 BEGIN:STANDARD
-DTSTART:18000101T000000
-RDATE:18000101T000000
+DTSTART:18960113T120000
+RDATE:18960113T120000
 TZNAME:HST
-TZOFFSETFROM:-1000
+TZOFFSETFROM:-103126
+TZOFFSETTO:-1030
+END:STANDARD
+BEGIN:STANDARD
+DTSTART:19330430T020000
+RDATE:19330430T020000
+RDATE:19420209T020000
+TZNAME:HDT
+TZOFFSETFROM:-1030
+TZOFFSETTO:-0930
+END:STANDARD
+BEGIN:STANDARD
+DTSTART:19330521T120000
+RDATE:19330521T120000
+RDATE:19450930T020000
+TZNAME:HST
+TZOFFSETFROM:-0930
+TZOFFSETTO:-1030
+END:STANDARD
+BEGIN:STANDARD
+DTSTART:19470608T020000
+RDATE:19470608T020000
+TZNAME:HST
+TZOFFSETFROM:-1030
 TZOFFSETTO:-1000
 END:STANDARD
 END:VTIMEZONE

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/links.txt
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/links.txt	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/links.txt	2013-11-01 22:25:30 UTC (rev 11871)
@@ -1,4 +1,4 @@
-America/Virgin	America/St_Thomas
+America/Virgin	America/Port_of_Spain
 America/Buenos_Aires	America/Argentina/Buenos_Aires
 Hongkong	Asia/Hong_Kong
 Etc/GMT+0	Etc/GMT
@@ -6,25 +6,28 @@
 Australia/South	Australia/Adelaide
 America/Atka	America/Adak
 America/Coral_Harbour	America/Atikokan
-Africa/Asmera	Africa/Asmara
-America/Fort_Wayne	America/Indiana/Indianapolis
-Australia/LHI	Australia/Lord_Howe
+America/St_Lucia	America/Port_of_Spain
+Canada/Newfoundland	America/St_Johns
+America/Montserrat	America/Port_of_Spain
 PRC	Asia/Shanghai
 US/Mountain	America/Denver
 Asia/Thimbu	Asia/Thimphu
 America/Shiprock	America/Denver
+America/Grenada	America/Port_of_Spain
 Europe/Podgorica	Europe/Belgrade
+Africa/Juba	Africa/Khartoum
 Brazil/DeNoronha	America/Noronha
 Jamaica	America/Jamaica
 Arctic/Longyearbyen	Europe/Oslo
 Europe/Guernsey	Europe/London
 GB	Europe/London
-Canada/Mountain	America/Edmonton
+America/Aruba	America/Curacao
 Chile/EasterIsland	Pacific/Easter
 Etc/Universal	Etc/UTC
 Navajo	America/Denver
 America/Indianapolis	America/Indiana/Indianapolis
 Pacific/Truk	Pacific/Chuuk
+Canada/Mountain	America/Edmonton
 Pacific/Yap	Pacific/Chuuk
 America/Ensenada	America/Tijuana
 Europe/Sarajevo	Europe/Belgrade
@@ -46,19 +49,25 @@
 Asia/Saigon	Asia/Ho_Chi_Minh
 ROC	Asia/Taipei
 America/Louisville	America/Kentucky/Louisville
-America/St_Barthelemy	America/Guadeloupe
+America/St_Barthelemy	America/Port_of_Spain
+America/St_Thomas	America/Port_of_Spain
 America/Porto_Acre	America/Rio_Branco
-Europe/Isle_of_Man	Europe/London
+America/Rosario	America/Argentina/Cordoba
+America/Guadeloupe	America/Port_of_Spain
 Australia/West	Australia/Perth
 US/Eastern	America/New_York
 Libya	Africa/Tripoli
+America/Fort_Wayne	America/Indiana/Indianapolis
+Antarctica/McMurdo	Pacific/Auckland
 Canada/Saskatchewan	America/Regina
+Canada/Pacific	America/Vancouver
 Canada/Eastern	America/Toronto
 Iran	Asia/Tehran
 GB-Eire	Europe/London
 Etc/Greenwich	Etc/GMT
 Atlantic/Jan_Mayen	Europe/Oslo
 US/Central	America/Chicago
+America/St_Vincent	America/Port_of_Spain
 US/Pacific	America/Los_Angeles
 Portugal	Europe/Lisbon
 Europe/Tiraspol	Europe/Chisinau
@@ -70,7 +79,7 @@
 Asia/Ulan_Bator	Asia/Ulaanbaatar
 Kwajalein	Pacific/Kwajalein
 Australia/Yancowinna	Australia/Broken_Hill
-America/Marigot	America/Guadeloupe
+America/Marigot	America/Port_of_Spain
 America/Lower_Princes	America/Curacao
 Greenwich	Etc/GMT
 America/Mendoza	America/Argentina/Mendoza
@@ -82,7 +91,7 @@
 Asia/Tel_Aviv	Asia/Jerusalem
 Mexico/General	America/Mexico_City
 Asia/Istanbul	Europe/Istanbul
-America/Rosario	America/Argentina/Cordoba
+Europe/Isle_of_Man	Europe/London
 GMT0	Etc/GMT
 Europe/Mariehamn	Europe/Helsinki
 Australia/Victoria	Australia/Melbourne
@@ -96,27 +105,33 @@
 Asia/Ashkhabad	Asia/Ashgabat
 America/Knox_IN	America/Indiana/Knox
 America/Catamarca	America/Argentina/Catamarca
+Zulu	Etc/UTC
 GMT+0	Etc/GMT
 Poland	Europe/Warsaw
 Pacific/Samoa	Pacific/Pago_Pago
 US/Indiana-Starke	America/Indiana/Knox
-Canada/Newfoundland	America/St_Johns
+Australia/LHI	Australia/Lord_Howe
+Pacific/Johnston	Pacific/Honolulu
 GMT	Etc/GMT
 Canada/Yukon	America/Whitehorse
 Canada/Atlantic	America/Halifax
 US/Arizona	America/Phoenix
 Europe/San_Marino	Europe/Rome
 Australia/NSW	Australia/Sydney
-Canada/Pacific	America/Vancouver
+America/St_Kitts	America/Port_of_Spain
+Brazil/East	America/Sao_Paulo
 Etc/Zulu	Etc/UTC
+Singapore	Asia/Singapore
 Europe/Ljubljana	Europe/Belgrade
 US/Alaska	America/Anchorage
 Atlantic/Faeroe	Atlantic/Faroe
 Etc/GMT-0	Etc/GMT
+America/Anguilla	America/Port_of_Spain
 Israel	Asia/Jerusalem
 UCT	Etc/UCT
 NZ-CHAT	Pacific/Chatham
 Iceland	Atlantic/Reykjavik
+Brazil/Acre	America/Rio_Branco
 Europe/Vatican	Europe/Rome
 Australia/Queensland	Australia/Brisbane
 Africa/Timbuktu	Africa/Bamako
@@ -131,9 +146,9 @@
 Canada/Central	America/Winnipeg
 GMT-0	Etc/GMT
 W-SU	Europe/Moscow
-Zulu	Etc/UTC
+America/Dominica	America/Port_of_Spain
 Egypt	Africa/Cairo
-Singapore	Asia/Singapore
-Brazil/Acre	America/Rio_Branco
-Brazil/East	America/Sao_Paulo
-Antarctica/South_Pole	Antarctica/McMurdo
\ No newline at end of file
+America/Tortola	America/Port_of_Spain
+Europe/Vaduz	Europe/Zurich
+Africa/Asmera	Africa/Asmara
+Antarctica/South_Pole	Pacific/Auckland
\ No newline at end of file
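
The links.txt rewrite re-points the Caribbean aliases (Anguilla, Dominica, Grenada, Guadeloupe, Marigot, Montserrat, St_Barthelemy, St_Kitts, St_Lucia, St_Thomas, St_Vincent, Tortola, Virgin) at America/Port_of_Spain and moves Antarctica/McMurdo and Antarctica/South_Pole under Pacific/Auckland, tracking the upstream tzdata Link entries. A minimal sketch of loading the file, assuming the one alias/canonical pair per line layout shown above:

    # Build an alias -> canonical map from links.txt. Timezone IDs contain
    # no whitespace, so a plain split() per line is sufficient.
    def load_links(path="twistedcaldav/zoneinfo/links.txt"):
        links = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 2:
                    alias, canonical = parts
                    links[alias] = canonical
        return links

    links = load_links()
    assert links["America/Tortola"] == "America/Port_of_Spain"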

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/timezones.xml
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/timezones.xml	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/timezones.xml	2013-11-01 22:25:30 UTC (rev 11871)
@@ -2,7 +2,7 @@
 <!DOCTYPE timezones SYSTEM "timezones.dtd">
 
 <timezones>
-  <dtstamp>2013-07-11T02:11:45Z</dtstamp>
+  <dtstamp>2013-10-01T01:19:11Z</dtstamp>
   <timezone>
     <tzid>Africa/Abidjan</tzid>
     <dtstamp>2011-10-05T11:50:21Z</dtstamp>
@@ -138,8 +138,8 @@
   </timezone>
   <timezone>
     <tzid>Africa/Juba</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>2cecec633d0950df56d2022393afdfdb</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>3f633cfde1a12e6f297ba54460659a71</md5>
   </timezone>
   <timezone>
     <tzid>Africa/Kampala</tzid>
@@ -149,6 +149,7 @@
   <timezone>
     <tzid>Africa/Khartoum</tzid>
     <dtstamp>2011-10-05T11:50:21Z</dtstamp>
+    <alias>Africa/Juba</alias>
     <md5>e4a944da17c50b3e031e19dee17bec58</md5>
   </timezone>
   <timezone>
@@ -292,8 +293,8 @@
   </timezone>
   <timezone>
     <tzid>America/Anguilla</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>3a0d92a114885c5ee40e6b4115e7d144</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>dbe16a1225d25666094e89067392e9c8</md5>
   </timezone>
   <timezone>
     <tzid>America/Antigua</tzid>
@@ -302,8 +303,8 @@
   </timezone>
   <timezone>
     <tzid>America/Araguaina</tzid>
-    <dtstamp>2013-01-14T15:32:16Z</dtstamp>
-    <md5>2cac2a50050e86a3dcf0ce0c3aadcafd</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>4d0786c2a5a830c11420baa3adb032df</md5>
   </timezone>
   <timezone>
     <tzid>America/Argentina/Buenos_Aires</tzid>
@@ -364,8 +365,8 @@
   </timezone>
   <timezone>
     <tzid>America/Argentina/San_Luis</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>783baf3a55ec90ab162cb47c3fd07121</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>31db41adcfc7e217968729395ff3e670</md5>
   </timezone>
   <timezone>
     <tzid>America/Argentina/Tucuman</tzid>
@@ -379,8 +380,8 @@
   </timezone>
   <timezone>
     <tzid>America/Aruba</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>473119154a575c5de70495c9082565f2</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>877fdd70d2d3bfc3043c0a12ff8030af</md5>
   </timezone>
   <timezone>
     <tzid>America/Asuncion</tzid>
@@ -480,8 +481,8 @@
   </timezone>
   <timezone>
     <tzid>America/Cayman</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>07ca09e17378e117aac517b98ef07824</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>aed22af0be0d3c839b3ac941a21711de</md5>
   </timezone>
   <timezone>
     <tzid>America/Chicago</tzid>
@@ -524,6 +525,7 @@
     <dtstamp>2013-05-08T18:04:04Z</dtstamp>
     <alias>America/Kralendijk</alias>
     <alias>America/Lower_Princes</alias>
+    <alias>America/Aruba</alias>
     <md5>0b270fa38a9e55a4c48facbf5be02f99</md5>
   </timezone>
   <timezone>
@@ -557,8 +559,8 @@
   </timezone>
   <timezone>
     <tzid>America/Dominica</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>86c1ba04b479911b0cf0aa917a76e3fd</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>7a07f99ab572aeac2baa3466c4ac60c5</md5>
   </timezone>
   <timezone>
     <tzid>America/Edmonton</tzid>
@@ -608,20 +610,18 @@
   </timezone>
   <timezone>
     <tzid>America/Grand_Turk</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>794fd7b29a023a5722b25b99bbb6281d</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>494b352a3fb06a2b4a4dd169aa3b98db</md5>
   </timezone>
   <timezone>
     <tzid>America/Grenada</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>32c4916ced899420efcc39a4ca47936e</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>ed3d7b7bb03baf025941c7939ea85ece</md5>
   </timezone>
   <timezone>
     <tzid>America/Guadeloupe</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <alias>America/St_Barthelemy</alias>
-    <alias>America/Marigot</alias>
-    <md5>4b93fee3397a9dfc3687da25df948494</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>7b37cd74d65c5961c765350b1f492663</md5>
   </timezone>
   <timezone>
     <tzid>America/Guatemala</tzid>
@@ -717,9 +717,9 @@
   </timezone>
   <timezone>
     <tzid>America/Jamaica</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
     <alias>Jamaica</alias>
-    <md5>d724fa4276cb5420ecc60d5371e4ceef</md5>
+    <md5>b7185b6351db3d2c351f83b1166c490d</md5>
   </timezone>
   <timezone>
     <tzid>America/Jujuy</tzid>
@@ -796,8 +796,8 @@
   </timezone>
   <timezone>
     <tzid>America/Marigot</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>5112b932cc80557d4e01190ab86f19de</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>219cf3ff91c93b07dc71298421f9d0de</md5>
   </timezone>
   <timezone>
     <tzid>America/Martinique</tzid>
@@ -868,8 +868,8 @@
   </timezone>
   <timezone>
     <tzid>America/Montserrat</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>1278c06be965a9444decd86efc81338d</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>33c697bb4f58afd1a902018247cd21e4</md5>
   </timezone>
   <timezone>
     <tzid>America/Nassau</tzid>
@@ -947,6 +947,19 @@
   <timezone>
     <tzid>America/Port_of_Spain</tzid>
     <dtstamp>2011-10-05T11:50:21Z</dtstamp>
+    <alias>America/Virgin</alias>
+    <alias>America/St_Lucia</alias>
+    <alias>America/Montserrat</alias>
+    <alias>America/Grenada</alias>
+    <alias>America/St_Barthelemy</alias>
+    <alias>America/St_Thomas</alias>
+    <alias>America/Guadeloupe</alias>
+    <alias>America/St_Vincent</alias>
+    <alias>America/Marigot</alias>
+    <alias>America/St_Kitts</alias>
+    <alias>America/Anguilla</alias>
+    <alias>America/Dominica</alias>
+    <alias>America/Tortola</alias>
     <md5>e0bb07b4ce7859ca493cb6bba549e114</md5>
   </timezone>
   <timezone>
@@ -1047,8 +1060,8 @@
   </timezone>
   <timezone>
     <tzid>America/St_Barthelemy</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>0df0f96dd6aee2faae600ea4bda5792f</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>282f73e528b10401ba322ab01a1c7bd3</md5>
   </timezone>
   <timezone>
     <tzid>America/St_Johns</tzid>
@@ -1058,24 +1071,23 @@
   </timezone>
   <timezone>
     <tzid>America/St_Kitts</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>9b1065952186f4159a5aafe130eef8e2</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>40a657ac17ce9e12105d6895084ed655</md5>
   </timezone>
   <timezone>
     <tzid>America/St_Lucia</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>7cc48ba354a2f44b1a516c388ea6ac6f</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>76cf7c0ae9c69e499de421ecb41ada4b</md5>
   </timezone>
   <timezone>
     <tzid>America/St_Thomas</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <alias>America/Virgin</alias>
-    <md5>f35dd65d25337d2b67195a4000765881</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>0dac89af79b0fa3b1d67d7a6a63aaa11</md5>
   </timezone>
   <timezone>
     <tzid>America/St_Vincent</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>e34a65b69696732682902a6bba3abb29</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>6aae72797c8fea31921bfa1b996b1442</md5>
   </timezone>
   <timezone>
     <tzid>America/Swift_Current</tzid>
@@ -1112,8 +1124,8 @@
   </timezone>
   <timezone>
     <tzid>America/Tortola</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>36252a7ac5c1544d56691117fe4bedf0</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>c19dd4b8748b9ffeb5aa0cc21718d26e</md5>
   </timezone>
   <timezone>
     <tzid>America/Vancouver</tzid>
@@ -1123,8 +1135,8 @@
   </timezone>
   <timezone>
     <tzid>America/Virgin</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>302f38a85c5ed04952bed5372587578e</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>7f6b5b25ece02b385733e3a4a49f7167</md5>
   </timezone>
   <timezone>
     <tzid>America/Whitehorse</tzid>
@@ -1175,9 +1187,8 @@
   </timezone>
   <timezone>
     <tzid>Antarctica/McMurdo</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <alias>Antarctica/South_Pole</alias>
-    <md5>7866bc7215b5160ba92b9c0ff17f2567</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>3e1599b00f2814dec105fff3868e2232</md5>
   </timezone>
   <timezone>
     <tzid>Antarctica/Palmer</tzid>
@@ -1191,8 +1202,8 @@
   </timezone>
   <timezone>
     <tzid>Antarctica/South_Pole</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>ecbf324f6216e2aba53f2d333c26141e</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>01586fbc05c637aed3ec1f6cf889b872</md5>
   </timezone>
   <timezone>
     <tzid>Antarctica/Syowa</tzid>
@@ -1221,8 +1232,8 @@
   </timezone>
   <timezone>
     <tzid>Asia/Amman</tzid>
-    <dtstamp>2013-01-14T15:32:16Z</dtstamp>
-    <md5>3d5145f59e99e4245ccca5484b38b271</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>00094f838d542836f35b1d3d0293512c</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Anadyr</tzid>
@@ -1329,8 +1340,8 @@
   </timezone>
   <timezone>
     <tzid>Asia/Dili</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>51ad0f3231ff8a47222ed92137ea4dc3</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>f846195e2b9f145c2a35abda88302238</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Dubai</tzid>
@@ -1344,8 +1355,8 @@
   </timezone>
   <timezone>
     <tzid>Asia/Gaza</tzid>
-    <dtstamp>2013-05-08T18:04:04Z</dtstamp>
-    <md5>17173f5c545937b19c7dba20cc4c7b97</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>656f56b232fb5ad6fb2e25a64086a44c</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Harbin</tzid>
@@ -1354,8 +1365,8 @@
   </timezone>
   <timezone>
     <tzid>Asia/Hebron</tzid>
-    <dtstamp>2013-05-08T18:04:04Z</dtstamp>
-    <md5>1909080f7bc3c9c602627b4123dd13a9</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>1198057afbbaf92ca0f34b8c16416d74</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Ho_Chi_Minh</tzid>
@@ -1386,13 +1397,13 @@
   </timezone>
   <timezone>
     <tzid>Asia/Jakarta</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>361f6e5683f19c99e1f024b3b80227be</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>37eb197c796a861a7817f06380623146</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Jayapura</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>292c823058149d8c8bee5398924bf64a</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>8fcec2bd8414e2cc845c807af45d1dce</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Jerusalem</tzid>
@@ -1481,9 +1492,9 @@
   </timezone>
   <timezone>
     <tzid>Asia/Makassar</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
     <alias>Asia/Ujung_Pandang</alias>
-    <md5>efbc6213ee5099feeafaeacd6bbbb797</md5>
+    <md5>d34ae21548d56ea2b62eb890559d46f0</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Manila</tzid>
@@ -1528,8 +1539,8 @@
   </timezone>
   <timezone>
     <tzid>Asia/Pontianak</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>5558eaba9bfdf39ef008593707cadcda</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>257fd7f7bf01752d97f04d4deaff03be</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Pyongyang</tzid>
@@ -1635,8 +1646,8 @@
   </timezone>
   <timezone>
     <tzid>Asia/Ujung_Pandang</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>d05b22df61dea5d57753440e8b5ef386</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>8a094c3a682a26dbdbb212bcc01e2a7e</md5>
   </timezone>
   <timezone>
     <tzid>Asia/Ulaanbaatar</tzid>
@@ -2234,8 +2245,8 @@
   </timezone>
   <timezone>
     <tzid>Europe/Busingen</tzid>
-    <dtstamp>2013-05-08T18:04:04Z</dtstamp>
-    <md5>3a97a0f0c013fde482c37540d3d105eb</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>7e93edc4d979424daf4521a8e39fc4df</md5>
   </timezone>
   <timezone>
     <tzid>Europe/Chisinau</tzid>
@@ -2452,8 +2463,8 @@
   </timezone>
   <timezone>
     <tzid>Europe/Vaduz</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>a8a4e48e0a06cd9b54304b82614447c1</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>f751185606fd0cdcd1d2cf2a1bfd7d4b</md5>
   </timezone>
   <timezone>
     <tzid>Europe/Vatican</tzid>
@@ -2493,9 +2504,10 @@
   </timezone>
   <timezone>
     <tzid>Europe/Zurich</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
     <alias>Europe/Busingen</alias>
-    <md5>189add82d7c3280b544ca70f5696e68c</md5>
+    <alias>Europe/Vaduz</alias>
+    <md5>f4cfe31d995ca98d545a03ef60ebbbee</md5>
   </timezone>
   <timezone>
     <tzid>GB</tzid>
@@ -2614,8 +2626,8 @@
   </timezone>
   <timezone>
     <tzid>Jamaica</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>1f8889ee038dede3ef4868055adf897a</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>b5f083a6081a40b4525e7c8e2da9e963</md5>
   </timezone>
   <timezone>
     <tzid>Japan</tzid>
@@ -2696,6 +2708,8 @@
     <tzid>Pacific/Auckland</tzid>
     <dtstamp>2011-10-05T11:50:21Z</dtstamp>
     <alias>NZ</alias>
+    <alias>Antarctica/McMurdo</alias>
+    <alias>Antarctica/South_Pole</alias>
     <md5>31b52d15573225aff7940c24fbe45343</md5>
   </timezone>
   <timezone>
@@ -2734,8 +2748,8 @@
   </timezone>
   <timezone>
     <tzid>Pacific/Fiji</tzid>
-    <dtstamp>2013-05-08T18:04:04Z</dtstamp>
-    <md5>bdf37be1c81f84c63dcea56d21f02928</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>0cf1c77fa2dc0d8ea0383afddf501e17</md5>
   </timezone>
   <timezone>
     <tzid>Pacific/Funafuti</tzid>
@@ -2766,12 +2780,13 @@
     <tzid>Pacific/Honolulu</tzid>
     <dtstamp>2011-10-05T11:50:21Z</dtstamp>
     <alias>US/Hawaii</alias>
+    <alias>Pacific/Johnston</alias>
     <md5>be013195b929c48b73f0234a5226a763</md5>
   </timezone>
   <timezone>
     <tzid>Pacific/Johnston</tzid>
-    <dtstamp>2011-10-05T11:50:21Z</dtstamp>
-    <md5>fdd50497d420099a0f7faabcc47e967e</md5>
+    <dtstamp>2013-10-01T01:19:11Z</dtstamp>
+    <md5>82a4fca854a65c81f3c9548471270441</md5>
   </timezone>
   <timezone>
     <tzid>Pacific/Kiritimati</tzid>

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/version.txt
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/version.txt	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/twistedcaldav/zoneinfo/version.txt	2013-11-01 22:25:30 UTC (rev 11871)
@@ -1 +1 @@
-IANA Timezone Registry: 2013d
\ No newline at end of file
+IANA Timezone Registry: 2013f
\ No newline at end of file
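
This registry bump from 2013d to 2013f is what drives the zoneinfo churn above: the Indonesian abbreviation renames (WIB/WITA/WIT replacing WIT/CIT/EIT), the Caribbean link consolidation, and the refreshed dtstamp/md5 pairs in timezones.xml. A trivial sketch for reading the bundled version string (the file is a single line with no trailing newline, as the diff markers note):

    def zoneinfo_version(path="twistedcaldav/zoneinfo/version.txt"):
        with open(path) as f:
            return f.read().strip()  # e.g. "IANA Timezone Registry: 2013f"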

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/base/datastore/subpostgres.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/base/datastore/subpostgres.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/base/datastore/subpostgres.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -454,6 +454,10 @@
             self.deactivateDelayedShutdown()
 
         def gotReady(result):
+            """
+            We started postgres; we're responsible for stopping it later.
+            Call pgCtl status to get the pid.
+            """
             log.warn("{cmd} exited", cmd=pgCtl)
             self.shouldStopDatabase = True
             d = Deferred()
@@ -463,15 +467,34 @@
                 env=self.env, path=self.workingDir.path,
                 uid=self.uid, gid=self.gid,
             )
-            d.addCallback(gotStatus)
+            return d.addCallback(gotStatus)
 
-        def reportit(f):
-            log.failure("starting postgres", f)
+        def couldNotStart(f):
+            """
+            There was an error trying to start postgres.  Try to connect
+            because it might already be running.  In this case, we won't
+            be the one to stop it.
+            """
+            d = Deferred()
+            statusMonitor = CapturingProcessProtocol(d, None)
+            self.reactor.spawnProcess(
+                statusMonitor, pgCtl, [pgCtl, "status"],
+                env=self.env, path=self.workingDir.path,
+                uid=self.uid, gid=self.gid,
+            )
+            return d.addCallback(gotStatus).addErrback(giveUp)
+
+        def giveUp(f):
+            """
+            We can't start postgres or connect to a running instance.  Shut
+            down.
+            """
+            log.failure("Can't start or connect to postgres", f)
             self.deactivateDelayedShutdown()
             self.reactor.stop()
-            
+
         self.monitor.completionDeferred.addCallback(
-            gotReady).addErrback(reportit)
+            gotReady).addErrback(couldNotStart)
 
     shouldStopDatabase = False
 
@@ -549,6 +572,7 @@
 #        d.addCallback(maybeStopSubprocess)
 #        return d
 
+
     def hardStop(self):
         """
         Stop postgres quickly by sending it SIGQUIT
@@ -556,5 +580,5 @@
         if self._postgresPid is not None:
             try:
                 os.kill(self._postgresPid, signal.SIGQUIT)
-            except OSError: 
+            except OSError:
                 pass
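
The subpostgres change replaces a log-and-stop errback with a two-stage fallback: if pg_ctl start fails, run pg_ctl status in case postgres is already running (in which case this process is not responsible for stopping it), and only stop the reactor if that also fails. A minimal sketch of the Deferred chaining pattern, with stand-in callables rather than the real subprocess plumbing:

    # start() and connect() return Deferreds; giveUp() handles final failure.
    # The chain mirrors gotReady -> couldNotStart -> giveUp above.
    from twisted.internet.defer import fail, succeed

    def startWithFallback(start, connect, giveUp):
        return start().addErrback(lambda _f: connect()).addErrback(giveUp)

    # Usage: starting fails, but an already-running instance answers.
    d = startWithFallback(
        lambda: fail(RuntimeError("pg_ctl start failed")),  # simulated failure
        lambda: succeed("connected to running postgres"),   # simulated success
        lambda f: f,
    )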

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/base/datastore/test/test_subpostgres.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/base/datastore/test/test_subpostgres.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/base/datastore/test/test_subpostgres.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -189,5 +189,3 @@
         cursor.execute("select * from import_test_table")
         values = cursor.fetchall()
         self.assertEquals(values, [["value1"], ["value2"]])
-
-

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/base/datastore/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/base/datastore/util.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/base/datastore/util.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -92,6 +92,12 @@
         return "objectWithName:%s:%s" % (homeResourceID, name)
 
 
+    # Home child objects by id
+
+    def keyForObjectWithResourceID(self, homeResourceID, resourceID):
+        return "objectWithName:%s:%s" % (homeResourceID, resourceID)
+
+
     # Home metadata (Created/Modified)
 
     def keyForHomeMetaData(self, homeResourceID):

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/file.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/file.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/file.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -128,7 +128,7 @@
     @inlineCallbacks
     def hasCalendarResourceUIDSomewhereElse(self, uid, ok_object, type):
 
-        objectResources = (yield self.objectResourcesWithUID(uid, ("inbox",)))
+        objectResources = (yield self.getCalendarResourcesForUID(uid))
         for objectResource in objectResources:
             if ok_object and objectResource._path == ok_object._path:
                 continue
@@ -140,14 +140,9 @@
 
 
     @inlineCallbacks
-    def getCalendarResourcesForUID(self, uid, allow_shared=False):
+    def getCalendarResourcesForUID(self, uid):
 
-        results = []
-        objectResources = (yield self.objectResourcesWithUID(uid, ("inbox",)))
-        for objectResource in objectResources:
-            if allow_shared or objectResource._parentCollection.owned():
-                results.append(objectResource)
-
+        results = (yield self.objectResourcesWithUID(uid, ("inbox",)))
         returnValue(results)
 
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/schedule.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/schedule.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/schedule.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -112,8 +112,8 @@
         return self._calendarHome.hasCalendarResourceUIDSomewhereElse(uid, ok_object, type)
 
 
-    def getCalendarResourcesForUID(self, uid, allow_shared=False):
-        return self._calendarHome.getCalendarResourcesForUID(uid, allow_shared)
+    def getCalendarResourcesForUID(self, uid):
+        return self._calendarHome.getCalendarResourcesForUID(uid)
 
 
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/imip/inbound.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/imip/inbound.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/imip/inbound.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -49,11 +49,11 @@
 # specifically, "Unhandled unsolicited response" nonsense.
 #
 class IMAPLogger(LegacyLogger):
-    def emit(self, level, message=None, **kwargs):
-        if message is not None and message.startswith("Unhandled unsolicited response:"):
+    def msg(self, *message, **kwargs):
+        if message and message[0].startswith("Unhandled unsolicited response:"):
             return
 
-        super(IMAPLogger, self).emit(self, level, message, **kwargs)
+        super(IMAPLogger, self).msg(self, *message, **kwargs)
 
 imap4.log = IMAPLogger()
 
@@ -112,6 +112,11 @@
             from twisted.internet import reactor
         self.reactor = reactor
 
+        # If we're using our dedicated account on our local server, we're free
+        # to delete all messages that arrive in the inbox so as to not let
+        # cruft build up
+        self.deleteAllMail = shouldDeleteAllMail(config.ServerHostName,
+            settings.Server, settings.Username)
         self.mailReceiver = MailReceiver(store, directory)
         mailType = settings['Type']
         if mailType.lower().startswith('pop'):
@@ -127,7 +132,8 @@
 
 
     def fetchMail(self):
-        return self.point.connect(self.factory(self.settings, self.mailReceiver))
+        return self.point.connect(self.factory(self.settings, self.mailReceiver,
+            self.deleteAllMail))
 
 
     @inlineCallbacks
@@ -138,6 +144,29 @@
 
 
 
+def shouldDeleteAllMail(serverHostName, inboundServer, username):
+    """
+    Given the hostname of the calendar server, the hostname of the pop/imap
+    server, and the username we're using to access inbound mail, determine
+    whether we should delete all messages in the inbox or whether to leave
+    all unprocessed messages.
+
+    @param serverHostName: the calendar server hostname (config.ServerHostName)
+    @type serverHostName: C{str}
+    @param inboundServer: the pop/imap server hostname
+    @type inboundServer: C{str}
+    @param username: the name of the account we're using to retrieve mail
+    @type username: C{str}
+    @return: True if we should delete all messages from the inbox, False otherwise
+    @rtype: C{boolean}
+    """
+    return (
+        inboundServer in (serverHostName, "localhost") and
+        username == "com.apple.calendarserver"
+    )
+
+
+
 @inlineCallbacks
 def scheduleNextMailPoll(store, seconds):
     txn = store.newTransaction()
@@ -156,8 +185,9 @@
     NO_ORGANIZER_ADDRESS = 3
     REPLY_FORWARDED_TO_ORGANIZER = 4
     INJECTION_SUBMITTED = 5
+    INCOMPLETE_DSN = 6
+    UNKNOWN_FAILURE = 7
 
-    # What about purge( ) and lowercase( )
     def __init__(self, store, directory):
         self.store = store
         self.directory = directory
@@ -363,7 +393,23 @@
 
     # returns a deferred
     def inbound(self, message):
+        """
+        Given the text of an incoming message, parse and process it.
+        The possible return values are:
 
+        NO_TOKEN - there was no token in the To address
+        UNKNOWN_TOKEN - there was an unknown token in the To address
+        MALFORMED_TO_ADDRESS - we could not parse the To address at all
+        NO_ORGANIZER_ADDRESS - no ics attachment and no email to forward to
+        REPLY_FORWARDED_TO_ORGANIZER - no ics attachment, but reply forwarded
+        INJECTION_SUBMITTED - looks ok, was submitted as a work item
+        INCOMPLETE_DSN - not enough in the DSN to go on
+        UNKNOWN_FAILURE - any error we aren't specifically catching
+
+        @param message: The body of the email
+        @type message: C{str}
+        @return: Deferred firing with one of the above action codes
+        """
         try:
             msg = email.message_from_string(message)
 
@@ -376,7 +422,7 @@
                     # It's a DSN without enough to go on
                     log.error("Mail gateway can't process DSN %s"
                                    % (msg['Message-ID'],))
-                    return succeed(None)
+                    return succeed(self.INCOMPLETE_DSN)
 
             log.info("Mail gateway received message %s from %s to %s" %
                 (msg['Message-ID'], msg['From'], msg['To']))
@@ -386,7 +432,7 @@
         except Exception, e:
             # Don't let a failure of any kind stop us
             log.error("Failed to process message: %s" % (e,))
-        return succeed(None)
+        return succeed(self.UNKNOWN_FAILURE)
 
 
 
@@ -442,13 +488,22 @@
         return defer.DeferredList(downloads).addCallback(self.cbFinished)
 
 
+    @inlineCallbacks
     def cbDownloaded(self, lines, id):
         self.log.debug("POP downloaded message %d" % (id,))
-        self.factory.handleMessage("\r\n".join(lines))
-        self.log.debug("POP deleting message %d" % (id,))
-        self.delete(id)
+        actionTaken = (yield self.factory.handleMessage("\r\n".join(lines)))
 
+        if self.factory.deleteAllMail:
+            # Delete all mail we see
+            self.log.debug("POP deleting message %d" % (id,))
+            self.delete(id)
+        else:
+            # Delete only mail we've processed
+            if actionTaken == MailReceiver.INJECTION_SUBMITTED:
+                self.log.debug("POP deleting message %d" % (id,))
+                self.delete(id)
 
+
     def cbFinished(self, results):
         self.log.debug("POP finished")
         return self.quit()
@@ -460,8 +515,10 @@
 
     protocol = POP3DownloadProtocol
 
-    def __init__(self, settings, mailReceiver):
+    def __init__(self, settings, mailReceiver, deleteAllMail):
+        self.settings = settings
         self.mailReceiver = mailReceiver
+        self.deleteAllMail = deleteAllMail
         self.noisy = False
 
 
@@ -477,7 +534,7 @@
 
     def handleMessage(self, message):
         self.log.debug("POP factory handle message")
-        self.log.debug(message)
+        # self.log.debug(message)
         return self.mailReceiver.inbound(message)
 
 
@@ -498,12 +555,12 @@
 
 
     def ebLogError(self, error):
-        self.log.error("IMAP Error: %s" % (error,))
+        self.log.error("IMAP Error: {err}", err=error)
 
 
     def ebAuthenticateFailed(self, reason):
-        self.log.debug("IMAP authenticate failed for %s, trying login" %
-            (self.factory.settings["Username"],))
+        self.log.debug("IMAP authenticate failed for {name}, trying login",
+            name=self.factory.settings["Username"])
         return self.login(self.factory.settings["Username"],
             self.factory.settings["Password"]
             ).addCallback(self.cbLoggedIn
@@ -511,27 +568,34 @@
 
 
     def ebLoginFailed(self, reason):
-        self.log.error("IMAP login failed for %s" %
-            (self.factory.settings["Username"],))
+        self.log.error("IMAP login failed for {name}", name=self.factory.settings["Username"])
         self.transport.loseConnection()
 
 
     def cbLoggedIn(self, result):
-        self.log.debug("IMAP logged in [%s]" % (self.state,))
+        self.log.debug("IMAP logged in")
         self.select("Inbox").addCallback(self.cbInboxSelected)
 
 
     def cbInboxSelected(self, result):
-        self.log.debug("IMAP Inbox selected [%s]" % (self.state,))
-        allMessages = imap4.MessageSet(1, None)
-        self.fetchUID(allMessages, True).addCallback(self.cbGotUIDs)
+        self.log.debug("IMAP Inbox selected")
+        self.search(imap4.Query(unseen=True)).addCallback(self.cbGotSearch)
 
 
+    def cbGotSearch(self, results):
+        if results:
+            ms = imap4.MessageSet()
+            for n in results:
+                ms.add(n)
+            self.fetchUID(ms).addCallback(self.cbGotUIDs)
+        else:
+            self.cbClosed(None)
+
+
     def cbGotUIDs(self, results):
-        self.log.debug("IMAP got uids [%s]" % (self.state,))
         self.messageUIDs = [result['UID'] for result in results.values()]
         self.messageCount = len(self.messageUIDs)
-        self.log.debug("IMAP Inbox has %d messages" % (self.messageCount,))
+        self.log.debug("IMAP Inbox has {count} unseen messages", count=self.messageCount)
         if self.messageCount:
             self.fetchNextMessage()
         else:
@@ -540,7 +604,7 @@
 
 
     def fetchNextMessage(self):
-        self.log.debug("IMAP in fetchnextmessage [%s]" % (self.state,))
+        # self.log.debug("IMAP in fetchnextmessage")
         if self.messageUIDs:
             nextUID = self.messageUIDs.pop(0)
             messageListToFetch = imap4.MessageSet(nextUID)
@@ -556,8 +620,9 @@
             self.expunge().addCallback(self.cbInboxSelected)
 
 
+    @inlineCallbacks
     def cbGotMessage(self, results, messageList):
-        self.log.debug("IMAP in cbGotMessage [%s]" % (self.state,))
+        self.log.debug("IMAP in cbGotMessage")
         try:
             messageData = results.values()[0]['RFC822']
         except IndexError:
@@ -567,44 +632,46 @@
             self.fetchNextMessage()
             return
 
-        d = self.factory.handleMessage(messageData)
-        if isinstance(d, defer.Deferred):
-            d.addCallback(self.cbFlagDeleted, messageList)
+        actionTaken = (yield self.factory.handleMessage(messageData))
+        if self.factory.deleteAllMail:
+            # Delete all mail we see
+            yield self.cbFlagDeleted(messageList)
         else:
-            # No deferred returned, so no need for addCallback( )
-            self.cbFlagDeleted(None, messageList)
+            # Delete only mail we've processed; the rest are left flagged Seen
+            if actionTaken == MailReceiver.INJECTION_SUBMITTED:
+                yield self.cbFlagDeleted(messageList)
+            else:
+                self.fetchNextMessage()
 
 
-    def cbFlagDeleted(self, results, messageList):
+    def cbFlagDeleted(self, messageList):
         self.addFlags(messageList, ("\\Deleted",),
             uid=True).addCallback(self.cbMessageDeleted, messageList)
 
 
     def cbMessageDeleted(self, results, messageList):
-        self.log.debug("IMAP in cbMessageDeleted [%s]" % (self.state,))
         self.log.debug("Deleted message")
         self.fetchNextMessage()
 
 
     def cbClosed(self, results):
-        self.log.debug("IMAP in cbClosed [%s]" % (self.state,))
         self.log.debug("Mailbox closed")
         self.logout().addCallback(
             lambda _: self.transport.loseConnection())
 
 
     def rawDataReceived(self, data):
-        self.log.debug("RAW RECEIVED: %s" % (data,))
+        # self.log.debug("RAW RECEIVED: {data}", data=data)
         imap4.IMAP4Client.rawDataReceived(self, data)
 
 
     def lineReceived(self, line):
-        self.log.debug("RECEIVED: %s" % (line,))
+        # self.log.debug("RECEIVED: {line}", line=line)
         imap4.IMAP4Client.lineReceived(self, line)
 
 
     def sendLine(self, line):
-        self.log.debug("SENDING: %s" % (line,))
+        # self.log.debug("SENDING: {line}", line=line)
         imap4.IMAP4Client.sendLine(self, line)
 
 
@@ -614,11 +681,12 @@
 
     protocol = IMAP4DownloadProtocol
 
-    def __init__(self, settings, mailReceiver):
+    def __init__(self, settings, mailReceiver, deleteAllMail):
         self.log.debug("Setting up IMAPFactory")
 
         self.settings = settings
         self.mailReceiver = mailReceiver
+        self.deleteAllMail = deleteAllMail
         self.noisy = False
 
 
@@ -633,7 +701,7 @@
 
     def handleMessage(self, message):
         self.log.debug("IMAP factory handle message")
-        self.log.debug(message)
+        # self.log.debug(message)
         return self.mailReceiver.inbound(message)
 
 

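The inbound changes thread a deleteAllMail flag through both the POP and IMAP factories: when the gateway uses its dedicated account on the local server (shouldDeleteAllMail above), every fetched message is deleted; otherwise only messages that were actually injected are deleted, and the rest are left flagged Seen. A minimal sketch of that decision, mirroring cbDownloaded and cbGotMessage:

    INJECTION_SUBMITTED = 5  # MailReceiver action code from the diff above

    def shouldDelete(deleteAllMail, actionTaken):
        # Either delete everything we see, or only what we processed.
        return deleteAllMail or actionTaken == INJECTION_SUBMITTED

    assert shouldDelete(True, 0)
    assert shouldDelete(False, INJECTION_SUBMITTED)
    assert not shouldDelete(False, 3)  # e.g. NO_ORGANIZER_ADDRESS
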
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/imip/test/test_inbound.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/imip/test/test_inbound.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/imip/test/test_inbound.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -15,7 +15,7 @@
 ##
 
 
-from twisted.internet.defer import inlineCallbacks
+from twisted.internet.defer import inlineCallbacks, succeed
 from twisted.python.modules import getModule
 
 from twistedcaldav.config import ConfigDict
@@ -25,6 +25,8 @@
 from txdav.caldav.datastore.scheduling.imip.inbound import MailReceiver
 from txdav.caldav.datastore.scheduling.imip.inbound import MailRetriever
 from txdav.caldav.datastore.scheduling.imip.inbound import injectMessage
+from txdav.caldav.datastore.scheduling.imip.inbound import shouldDeleteAllMail
+from txdav.caldav.datastore.scheduling.imip.inbound import IMAP4DownloadProtocol
 from txdav.caldav.datastore.scheduling.itip import iTIPRequestStatus
 from txdav.caldav.datastore.test.util import buildCalendarStore
 
@@ -47,6 +49,7 @@
                 "UseSSL" : False,
                 "Server" : "example.com",
                 "Port" : 123,
+                "Username" : "xyzzy",
             })
         )
 
@@ -359,3 +362,87 @@
         ))
         yield txn.commit()
         yield wp.whenExecuted()
+
+
+    def test_shouldDeleteAllMail(self):
+
+        # Delete if the mail server is on the same host and using our
+        # dedicated account:
+        self.assertTrue(shouldDeleteAllMail("calendar.example.com",
+            "calendar.example.com", "com.apple.calendarserver"))
+        self.assertTrue(shouldDeleteAllMail("calendar.example.com",
+            "localhost", "com.apple.calendarserver"))
+
+        # Don't delete all otherwise:
+        self.assertFalse(shouldDeleteAllMail("calendar.example.com",
+            "calendar.example.com", "not_ours"))
+        self.assertFalse(shouldDeleteAllMail("calendar.example.com",
+            "localhost", "not_ours"))
+        self.assertFalse(shouldDeleteAllMail("calendar.example.com",
+            "mail.example.com", "com.apple.calendarserver"))
+
+
+    @inlineCallbacks
+    def test_deletion(self):
+        """
+        Verify the IMAP protocol will delete messages only when the right
+        conditions are met.  Either:
+
+            A) We've been told to delete all mail
+            B) We've not been told to delete all mail, but it was a message
+                we processed
+        """
+
+        def stubFetchNextMessage():
+            pass
+
+        def stubCbFlagDeleted(result):
+            self.flagDeletedResult = result
+            return succeed(None)
+
+        proto = IMAP4DownloadProtocol()
+        self.patch(proto, "fetchNextMessage", stubFetchNextMessage)
+        self.patch(proto, "cbFlagDeleted", stubCbFlagDeleted)
+        results = {
+            "ignored" : {
+                "RFC822" : "a message"
+            }
+        }
+
+        # Delete all mail = False; action taken = submitted; result = deletion
+        proto.factory = StubFactory(MailReceiver.INJECTION_SUBMITTED, False)
+        self.flagDeletedResult = None
+        yield proto.cbGotMessage(results, "xyzzy")
+        self.assertEquals(self.flagDeletedResult, "xyzzy")
+
+        # Delete all mail = False; action taken = not submitted; result = no deletion
+        proto.factory = StubFactory(MailReceiver.NO_TOKEN, False)
+        self.flagDeletedResult = None
+        yield proto.cbGotMessage(results, "xyzzy")
+        self.assertEquals(self.flagDeletedResult, None)
+
+        # Delete all mail = True; action taken = submitted; result = deletion
+        proto.factory = StubFactory(MailReceiver.INJECTION_SUBMITTED, True)
+        self.flagDeletedResult = None
+        yield proto.cbGotMessage(results, "xyzzy")
+        self.assertEquals(self.flagDeletedResult, "xyzzy")
+
+        # Delete all mail = True; action taken = not submitted; result = deletion
+        proto.factory = StubFactory(MailReceiver.NO_TOKEN, True)
+        self.flagDeletedResult = None
+        yield proto.cbGotMessage(results, "xyzzy")
+        self.assertEquals(self.flagDeletedResult, "xyzzy")
+
+
+
+class StubFactory(object):
+
+    def __init__(self, actionTaken, deleteAllMail):
+        self.actionTaken = actionTaken
+        self.deleteAllMail = deleteAllMail
+
+
+    def handleMessage(self, messageData):
+        return succeed(self.actionTaken)

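test_shouldDeleteAllMail above pins down the expected contract. A sketch of an implementation consistent with those assertions (reconstructed from the tests, not copied from the inbound.py hunks):

    def shouldDeleteAllMail(serverHostName, inboundServer, username):
        # Deleting all mail is only safe when the polled account is our
        # dedicated one and the mail server is the calendar server host
        # itself (or localhost).
        return (
            inboundServer in (serverHostName, "localhost") and
            username == "com.apple.calendarserver"
        )
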
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/implicit.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/implicit.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/implicit.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -56,10 +56,10 @@
     STATUS_ORPHANED_CANCELLED_EVENT = 1
     STATUS_ORPHANED_EVENT = 2
 
-    def __init__(self):
+    def __init__(self, logItems=None):
 
         self.return_status = ImplicitScheduler.STATUS_OK
-        self.logItems = {}
+        self.logItems = logItems
         self.allowed_to_schedule = True
         self.suppress_refresh = False
 
@@ -383,7 +383,7 @@
             if self.txn.doing_attendee_refresh == 0:
                 delattr(self.txn, "doing_attendee_refresh")
 
-        if refreshCount:
+        if refreshCount and self.logItems is not None:
             self.logItems["itip.refreshes"] = refreshCount
 
 
@@ -925,7 +925,8 @@
         if self.action in ("create", "modify",):
             total += (yield self.processRequests())
 
-        self.logItems["itip.requests"] = total
+        if self.logItems is not None:
+            self.logItems["itip.requests"] = total
 
 
     @inlineCallbacks
@@ -1304,7 +1305,8 @@
         # First make sure we are allowed to schedule
         self.testSchedulingAllowed()
 
-        self.logItems["itip.reply"] = "reply"
+        if self.logItems is not None:
+            self.logItems["itip.reply"] = "reply"
 
         itipmsg = iTipGenerator.generateAttendeeReply(self.calendar, self.attendee, changedRids=changedRids)
 
@@ -1317,7 +1319,8 @@
         # First make sure we are allowed to schedule
         self.testSchedulingAllowed()
 
-        self.logItems["itip.reply"] = "cancel"
+        if self.logItems is not None:
+            self.logItems["itip.reply"] = "cancel"
 
         itipmsg = iTipGenerator.generateAttendeeReply(self.calendar, self.attendee, force_decline=True)
 

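ImplicitScheduler now accepts an optional logItems mapping instead of always allocating its own dict, and every write is guarded with an "is not None" check, so callers with no access log simply skip the iTIP counters. Hypothetical callers, for illustration:

    # HTTP path: counters such as "itip.requests" flow into the
    # transaction's logItems and from there into the access log
    # (see the txdav/caldav/datastore/sql.py hunks below).
    scheduler = ImplicitScheduler(logItems=txn.logItems)

    # Command-line tools or tests with no access log: nothing is recorded.
    scheduler = ImplicitScheduler()
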
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/itip.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/itip.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/itip.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -471,16 +471,9 @@
                 pass
 
             elif attendee_comment is None and private_comment is not None:
-                # Remove all property parameters
-                private_comment.removeAllParameters()
+                # We now remove the private comment on the organizer's side if the attendee removed it
+                to_component.removeProperty(private_comment)
 
-                # Add default parameters
-                private_comment.setParameter("X-CALENDARSERVER-ATTENDEE-REF", attendee.value())
-                private_comment.setParameter("X-CALENDARSERVER-DTSTAMP", PyCalendarDateTime.getNowUTC().getText())
-
-                # Set value empty
-                private_comment.setValue("")
-
                 private_comment_changed = True
 
             elif attendee_comment is not None and private_comment is None:

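The behavioural change here, sketched with hypothetical data: previously, when an attendee deleted their comment, the organizer's copy kept an emptied placeholder along the lines of

    X-CALENDARSERVER-PRIVATE-COMMENT;X-CALENDARSERVER-ATTENDEE-REF="urn:uuid:user02";
     X-CALENDARSERVER-DTSTAMP=20131101T100000Z:

(value blank, parameters reset to defaults). With this change the property is removed from to_component entirely, and private_comment_changed still triggers the organizer-side update.
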
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/processing.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/processing.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/processing.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -359,7 +359,7 @@
             # refresh them. To prevent a race we need a lock.
             yield NamedLock.acquire(txn, "ImplicitUIDLock:%s" % (hashlib.md5(self.uid).hexdigest(),))
 
-            organizer_home = (yield txn.calendarHomeForUID(self.organizer_uid))
+            organizer_home = (yield txn.calendarHomeWithUID(self.organizer_uid))
             organizer_resource = (yield organizer_home.objectResourceWithID(self.organizer_calendar_resource_id))
             if organizer_resource is not None:
                 yield self._doRefresh(organizer_resource, only_attendees=attendeesToProcess)

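calendarHomeForUID appears to have been a stale method name; the transaction API, as spelled out in the txdav/common/datastore/sql.py hunks later in this changeset, is:

    def calendarHomeWithUID(self, uid, create=False):
        return self.homeWithUID(ECALENDARTYPE, uid, create=create)
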
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/test/test_implicit.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/test/test_implicit.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/test/test_implicit.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -21,10 +21,12 @@
 from twext.web2 import responsecode
 from twext.web2.http import HTTPError
 
+from twisted.internet import reactor
 from twisted.internet.defer import succeed, inlineCallbacks, returnValue
+from twisted.internet.task import deferLater
 from twisted.trial.unittest import TestCase
+
 from twistedcaldav.config import config
-
 from twistedcaldav.ical import Component
 
 from txdav.caldav.datastore.scheduling.implicit import ImplicitScheduler
@@ -1412,3 +1414,91 @@
 
         calendar3 = (yield self._getCalendarData("user03"))
         self.assertTrue("PARTSTAT=ACCEPTED" in calendar3)
+
+
+    @inlineCallbacks
+    def test_doImplicitScheduling_refreshAllAttendeesExceptSome_Batched(self):
+        """
+        Test that doImplicitScheduling delivers scheduling messages to attendees who can then reply.
+        Verify that batched refreshing is working.
+        """
+
+        data1 = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890-attendee-reply
+DTSTAMP:20080601T120000Z
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 01":mailto:user01 at example.com
+ATTENDEE:mailto:user01 at example.com
+ATTENDEE:mailto:user02 at example.com
+ATTENDEE:mailto:user03 at example.com
+END:VEVENT
+END:VCALENDAR
+"""
+        data2 = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890-attendee-reply
+DTSTAMP:20080601T120000Z
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 01":mailto:user01 at example.com
+ATTENDEE:mailto:user01 at example.com
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:user02 at example.com
+ATTENDEE:mailto:user03 at example.com
+END:VEVENT
+END:VCALENDAR
+"""
+
+        # Force batched refreshing with a short (1 second) delay so the test can wait for it
+        self.patch(config.Scheduling.Options, "AttendeeRefreshBatch", 5)
+        self.patch(config.Scheduling.Options, "AttendeeRefreshBatchDelaySeconds", 1)
+
+        yield self._createCalendarObject(data1, "user01", "test.ics")
+
+        list1 = (yield self._listCalendarObjects("user01", "inbox"))
+        self.assertEqual(len(list1), 0)
+
+        calendar1 = (yield self._getCalendarData("user01", "test.ics"))
+        self.assertTrue("SCHEDULE-STATUS=1.2" in calendar1)
+
+        list2 = (yield self._listCalendarObjects("user02", "inbox"))
+        self.assertEqual(len(list2), 1)
+
+        calendar2 = (yield self._getCalendarData("user02"))
+        self.assertTrue("PARTSTAT=ACCEPTED" not in calendar2)
+
+        list3 = (yield self._listCalendarObjects("user03", "inbox"))
+        self.assertEqual(len(list3), 1)
+
+        calendar3 = (yield self._getCalendarData("user03"))
+        self.assertTrue("PARTSTAT=ACCEPTED" not in calendar3)
+
+        yield self._setCalendarData(data2, "user02")
+
+        list1 = (yield self._listCalendarObjects("user01", "inbox"))
+        self.assertEqual(len(list1), 1)
+
+        calendar1 = (yield self._getCalendarData("user01", "test.ics"))
+        self.assertTrue("SCHEDULE-STATUS=2.0" in calendar1)
+        self.assertTrue("PARTSTAT=ACCEPTED" in calendar1)
+
+        list2 = (yield self._listCalendarObjects("user02", "inbox"))
+        self.assertEqual(len(list2), 1)
+
+        calendar2 = (yield self._getCalendarData("user02"))
+        self.assertTrue("PARTSTAT=ACCEPTED" in calendar2)
+
+        @inlineCallbacks
+        def _test_user03_refresh():
+            list3 = (yield self._listCalendarObjects("user03", "inbox"))
+            self.assertEqual(len(list3), 1)
+
+            calendar3 = (yield self._getCalendarData("user03"))
+            self.assertTrue("PARTSTAT=ACCEPTED" in calendar3)
+
+        yield deferLater(reactor, 2.0, _test_user03_refresh)

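deferLater is the stock Twisted way to let clock-scheduled work (here, the one-second batched refresh) run before assertions execute: deferLater(reactor, delay, f, *args) returns a Deferred that fires with f(*args) after the delay. A standalone sketch:

    from twisted.internet import reactor
    from twisted.internet.defer import inlineCallbacks
    from twisted.internet.task import deferLater

    @inlineCallbacks
    def waitThenCheck():
        # Real reactor time elapses before the callable runs, giving any
        # callLater-scheduled work a chance to complete first.
        result = yield deferLater(reactor, 2.0, lambda: "refreshed")
        assert result == "refreshed"
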
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/utils.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/utils.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/scheduling/utils.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -21,7 +21,7 @@
 log = Logger()
 
 @inlineCallbacks
-def getCalendarObjectForRecord(txn, record, uid, allow_shared=False):
+def getCalendarObjectForRecord(txn, record, uid):
     """
     Get a copy of the event for a calendar user identified by a directory record.
 
@@ -34,7 +34,7 @@
         calendar_home = yield txn.calendarHomeWithUID(record.uid)
 
         # Get matching newstore objects
-        objectResources = (yield calendar_home.getCalendarResourcesForUID(uid, allow_shared))
+        objectResources = (yield calendar_home.getCalendarResourcesForUID(uid))
 
         if len(objectResources) > 1:
             # Delete all but the first one

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/sql.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/sql.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -527,9 +527,7 @@
         # refer to calendar *object* UIDs, since calendar *resources* are an
         # HTTP protocol layer thing, not a data store thing.  (See also
         # objectResourcesWithUID.)
-        objectResources = (
-            yield self.objectResourcesWithUID(uid, ["inbox"], False)
-        )
+        objectResources = (yield self.getCalendarResourcesForUID(uid))
         for objectResource in objectResources:
             if ok_object and objectResource._resourceID == ok_object._resourceID:
                 continue
@@ -541,15 +539,22 @@
 
 
     @inlineCallbacks
-    def getCalendarResourcesForUID(self, uid, allow_shared=False):
+    def getCalendarResourcesForUID(self, uid):
+        """
+        Find all calendar object resources in the calendar home that are not in the "inbox" collection
+        and not in shared collections.
+        Cache the result, as this query can be issued multiple times during scheduling under slightly
+        different circumstances.
 
-        results = []
-        objectResources = (yield self.objectResourcesWithUID(uid, ["inbox"]))
-        for objectResource in objectResources:
-            if allow_shared or objectResource._parentCollection.owned():
-                results.append(objectResource)
+        @param uid: the UID of the calendar object resources to find
+        @type uid: C{str}
+        """
 
-        returnValue(results)
+        if not hasattr(self, "_cachedCalendarResourcesForUID"):
+            self._cachedCalendarResourcesForUID = {}
+        if uid not in self._cachedCalendarResourcesForUID:
+            self._cachedCalendarResourcesForUID[uid] = (yield self.objectResourcesWithUID(uid, ["inbox"], allowShared=False))
+        returnValue(self._cachedCalendarResourcesForUID[uid])
 
 
     @inlineCallbacks
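
The caching idiom above, in isolation (a minimal sketch, not the store code): the memo hangs off the home object, which lives only as long as one transaction, so it needs no explicit invalidation.

    from twisted.internet.defer import inlineCallbacks, returnValue, succeed

    class HomeLike(object):

        def _query(self, uid):
            # Stand-in for the real objectResourcesWithUID() call.
            return succeed([])

        @inlineCallbacks
        def resourcesForUID(self, uid):
            # Create the memo lazily, then fill it at most once per uid for
            # the lifetime of this (transaction-scoped) object.
            if not hasattr(self, "_memo"):
                self._memo = {}
            if uid not in self._memo:
                self._memo[uid] = (yield self._query(uid))
            returnValue(self._memo[uid])
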
@@ -1576,10 +1581,6 @@
                 if calsize > config.MaxResourceSize:
                     raise ObjectResourceTooBigError()
 
-        # Possible timezone stripping
-        if config.EnableTimezonesByReference:
-            component.stripKnownTimezones()
-
         # Do validation on external requests
         if internal_state == ComponentUpdateState.NORMAL:
 
@@ -1597,6 +1598,10 @@
             # calendar data
             component.normalizeCalendarUserAddresses(normalizationLookup, self.directoryService().recordWithCalendarUserAddress)
 
+        # Possible timezone stripping
+        if config.EnableTimezonesByReference:
+            component.stripKnownTimezones()
+
         # Check location/resource organizer requirement
         self.validLocationResourceOrganizer(component, inserting, internal_state)
 
@@ -1731,20 +1736,23 @@
 
         NB Do this before implicit scheduling as we don't want old clients to trigger scheduling when
         the X- property is missing.
+
+        We now preserve only the "X-CALENDARSERVER-ATTENDEE-COMMENT" property. Clients are now allowed
+        to delete the "X-CALENDARSERVER-PRIVATE-COMMENT" property, and we treat that as removal of the
+        attendee comment (which triggers scheduling with the organizer to remove the comment on the
+        organizer's side).
         """
         if config.Scheduling.CalDAV.get("EnablePrivateComments", True):
             old_has_private_comments = not inserting and self.hasPrivateComment
             new_has_private_comments = component.hasPropertyInAnyComponent((
-                "X-CALENDARSERVER-PRIVATE-COMMENT",
                 "X-CALENDARSERVER-ATTENDEE-COMMENT",
             ))
 
             if old_has_private_comments and not new_has_private_comments:
                 # Transfer old comments to new calendar
-                log.debug("Private Comments properties were entirely removed by the client. Restoring existing properties.")
+                log.debug("Organizer private comment properties were entirely removed by the client. Restoring existing properties.")
                 old_calendar = (yield self.componentForUser())
                 component.transferProperties(old_calendar, (
-                    "X-CALENDARSERVER-PRIVATE-COMMENT",
                     "X-CALENDARSERVER-ATTENDEE-COMMENT",
                 ))
 
@@ -1953,7 +1961,7 @@
                 user_uuid = self._parentCollection.viewerHome().uid()
                 component = PerUserDataFilter(user_uuid).filter(component.duplicate())
 
-            scheduler = ImplicitScheduler()
+            scheduler = ImplicitScheduler(logItems=self._txn.logItems)
 
             # PUT
             do_implicit_action, is_scheduling_resource = (yield scheduler.testImplicitSchedulingPUT(
@@ -2610,7 +2618,7 @@
         if not isinbox and internal_state == ComponentRemoveState.NORMAL:
             # Get data we need for implicit scheduling
             calendar = (yield self.componentForUser())
-            scheduler = ImplicitScheduler()
+            scheduler = ImplicitScheduler(logItems=self._txn.logItems)
             do_implicit_action, _ignore = (yield scheduler.testImplicitSchedulingDELETE(
                 self.calendar(),
                 self,
@@ -2929,7 +2937,7 @@
 
         # Only allow organizers to manipulate managed attachments for now
         calendar = (yield self.componentForUser())
-        scheduler = ImplicitScheduler()
+        scheduler = ImplicitScheduler(logItems=self._txn.logItems)
         is_attendee = (yield scheduler.testAttendeeEvent(self.calendar(), self, calendar,))
         if is_attendee:
             raise InvalidAttachmentOperation("Attendees are not allowed to manipulate managed attachments")

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/common.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/common.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/common.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -84,73 +84,75 @@
 
 OTHER_HOME_UID = "home_splits"
 
-test_event_text = (
-    "BEGIN:VCALENDAR\r\n"
-      "VERSION:2.0\r\n"
-      "PRODID:-//Apple Inc.//iCal 4.0.1//EN\r\n"
-      "CALSCALE:GREGORIAN\r\n"
-      "BEGIN:VTIMEZONE\r\n"
-        "TZID:US/Pacific\r\n"
-        "BEGIN:DAYLIGHT\r\n"
-          "TZOFFSETFROM:-0800\r\n"
-          "RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU\r\n"
-          "DTSTART:20070311T020000\r\n"
-          "TZNAME:PDT\r\n"
-          "TZOFFSETTO:-0700\r\n"
-        "END:DAYLIGHT\r\n"
-        "BEGIN:STANDARD\r\n"
-          "TZOFFSETFROM:-0700\r\n"
-          "RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU\r\n"
-          "DTSTART:20071104T020000\r\n"
-          "TZNAME:PST\r\n"
-          "TZOFFSETTO:-0800\r\n"
-        "END:STANDARD\r\n"
-      "END:VTIMEZONE\r\n"
-      "BEGIN:VEVENT\r\n"
-        "CREATED:20100203T013849Z\r\n"
-        "UID:uid-test\r\n"
-        "DTEND;TZID=US/Pacific:20100207T173000\r\n"
-        "TRANSP:OPAQUE\r\n"
-        "SUMMARY:New Event\r\n"
-        "DTSTART;TZID=US/Pacific:20100207T170000\r\n"
-        "DTSTAMP:20100203T013909Z\r\n"
-        "SEQUENCE:3\r\n"
-        "X-APPLE-DROPBOX:/calendars/users/wsanchez/dropbox/uid-test.dropbox\r\n"
-        "BEGIN:VALARM\r\n"
-          "X-WR-ALARMUID:1377CCC7-F85C-4610-8583-9513D4B364E1\r\n"
-          "TRIGGER:-PT20M\r\n"
-          "ATTACH:Basso\r\n"
-          "ACTION:AUDIO\r\n"
-        "END:VALARM\r\n"
-      "END:VEVENT\r\n"
-    "END:VCALENDAR\r\n"
-)
+test_event_text = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//Apple Inc.//iCal 4.0.1//EN
+CALSCALE:GREGORIAN
+BEGIN:VTIMEZONE
+TZID:US/Pacific
+BEGIN:DAYLIGHT
+TZOFFSETFROM:-0800
+RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
+DTSTART:20070311T020000
+TZNAME:PDT
+TZOFFSETTO:-0700
+END:DAYLIGHT
+BEGIN:STANDARD
+TZOFFSETFROM:-0700
+RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
+DTSTART:20071104T020000
+TZNAME:PST
+TZOFFSETTO:-0800
+END:STANDARD
+END:VTIMEZONE
+BEGIN:VEVENT
+CREATED:20100203T013849Z
+UID:uid-test
+DTEND;TZID=US/Pacific:20100207T173000
+TRANSP:OPAQUE
+SUMMARY:New Event
+DTSTART;TZID=US/Pacific:20100207T170000
+DTSTAMP:20100203T013909Z
+SEQUENCE:3
+X-APPLE-DROPBOX:/calendars/users/wsanchez/dropbox/uid-test.dropbox
+BEGIN:VALARM
+X-WR-ALARMUID:1377CCC7-F85C-4610-8583-9513D4B364E1
+TRIGGER:-PT20M
+ATTACH:Basso
+ACTION:AUDIO
+END:VALARM
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n")
 
 
 
-test_event_notCalDAV_text = (
-    "BEGIN:VCALENDAR\r\n"
-      "VERSION:2.0\r\n"
-      "PRODID:-//Apple Inc.//iCal 4.0.1//EN\r\n"
-      "CALSCALE:GREGORIAN\r\n"
-      "BEGIN:VEVENT\r\n"
-        "CREATED:20100203T013849Z\r\n"
-        "UID:test\r\n"
-        "DTEND;TZID=US/Pacific:20100207T173000\r\n" # TZID without VTIMEZONE
-        "TRANSP:OPAQUE\r\n"
-        "SUMMARY:New Event\r\n"
-        "DTSTART;TZID=US/Pacific:20100207T170000\r\n"
-        "DTSTAMP:20100203T013909Z\r\n"
-        "SEQUENCE:3\r\n"
-        "BEGIN:VALARM\r\n"
-          "X-WR-ALARMUID:1377CCC7-F85C-4610-8583-9513D4B364E1\r\n"
-          "TRIGGER:-PT20M\r\n"
-          "ATTACH:Basso\r\n"
-          "ACTION:AUDIO\r\n"
-        "END:VALARM\r\n"
-      "END:VEVENT\r\n"
-    "END:VCALENDAR\r\n"
-)
+test_event_notCalDAV_text = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//Apple Inc.//iCal 4.0.1//EN
+CALSCALE:GREGORIAN
+BEGIN:VEVENT
+CREATED:20100203T013849Z
+UID:test-bad1
+DTEND:20100207T173000Z
+TRANSP:OPAQUE
+SUMMARY:New Event
+DTSTART:20100207T170000Z
+DTSTAMP:20100203T013909Z
+SEQUENCE:3
+END:VEVENT
+BEGIN:VEVENT
+CREATED:20100203T013849Z
+UID:test-bad2
+DTEND:20100207T173000Z
+TRANSP:OPAQUE
+SUMMARY:New Event
+DTSTART:20100207T170000Z
+DTSTAMP:20100203T013909Z
+SEQUENCE:3
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n")
 
 
 
@@ -450,9 +452,7 @@
         yield notifications.writeNotificationObject("abc", inviteNotification,
             inviteNotification.toxml())
 
-        yield self.commit()
-
-        # Make sure notification fired after commit
+        # notify is called prior to commit
         self.assertEquals(
             set(self.notifierFactory.history),
             set([
@@ -460,6 +460,7 @@
                 "/CalDAV/example.com/home1/notification/",
             ])
         )
+        yield self.commit()
 
         notifications = yield self.transactionUnderTest().notificationsWithUID(
             "home1"
@@ -469,9 +470,7 @@
         abc = yield notifications.notificationObjectWithUID("abc")
         self.assertEquals(abc, None)
 
-        yield self.commit()
-
-        # Make sure notification fired after commit
+        # notify is called prior to commit
         self.assertEquals(
             set(self.notifierFactory.history),
             set([
@@ -479,6 +478,7 @@
                 "/CalDAV/example.com/home1/notification/",
             ])
         )
+        yield self.commit()
 
 
     @inlineCallbacks
@@ -697,11 +697,10 @@
         self.assertNotIdentical((yield home.calendarWithName(name)), None)
         calendarProperties = (yield home.calendarWithName(name)).properties()
         self.assertEqual(len(calendarProperties), 0)
+        # notify is called prior to commit
+        self.assertTrue("/CalDAV/example.com/home1/" in self.notifierFactory.history)
         yield self.commit()
 
-        # Make sure notification fired after commit
-        self.assertTrue("/CalDAV/example.com/home1/" in self.notifierFactory.history)
-
         # Make sure it's available in a new transaction; i.e. test the commit.
         home = yield self.homeUnderTest()
         self.assertNotIdentical((yield home.calendarWithName(name)), None)
@@ -915,8 +914,7 @@
                 None
             )
 
-        # Make sure notifications are fired after commit
-        yield self.commit()
+        # notify is called prior to commit
         self.assertEquals(
             set(self.notifierFactory.history),
             set([
@@ -924,6 +922,7 @@
                 "/CalDAV/example.com/home1/calendar_1/",
             ])
         )
+        yield self.commit()
 
 
     @inlineCallbacks
@@ -1471,9 +1470,7 @@
         self.assertEquals((yield calendarObject.componentForUser()), component)
         self.assertEquals((yield calendarObject.getMetadata()), metadata)
 
-        yield self.commit()
-
-        # Make sure notifications fire after commit
+        # notify is called prior to commit
         self.assertEquals(
             set(self.notifierFactory.history),
             set([
@@ -1481,6 +1478,7 @@
                 "/CalDAV/example.com/home1/calendar_1/",
             ])
         )
+        yield self.commit()
 
 
     @inlineCallbacks
@@ -1591,9 +1589,7 @@
         calendarObject = yield calendar1.calendarObjectWithName("1.ics")
         self.assertEquals((yield calendarObject.componentForUser()), component)
 
-        yield self.commit()
-
-        # Make sure notification fired after commit
+        # notify is called prior to commit
         self.assertEquals(
             set(self.notifierFactory.history),
             set([
@@ -1601,6 +1597,7 @@
                 "/CalDAV/example.com/home1/calendar_1/",
             ])
         )
+        yield self.commit()
 
 
     def checkPropertiesMethod(self, thunk):

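One idiom in the rewritten fixtures above is worth noting: iCalendar (RFC 5545) requires CRLF line delimiters, so the readable triple-quoted form is converted in a single pass rather than hand-writing "\r\n" on every line:

ics = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Example//Example//EN
END:VCALENDAR
""".replace("\n", "\r\n")
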
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/test_implicit.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/test_implicit.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/test_implicit.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -478,9 +478,9 @@
 
 
     @inlineCallbacks
-    def test_validation_preservePrivateComments(self):
+    def test_validation_noPreservePrivateComments(self):
         """
-        Test that resource private comments are restored.
+        Test that attendee private comments are no longer restored.
         """
 
         data1 = """BEGIN:VCALENDAR
@@ -524,12 +524,65 @@
         calendar_resource = (yield self.calendarObjectUnderTest(name="test.ics", home="user01",))
         calendar1 = (yield calendar_resource.component())
         calendar1 = str(calendar1).replace("\r\n ", "")
-        self.assertTrue("X-CALENDARSERVER-PRIVATE-COMMENT:My Comment" in calendar1)
+        self.assertFalse("X-CALENDARSERVER-PRIVATE-COMMENT:My Comment" in calendar1)
         self.assertTrue("SUMMARY:Changed" in calendar1)
         yield self.commit()
 
 
     @inlineCallbacks
+    def test_validation_preserveOrganizerPrivateComments(self):
+        """
+        Test that organizer private comments are restored.
+        """
+
+        data1 = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890-organizer
+DTSTAMP:20080601T120000Z
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+X-CALENDARSERVER-ATTENDEE-COMMENT;X-CALENDARSERVER-ATTENDEE-REF="urn:uuid:user01";
+ X-CALENDARSERVER-DTSTAMP=20131101T100000Z:Someone else's comment
+END:VEVENT
+END:VCALENDAR
+"""
+
+        calendar_collection = (yield self.calendarUnderTest(home="user01"))
+        calendar = Component.fromString(data1)
+        yield calendar_collection.createCalendarObjectWithName("test.ics", calendar)
+        yield self.commit()
+
+        data2 = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890-organizer
+DTSTAMP:20080601T120000Z
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+SUMMARY:Changed
+END:VEVENT
+END:VCALENDAR
+"""
+
+        calendar_resource = (yield self.calendarObjectUnderTest(name="test.ics", home="user01",))
+        calendar = Component.fromString(data2)
+        txn = self.transactionUnderTest()
+        txn._authz_uid = "user01"
+        yield calendar_resource.setComponent(calendar)
+        yield self.commit()
+
+        calendar_resource = (yield self.calendarObjectUnderTest(name="test.ics", home="user01",))
+        calendar1 = (yield calendar_resource.component())
+        calendar1 = str(calendar1).replace("\r\n ", "")
+        self.assertTrue("X-CALENDARSERVER-ATTENDEE-COMMENT;X-CALENDARSERVER-ATTENDEE-REF=\"urn:uuid:user01\";X-CALENDARSERVER-DTSTAMP=20131101T100000Z:Someone else's comment" in calendar1)
+        self.assertTrue("SUMMARY:Changed" in calendar1)
+        yield self.commit()
+
+
+    @inlineCallbacks
     def test_validation_replaceMissingToDoProperties_OrganizerAttendee(self):
         """
         Test that missing scheduling properties in VTODOs are recovered.

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/test_sql.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/test_sql.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -444,7 +444,7 @@
         )
         yield migrateHome(fromHome, toHome, lambda x: x.component())
         toCalendars = yield toHome.calendars()
-        self.assertEquals(set([c.name() for c in toCalendars]),
+        self.assertEquals(set([c.name() for c in toCalendars if c.name() != "inbox"]),
                           set([k for k in self.requirements['home1'].keys()
                                if self.requirements['home1'][k] is not None]))
         fromCalendars = yield fromHome.calendars()
@@ -474,7 +474,7 @@
             )
 
         supported_components = set()
-        self.assertEqual(len(toCalendars), 3)
+        self.assertEqual(len(toCalendars), 4)
         for calendar in toCalendars:
             if calendar.name() == "inbox":
                 continue
@@ -502,7 +502,7 @@
             )
 
         supported_components = set()
-        self.assertEqual(len(toCalendars), 2)
+        self.assertEqual(len(toCalendars), 3)
         for calendar in toCalendars:
             if calendar.name() == "inbox":
                 continue

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/test_util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/test_util.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/test/test_util.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -354,20 +354,19 @@
         }, self.storeUnderTest())
         txn = self.transactionUnderTest()
         emptyHome = yield txn.calendarHomeWithUID("empty_home")
-        self.assertIdentical((yield emptyHome.calendarWithName("calendar")),
-                             None)
+        self.assertIdentical((yield emptyHome.calendarWithName("calendar")), None)
         nonEmpty = yield txn.calendarHomeWithUID("non_empty_home")
         yield migrateHome(emptyHome, nonEmpty)
         yield self.commit()
         txn = self.transactionUnderTest()
         emptyHome = yield txn.calendarHomeWithUID("empty_home")
         nonEmpty = yield txn.calendarHomeWithUID("non_empty_home")
-        self.assertIdentical((yield nonEmpty.calendarWithName("inbox")),
-                             None)
-        self.assertIdentical((yield nonEmpty.calendarWithName("calendar")),
-                             None)
 
+        self.assertIdentical((yield nonEmpty.calendarWithName("calendar")), None)
+        self.assertNotIdentical((yield nonEmpty.calendarWithName("inbox")), None)
+        self.assertNotIdentical((yield nonEmpty.calendarWithName("other-default-calendar")), None)
 
+
     @staticmethod
     def sampleEvent(uid, summary=None):
         """
@@ -526,16 +525,25 @@
                 "different-name": self.sampleEvent("other-uid", "tgt other"),
             },
         )
+
         txn = self.transactionUnderTest()
-        c1 = yield txn.calendarHomeWithUID("conflict1")
         c2 = yield txn.calendarHomeWithUID("conflict2")
         otherCal = yield c2.createCalendarWithName("othercal")
-        otherCal.createCalendarObjectWithName(
+        yield otherCal.createCalendarObjectWithName(
             "some-name", Component.fromString(
                 self.sampleEvent("oc", "target calendar")[0]
             )
         )
+        yield self.commit()
+
+        txn = self.transactionUnderTest()
+        c1 = yield txn.calendarHomeWithUID("conflict1")
+        c2 = yield txn.calendarHomeWithUID("conflict2")
         yield migrateHome(c1, c2, merge=True)
+        yield self.commit()
+
+        txn = self.transactionUnderTest()
+        c2 = yield txn.calendarHomeWithUID("conflict2")
         targetCal = yield c2.calendarWithName("conflicted")
         yield self.checkSummary("same-name", "target", targetCal)
         yield self.checkSummary("different-name", "tgt other", targetCal)

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/util.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/caldav/datastore/util.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -356,8 +356,7 @@
 
 
 @inlineCallbacks
-def migrateHome(inHome, outHome, getComponent=lambda x: x.component(),
-                merge=False):
+def migrateHome(inHome, outHome, getComponent=lambda x: x.component(), merge=False):
     """
     Copy all calendars and properties in the given input calendar home to the
     given output calendar home.
@@ -373,7 +372,7 @@
         a calendar in outHome).
 
     @param merge: a boolean indicating whether to raise an exception when
-        encounting a conflicting element of data (calendar or event), or to
+        encountering a conflicting element of data (calendar or event), or to
         attempt to merge them together.
 
     @return: a L{Deferred} that fires with C{None} when the migration is
@@ -398,8 +397,7 @@
         yield d
         outCalendar = yield outHome.calendarWithName(name)
         try:
-            yield _migrateCalendar(calendar, outCalendar, getComponent,
-                                   merge=merge)
+            yield _migrateCalendar(calendar, outCalendar, getComponent, merge=merge)
         except InternalDataStoreError:
             log.error(
                 "  Failed to migrate calendar: %s/%s" % (inHome.name(), name,)
@@ -408,6 +406,11 @@
     # No migration for notifications, since they weren't present in earlier
     # released versions of CalendarServer.
 
+    # The original file store may lack an inbox for some reason; create one in the target if needed
+    inboxCalendar = yield outHome.calendarWithName("inbox")
+    if inboxCalendar is None:
+        yield outHome.createCalendarWithName("inbox")
+
     # May need to split calendars by component type
     if config.RestrictCalendarsToOneComponentType:
         yield outHome.splitCalendars()

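A hypothetical invocation, for orientation (the home names are illustrative): per the docstring, merge=False raises on a conflicting calendar or event while merge=True attempts to combine them, and after this change the target home ends up with an inbox either way.

    yield migrateHome(fileHome, sqlHome, merge=True)
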
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/carddav/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/carddav/datastore/sql.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/carddav/datastore/sql.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -337,6 +337,8 @@
 
 AddressBookHome._register(EADDRESSBOOKTYPE)
 
+
+
 class AddressBookSharingMixIn(SharingMixIn):
     """
         Sharing code shared between AddressBook and AddressBookObject
@@ -359,7 +361,7 @@
     @inlineCallbacks
     def _isSharedOrInvited(self):
         """
-        return a bool if this L{AddressBook} is shared or invited
+        return True if this L{AddressBook} is shared or invited
         """
         sharedRows = []
         if self.owned():
@@ -1103,7 +1105,7 @@
 
 
     @inlineCallbacks
-    def updateShare(self, shareeView, mode=None, status=None, message=None, name=None):
+    def updateShare(self, shareeView, mode=None, status=None, message=None):
         """
         Update share mode, status, and message for a home child shared with
         this (owned) L{CommonHomeChild}.
@@ -1124,9 +1126,6 @@
             will be used as the default display name, or None to not update
         @type message: L{str}
 
-        @param name: The bind resource name or None to not update
-        @type message: L{str}
-
         @return: the name of the shared item in the sharee's home.
         @rtype: a L{Deferred} which fires with a L{str}
         """
@@ -1138,8 +1137,7 @@
         columnMap = dict([(k, v if v != "" else None)
                           for k, v in {bind.BIND_MODE:mode,
                             bind.BIND_STATUS:status,
-                            bind.MESSAGE:message,
-                            bind.RESOURCE_NAME:name}.iteritems() if v is not None])
+                            bind.MESSAGE:message}.iteritems() if v is not None])
 
         if len(columnMap):
 
@@ -1481,11 +1479,6 @@
             self._initFromRow(tuple(rows[0]))
 
             if self._kind == _ABO_KIND_GROUP:
-                # generate "X-ADDRESSBOOKSERVER-MEMBER" properties
-                # calc md5 and set size
-                componentText = str((yield self.component()))
-                self._md5 = hashlib.md5(componentText).hexdigest()
-                self._size = len(componentText)
 
                 groupBindRows = yield AddressBookObject._bindForResourceIDAndHomeID.on(
                     self._txn, resourceID=self._resourceID, homeID=self._home._resourceID
@@ -1791,6 +1784,7 @@
         uid = component.resourceUID()
         assert inserting or self._uid == uid  # can't change UID. Should be checked in upper layers
         self._uid = uid
+        originalComponentText = str(component)
 
         if self._kind == _ABO_KIND_GROUP:
             memberAddresses = set(component.resourceMemberAddresses())
@@ -1828,33 +1822,27 @@
             # missing uids and other cuaddrs e.g. user at example.com, are stored in same schema table
             foreignMemberAddrs.extend(["urn:uuid:" + missingUID for missingUID in missingUIDs])
 
-            # don't store group members in object text
-            orginialComponentText = str(component)
+            # sort unique members
             component.removeProperties("X-ADDRESSBOOKSERVER-MEMBER")
             for memberAddress in sorted(list(memberAddresses)): # sort unique
                 component.addProperty(Property("X-ADDRESSBOOKSERVER-MEMBER", memberAddress))
-
-            # use sorted for md5
             componentText = str(component)
-            self._md5 = hashlib.md5(componentText).hexdigest()
-            self._componentChanged = orginialComponentText != componentText
 
-            # remove members from component get new text
-            self._component = deepcopy(component)
-            component.removeProperties("X-ADDRESSBOOKSERVER-MEMBER")
-            componentText = str(component)
-            self._objectText = componentText
-
-            #size for quota does not include group members
-            self._size = len(componentText)
-
+            # remove unneeded fields to get stored _objectText
+            thinComponent = deepcopy(component)
+            thinComponent.removeProperties("X-ADDRESSBOOKSERVER-MEMBER")
+            thinComponent.removeProperties("X-ADDRESSBOOKSERVER-KIND")
+            thinComponent.removeProperties("UID")
+            self._objectText = str(thinComponent)
         else:
-            self._component = component
             componentText = str(component)
-            self._md5 = hashlib.md5(componentText).hexdigest()
-            self._size = len(componentText)
             self._objectText = componentText
 
+        self._size = len(self._objectText)
+        self._component = component
+        self._md5 = hashlib.md5(componentText).hexdigest()
+        self._componentChanged = originalComponentText != componentText
+
         # Special - if migrating we need to preserve the original md5
         if self._txn._migrating and hasattr(component, "md5"):
             self._md5 = component.md5
@@ -2031,6 +2019,8 @@
                     # now add the properties to the component
                     for memberAddress in sorted(memberAddresses + foreignMembers):
                         component.addProperty(Property("X-ADDRESSBOOKSERVER-MEMBER", memberAddress))
+                    component.addProperty(Property("X-ADDRESSBOOKSERVER-KIND", "group"))
+                    component.addProperty(Property("UID", self._uid))
 
             self._component = component
 
@@ -2284,7 +2274,7 @@
         else:
             if status == _BIND_STATUS_ACCEPTED:
                 shareeView = yield shareeHome.objectWithShareUID(bindName)
-                yield shareeView._initSyncToken()
+                yield shareeView.addressbook()._initSyncToken()
                 yield shareeView._initBindRevision()
 
         queryCacher = self._txn._queryCacher
@@ -2299,16 +2289,9 @@
 
 
     @inlineCallbacks
-    def _initSyncToken(self):
-        yield self.addressbook()._initSyncToken()
-
-
-    @inlineCallbacks
     def _initBindRevision(self):
         yield self.addressbook()._initBindRevision()
 
-        # almost works
-        # yield super(AddressBookObject, self)._initBindRevision()
         bind = self._bindSchema
         yield self._updateBindColumnsQuery(
             {bind.BIND_REVISION : Parameter("revision"), }).on(
@@ -2321,8 +2304,7 @@
 
 
     @inlineCallbacks
-    # TODO:  This is almost the same as AddressBook.updateShare(): combine
-    def updateShare(self, shareeView, mode=None, status=None, message=None, name=None):
+    def updateShare(self, shareeView, mode=None, status=None, message=None):
         """
         Update share mode, status, and message for a home child shared with
         this (owned) L{CommonHomeChild}.
@@ -2343,9 +2325,6 @@
             will be used as the default display name, or None to not update
         @type message: L{str}
 
-        @param name: The bind resource name or None to not update
-        @type message: L{str}
-
         @return: the name of the shared item in the sharee's home.
         @rtype: a L{Deferred} which fires with a L{str}
         """
@@ -2357,8 +2336,7 @@
         columnMap = dict([(k, v if v != "" else None)
                           for k, v in {bind.BIND_MODE:mode,
                             bind.BIND_STATUS:status,
-                            bind.MESSAGE:message,
-                            bind.RESOURCE_NAME:name}.iteritems() if v is not None])
+                            bind.MESSAGE:message}.iteritems() if v is not None])
 
         if len(columnMap):
 
@@ -2384,7 +2362,7 @@
                 shareeView._bindStatus = columnMap[bind.BIND_STATUS]
                 if shareeView._bindStatus == _BIND_STATUS_ACCEPTED:
                     if 0 == previouslyAcceptedBindCount:
-                        yield shareeView._initSyncToken()
+                        yield shareeView.addressbook()._initSyncToken()
                         yield shareeView._initBindRevision()
                         shareeView.viewerHome()._children[self.addressbook().shareeName()] = shareeView.addressbook()
                         shareeView.viewerHome()._children[shareeView._resourceID] = shareeView.addressbook()

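The storage strategy for group vCards, condensed from the hunks above (a simplified sketch, not the literal store code; the Component/Property import path is assumed to be the vCard module used elsewhere in twistedcaldav): member, kind, and UID properties are stripped before the text is persisted, then re-added from database state on read.

    from copy import deepcopy
    from twistedcaldav.vcard import Component, Property  # assumed import path

    def thinText(component):
        # Write side: persist a "thin" copy; quota size is based on this text.
        thin = deepcopy(component)
        thin.removeProperties("X-ADDRESSBOOKSERVER-MEMBER")
        thin.removeProperties("X-ADDRESSBOOKSERVER-KIND")
        thin.removeProperties("UID")
        return str(thin)

    def fattenComponent(component, memberAddresses, foreignMembers, uid):
        # Read side: rebuild the full component from the stored rows.
        for memberAddress in sorted(memberAddresses + foreignMembers):
            component.addProperty(Property("X-ADDRESSBOOKSERVER-MEMBER", memberAddress))
        component.addProperty(Property("X-ADDRESSBOOKSERVER-KIND", "group"))
        component.addProperty(Property("UID", uid))
        return component
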
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/carddav/datastore/test/common.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/carddav/datastore/test/common.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/carddav/datastore/test/common.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -371,11 +371,10 @@
         #self.assertIdentical((yield home.addressbookWithName(name)), None)
         yield home.removeAddressBookWithName(name)
         self.assertNotIdentical((yield home.addressbookWithName(name)), None)
+        # notify is called prior to commit
+        self.assertTrue("/CardDAV/example.com/home1/" in self.notifierFactory.history)
         yield self.commit()
 
-        # Make sure notification fired after commit
-        self.assertTrue("/CardDAV/example.com/home1/" in self.notifierFactory.history)
-
         # Make sure it's available in a new transaction; i.e. test the commit.
         home = yield self.homeUnderTest()
         self.assertNotIdentical((yield home.addressbookWithName(name)), None)
@@ -396,9 +395,7 @@
             ab = yield home.addressbookWithName(name)
             self.assertEquals((yield ab.listAddressBookObjects()), [])
 
-        yield self.commit()
-
-        # Make sure notification fired after commit
+        # notify is called prior to commit
         self.assertEquals(
             set(self.notifierFactory.history),
             set([
@@ -407,7 +404,9 @@
             ])
         )
 
+        yield self.commit()
 
+
     @inlineCallbacks
     def test_removeAddressBookWithName_absent(self):
         """
@@ -530,8 +529,6 @@
                 (yield addressbook.addressbookObjectWithName(name)), None
             )
 
-        # Make sure notifications are fired after commit
-        yield self.commit()
         self.assertEquals(
             set(self.notifierFactory.history),
             set([
@@ -692,9 +689,7 @@
         addressbookObject = yield addressbook1.addressbookObjectWithName(name)
         self.assertEquals((yield addressbookObject.component()), component)
 
-        yield self.commit()
-
-        # Make sure notifications fire after commit
+        # notify is called prior to commit
         self.assertEquals(
             set(self.notifierFactory.history),
             set([
@@ -703,7 +698,9 @@
             ])
         )
 
+        yield self.commit()
 
+
     @inlineCallbacks
     def test_createAddressBookObjectWithName_exists(self):
         """
@@ -808,9 +805,7 @@
         addressbookObject = yield addressbook1.addressbookObjectWithName("1.vcf")
         self.assertEquals((yield addressbookObject.component()), component)
 
-        yield self.commit()
-
-        # Make sure notification fired after commit
+        # notify is called prior to commit
         self.assertEquals(
             set(self.notifierFactory.history),
             set([
@@ -819,7 +814,9 @@
             ])
         )
 
+        yield self.commit()
 
+
     def checkPropertiesMethod(self, thunk):
         """
         Verify that the given object has a properties method that returns an

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/file.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/file.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/file.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -926,6 +926,7 @@
         return (self._notifierPrefix, self.uid(),)
 
 
+    @inlineCallbacks
     def notifyChanged(self):
         """
         Trigger a notification of a change
@@ -933,8 +934,14 @@
 
         # Only send one set of change notifications per transaction
         if self._notifiers and not self._transaction.isNotifiedAlready(self):
-            for notifier in self._notifiers.values():
+            # cache notifiers run in post commit
+            notifier = self._notifiers.get("cache", None)
+            if notifier:
                 self._transaction.postCommit(notifier.notify)
+            # push notifiers add their work items immediately
+            notifier = self._notifiers.get("push", None)
+            if notifier:
+                yield notifier.notify(self._transaction)
             self._transaction.notificationAddedForObject(self)
 
 
@@ -1272,6 +1279,7 @@
         return self.ownerHome().notifierID()
 
 
+    @inlineCallbacks
     def notifyChanged(self):
         """
         Trigger a notification of a change
@@ -1279,8 +1287,14 @@
 
         # Only send one set of change notifications per transaction
         if self._notifiers and not self._transaction.isNotifiedAlready(self):
-            for notifier in self._notifiers.values():
+            # cache notifiers run in post commit
+            notifier = self._notifiers.get("cache", None)
+            if notifier:
                 self._transaction.postCommit(notifier.notify)
+            # push notifiers add their work items immediately
+            notifier = self._notifiers.get("push", None)
+            if notifier:
+                yield notifier.notify(self._transaction)
             self._transaction.notificationAddedForObject(self)
 
 

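Both notifyChanged() implementations now split notifiers by kind, which is why the store tests above assert notifierFactory.history before commit: push notifiers add their work items inside the transaction, while cache notifiers still wait for postCommit. The control flow, condensed:

    # cache notifier: still runs after the transaction commits.
    notifier = self._notifiers.get("cache", None)
    if notifier:
        self._transaction.postCommit(notifier.notify)

    # push notifier: adds its work items immediately, within this
    # transaction, so tests observe it before commit.
    notifier = self._notifiers.get("push", None)
    if notifier:
        yield notifier.notify(self._transaction)
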
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -29,9 +29,10 @@
 
 from pycalendar.datetime import PyCalendarDateTime
 
-from twext.enterprise.dal.syntax import \
-    Delete, utcNowSQL, Union, Insert, Len, Max, Parameter, SavepointAction, \
-    Select, Update, ColumnSyntax, TableSyntax, Upper, Count, ALL_COLUMNS, Sum
+from twext.enterprise.dal.syntax import (
+    Delete, utcNowSQL, Union, Insert, Len, Max, Parameter, SavepointAction,
+    Select, Update, ColumnSyntax, TableSyntax, Upper, Count, ALL_COLUMNS, Sum,
+    DatabaseLock, DatabaseUnlock)
 from twext.enterprise.ienterprise import AlreadyFinishedError
 from twext.enterprise.queue import LocalQueuer
 from twext.enterprise.util import parseSQLTimestamp
@@ -314,6 +315,7 @@
         self.label = label
         self.logFileName = logFileName
         self.statements = []
+        self.startTime = time.time()
 
 
     def startStatement(self, sql, args):
@@ -329,7 +331,7 @@
         """
         args = ["%s" % (arg,) for arg in args]
         args = [((arg[:10] + "...") if len(arg) > 40 else arg) for arg in args]
-        self.statements.append(["%s %s" % (sql, args,), 0, 0])
+        self.statements.append(["%s %s" % (sql, args,), 0, 0, 0])
         return len(self.statements) - 1, time.time()
 
 
@@ -343,8 +345,10 @@
         @type rows: C{int}
         """
         index, tstamp = context
+        t = time.time()
         self.statements[index][1] = len(rows) if rows else 0
-        self.statements[index][2] = time.time() - tstamp
+        self.statements[index][2] = t - tstamp
+        self.statements[index][3] = t
 
 
     def printReport(self):
@@ -352,19 +356,28 @@
         Print a report of all the SQL statements executed to date.
         """
 
+        total_statements = len(self.statements)
+        total_rows = sum([statement[1] for statement in self.statements])
+        total_time = sum([statement[2] for statement in self.statements]) * 1000.0
+
         toFile = StringIO()
         toFile.write("*** SQL Stats ***\n")
         toFile.write("\n")
         toFile.write("Label: %s\n" % (self.label,))
         toFile.write("Unique statements: %d\n" % (len(set([statement[0] for statement in self.statements]),),))
-        toFile.write("Total statements: %d\n" % (len(self.statements),))
-        toFile.write("Total rows: %d\n" % (sum([statement[1] for statement in self.statements]),))
-        toFile.write("Total time (ms): %.3f\n" % (sum([statement[2] for statement in self.statements]) * 1000.0,))
-        for sql, rows, t in self.statements:
+        toFile.write("Total statements: %d\n" % (total_statements,))
+        toFile.write("Total rows: %d\n" % (total_rows,))
+        toFile.write("Total time (ms): %.3f\n" % (total_time,))
+        t_last_end = self.startTime
+        for sql, rows, t_taken, t_end in self.statements:
             toFile.write("\n")
             toFile.write("SQL: %s\n" % (sql,))
             toFile.write("Rows: %s\n" % (rows,))
-            toFile.write("Time (ms): %.3f\n" % (t * 1000.0,))
+            toFile.write("Time (ms): %.3f\n" % (t_taken * 1000.0,))
+            toFile.write("Idle (ms): %.3f\n" % ((t_end - t_taken - t_last_end) * 1000.0,))
+            toFile.write("Elapsed (ms): %.3f\n" % ((t_end - self.startTime) * 1000.0,))
+            t_last_end = t_end
+        toFile.write("Commit (ms): %.3f\n" % ((time.time() - t_last_end) * 1000.0,))
         toFile.write("***\n\n")
 
         if self.logFileName:
@@ -372,8 +385,10 @@
         else:
             log.error(toFile.getvalue())
 
+        return (total_statements, total_rows, total_time,)
 
 
+
 class CommonStoreTransactionMonitor(object):
     """
     Object that monitors the state of a transaction over time and logs or times out
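
Each statement now records (sql, rows, t_taken, t_end), where t_end - t_taken gives the statement's start time. The new Idle figure is therefore the gap between the previous statement finishing and this one starting, and Elapsed is measured from transaction start. A worked line with hypothetical numbers:

    # previous statement ended at t_last_end = 10.000
    # this statement: t_taken = 0.002, t_end = 10.050
    idle = (t_end - t_taken) - t_last_end      # 10.048 - 10.000 = 0.048 s
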
@@ -483,7 +498,9 @@
         self.iudCount = 0
         self.currentStatement = None
 
+        self.logItems = {}
 
+
     def enqueue(self, workItem, **kw):
         """
         Enqueue a L{twext.enterprise.queue.WorkItem} for later execution.
@@ -550,14 +567,6 @@
         ).on(self)
 
 
-    def calendarHomeWithUID(self, uid, create=False):
-        return self.homeWithUID(ECALENDARTYPE, uid, create=create)
-
-
-    def addressbookHomeWithUID(self, uid, create=False):
-        return self.homeWithUID(EADDRESSBOOKTYPE, uid, create=create)
-
-
     def _determineMemo(self, storeType, uid, create=False): #@UnusedVariable
         """
         Determine the memo dictionary to use for homeWithUID.
@@ -591,6 +600,14 @@
         return self._homeClass[storeType].homeWithUID(self, uid, create)
 
 
+    def calendarHomeWithUID(self, uid, create=False):
+        return self.homeWithUID(ECALENDARTYPE, uid, create=create)
+
+
+    def addressbookHomeWithUID(self, uid, create=False):
+        return self.homeWithUID(EADDRESSBOOKTYPE, uid, create=create)
+
+
     @inlineCallbacks
     def homeWithResourceID(self, storeType, rid, create=False):
         """
@@ -1029,8 +1046,10 @@
         """
         Commit the transaction and execute any post-commit hooks.
         """
+
+        # Do stats logging as a postCommit because there might be some pending preCommit SQL we want to log
         if self._stats:
-            self._stats.printReport()
+            self.postCommit(self.statsReport)
         return self._sqlTxn.commit()
 
 
@@ -1041,6 +1060,16 @@
         return self._sqlTxn.abort()
 
 
+    def statsReport(self):
+        """
+        Print the stats report and record the SQL counters in the transaction's log items.
+        """
+        sql_statements, sql_rows, sql_time = self._stats.printReport()
+        self.logItems["sql-s"] = str(sql_statements)
+        self.logItems["sql-r"] = str(sql_rows)
+        self.logItems["sql-t"] = "%.1f" % (sql_time,)
+
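
The three keys recorded by statsReport() above ("sql-s", "sql-r", "sql-t") are presumably picked up when the request's log line is written. A hypothetical formatter showing the intended shape (the function name and call site are assumptions, not part of this change):

    def formatSQLLogItems(logItems):
        # logItems as populated by statsReport(), e.g.
        # {"sql-s": "12", "sql-r": "34", "sql-t": "5.6"}
        return " ".join(
            "%s=%s" % (key, logItems[key])
            for key in ("sql-s", "sql-r", "sql-t")
            if key in logItems
        )
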
+
     def _oldEventsBase(self, limit):
         ch = schema.CALENDAR_HOME
         co = schema.CALENDAR_OBJECT
@@ -1373,11 +1402,11 @@
 
 
     def acquireUpgradeLock(self):
-        return self.execSQL("select pg_advisory_lock(1)")
+        return DatabaseLock().on(self)
 
 
     def releaseUpgradeLock(self):
-        return self.execSQL("select pg_advisory_unlock(1)")
+        return DatabaseUnlock().on(self)
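
Replacing the raw pg_advisory_lock()/pg_advisory_unlock() statements with the DatabaseLock()/DatabaseUnlock() DAL statements presumably lets each database dialect render its own locking SQL instead of hard-coding the PostgreSQL form. A sketch of holding the lock across an upgrade step, assuming callers run inside an inlineCallbacks generator (the wrapper below is illustrative, not code from this changeset):

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def runUpgradeStep(txn, step):
        # Serialize schema upgrades across all nodes sharing one database.
        yield txn.acquireUpgradeLock()
        try:
            yield step(txn)
        finally:
            yield txn.releaseUpgradeLock()
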
 
 
 
@@ -1415,6 +1444,7 @@
         self._txn = transaction
         self._ownerUID = ownerUID
         self._resourceID = None
+        self._dataVersion = None
         self._childrenLoaded = False
         self._children = {}
         self._notifiers = None
@@ -1660,6 +1690,23 @@
             yield queryCacher.invalidateAfterCommit(self._txn, cacheKey)
 
 
+    @classproperty
+    def _dataVersionQuery(cls): #@NoSelf
+        ch = cls._homeSchema
+        return Select(
+            [ch.DATAVERSION], From=ch,
+            Where=ch.RESOURCE_ID == Parameter("resourceID")
+        )
+
+
+    @inlineCallbacks
+    def dataVersion(self):
+        if self._dataVersion is None:
+            self._dataVersion = (yield self._dataVersionQuery.on(
+                self._txn, resourceID=self._resourceID))[0][0]
+        returnValue(self._dataVersion)
+
+
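
dataVersion() uses the same lazy, per-instance memoization as the other home attributes: the first call runs _dataVersionQuery and caches the single-row, single-column result, and later calls return the cached value without touching the database. The pattern in isolation (class and names here are illustrative):

    from twisted.internet.defer import inlineCallbacks, returnValue

    class LazyVersion(object):
        def __init__(self, fetchRows):
            self._fetchRows = fetchRows   # returns a Deferred firing [[version]]
            self._dataVersion = None

        @inlineCallbacks
        def dataVersion(self):
            if self._dataVersion is None:
                rows = yield self._fetchRows()
                self._dataVersion = rows[0][0]
            returnValue(self._dataVersion)
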
     def name(self):
         """
         Implement L{IDataStoreObject.name} to return the uid.
@@ -2195,6 +2242,7 @@
         the resource has changed.  We ensure we only do this once per object
         per transaction.
         """
+
         if self._txn.isNotifiedAlready(self):
             returnValue(None)
         self._txn.notificationAddedForObject(self)
@@ -2205,8 +2253,14 @@
 
         # Send notifications
         if self._notifiers:
-            for notifier in self._notifiers.values():
+            # cache notifiers run as post-commit hooks
+            notifier = self._notifiers.get("cache", None)
+            if notifier:
                 self._txn.postCommit(notifier.notify)
+            # push notifiers add their work items immediately
+            notifier = self._notifiers.get("push", None)
+            if notifier:
+                yield notifier.notify(self._txn)
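
The notifier fan-out now distinguishes the two kinds by name: "cache" notifiers only need to fire once the new data is visible, so they are deferred to postCommit, while "push" notifiers enqueue work that runs against the current transaction so it commits (or rolls back) together with the change itself. The dispatch pattern, assuming a dict of notifiers keyed by kind:

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def dispatchNotifiers(txn, notifiers):
        cache = notifiers.get("cache")
        if cache:
            txn.postCommit(cache.notify)   # fires only after a successful commit
        push = notifiers.get("push")
        if push:
            yield push.notify(txn)         # work item joins this transaction
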
 
 
     @classproperty
@@ -2320,16 +2374,20 @@
         raise NotImplementedError()
 
 
-    @classproperty
-    def _objectNamesSinceRevisionQuery(cls): #@NoSelf
+    @classmethod
+    def _objectNamesSinceRevisionQuery(cls, deleted=True): #@NoSelf
         """
         DAL query for (resource, deleted-flag)
         """
         rev = cls._revisionsSchema
-        return Select([rev.RESOURCE_NAME, rev.DELETED],
-                      From=rev,
-                      Where=(rev.REVISION > Parameter("revision")).And(
-                          rev.RESOURCE_ID == Parameter("resourceID")))
+        where = (rev.REVISION > Parameter("revision")).And(rev.RESOURCE_ID == Parameter("resourceID"))
+        if not deleted:
+            where = where.And(rev.DELETED == False)
+        return Select(
+            [rev.RESOURCE_NAME, rev.DELETED],
+            From=rev,
+            Where=where,
+        )
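
Turning the query into a classmethod parameterized on deleted lets resourceNamesSinceToken (below) drop tombstone rows on an initial sync: a client supplying revision 0 has no prior state, so deleted entries are noise, and the extra DELETED == False predicate filters them out in SQL. Roughly (a standalone restatement of the call site below):

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def namesSince(collectionClass, txn, resourceID, revision):
        # revision == 0 is a full (initial) listing: skip deletions
        query = collectionClass._objectNamesSinceRevisionQuery(deleted=(revision != 0))
        rows = yield query.on(txn, revision=revision, resourceID=resourceID)
        returnValue([(name if name else "", deleted) for name, deleted in rows])
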
 
 
     def resourceNamesSinceToken(self, token):
@@ -2354,10 +2412,10 @@
         """
 
         results = [
-            (name if name else "", deleted)
-            for name, deleted in
-            (yield self._objectNamesSinceRevisionQuery.on(
-                self._txn, revision=revision, resourceID=self._resourceID))
+            (name if name else "", deleted) for name, deleted in
+                (yield self._objectNamesSinceRevisionQuery(deleted=(revision != 0)).on(
+                    self._txn, revision=revision, resourceID=self._resourceID)
+                )
         ]
         results.sort(key=lambda x: x[1])
 
@@ -2435,14 +2493,14 @@
     @classproperty
     def _bumpSyncTokenQuery(cls): #@NoSelf
         """
-        DAL query to change collection sync token.
+        DAL query to change the collection sync token. Note that this can update multiple
+        rows when the collection is shared.
         """
         rev = cls._revisionsSchema
         return Update(
             {rev.REVISION: schema.REVISION_SEQ, },
             Where=(rev.RESOURCE_ID == Parameter("resourceID")).And
-                  (rev.RESOURCE_NAME == None),
-            Return=rev.REVISION
+                  (rev.RESOURCE_NAME == None)
         )
 
 
@@ -2451,8 +2509,11 @@
 
         if not self._txn.isRevisionBumpedAlready(self):
             self._txn.bumpRevisionForObject(self)
-            self._syncTokenRevision = (yield self._bumpSyncTokenQuery.on(
-                self._txn, resourceID=self._resourceID))[0][0]
+            yield self._bumpSyncTokenQuery.on(
+                self._txn,
+                resourceID=self._resourceID,
+            )
+            self._syncTokenRevision = None
 
 
     @classproperty
@@ -2931,7 +2992,7 @@
 
 
     @inlineCallbacks
-    def updateShareFromSharingInvitation(self, invitation, mode=None, status=None, message=None, name=None):
+    def updateShareFromSharingInvitation(self, invitation, mode=None, status=None, message=None):
         """
         Like L{updateShare} except that the original invitation is provided. That is used
         to find the actual sharee L{CommonHomeChild} which is then passed to L{updateShare}.
@@ -2944,12 +3005,12 @@
         if shareeView is None:
             shareeView = yield shareeHome.invitedObjectWithShareUID(invitation.uid())
 
-        result = yield self.updateShare(shareeView, mode, status, message, name)
+        result = yield self.updateShare(shareeView, mode, status, message)
         returnValue(result)
 
 
     @inlineCallbacks
-    def updateShare(self, shareeView, mode=None, status=None, message=None, name=None):
+    def updateShare(self, shareeView, mode=None, status=None, message=None):
         """
         Update share mode, status, and message for a home child shared with
         this (owned) L{CommonHomeChild}.
@@ -2970,9 +3031,6 @@
             will be used as the default display name, or None to not update
         @type message: L{str}
 
-        @param name: The bind resource name or None to not update
-        @type message: L{str}
-
         @return: the name of the shared item in the sharee's home.
         @rtype: a L{Deferred} which fires with a L{str}
         """
@@ -2984,8 +3042,7 @@
         columnMap = dict([(k, v if v != "" else None)
                           for k, v in {bind.BIND_MODE:mode,
                             bind.BIND_STATUS:status,
-                            bind.MESSAGE:message,
-                            bind.RESOURCE_NAME:name}.iteritems() if v is not None])
+                            bind.MESSAGE:message}.iteritems() if v is not None])
 
         if len(columnMap):
 
@@ -3016,7 +3073,9 @@
             queryCacher = self._txn._queryCacher
             if queryCacher:
                 cacheKey = queryCacher.keyForObjectWithName(shareeView._home._resourceID, shareeView._name)
-                queryCacher.invalidateAfterCommit(self._txn, cacheKey)
+                yield queryCacher.invalidateAfterCommit(self._txn, cacheKey)
+                cacheKey = queryCacher.keyForObjectWithResourceID(shareeView._home._resourceID, shareeView._resourceID)
+                yield queryCacher.invalidateAfterCommit(self._txn, cacheKey)
 
             shareeView._name = sharedname[0][0]
 
@@ -3074,7 +3133,9 @@
             queryCacher = self._txn._queryCacher
             if queryCacher:
                 cacheKey = queryCacher.keyForObjectWithName(shareeHome._resourceID, shareeChild._name)
-                queryCacher.invalidateAfterCommit(self._txn, cacheKey)
+                yield queryCacher.invalidateAfterCommit(self._txn, cacheKey)
+                cacheKey = queryCacher.keyForObjectWithResourceID(shareeHome._resourceID, shareeChild._resourceID)
+                yield queryCacher.invalidateAfterCommit(self._txn, cacheKey)
         else:
             deletedBindName = None
 
@@ -3339,10 +3400,9 @@
     def invalidateQueryCache(self):
         queryCacher = self._txn._queryCacher
         if queryCacher is not None:
-            cacheKey = queryCacher.keyForHomeChildMetaData(self._resourceID)
-            yield queryCacher.invalidateAfterCommit(self._txn, cacheKey)
-            cacheKey = queryCacher.keyForObjectWithName(self._home._resourceID, self._name)
-            yield queryCacher.invalidateAfterCommit(self._txn, cacheKey)
+            yield queryCacher.invalidateAfterCommit(self._txn, queryCacher.keyForHomeChildMetaData(self._resourceID))
+            yield queryCacher.invalidateAfterCommit(self._txn, queryCacher.keyForObjectWithName(self._home._resourceID, self._name))
+            yield queryCacher.invalidateAfterCommit(self._txn, queryCacher.keyForObjectWithResourceID(self._home._resourceID, self._resourceID))
 
 
 
@@ -3519,6 +3579,7 @@
             if rows and queryCacher:
                 # Cache the result
                 queryCacher.setAfterCommit(home._txn, cacheKey, rows)
+                queryCacher.setAfterCommit(home._txn, queryCacher.keyForObjectWithResourceID(home._resourceID, rows[0][2]), rows)
 
         if not rows:
             returnValue(None)
@@ -3559,8 +3620,24 @@
         @return: an L{CommonHomeChild} or C{None} if no such child
             exists.
         """
-        rows = yield cls._bindForResourceIDAndHomeID.on(
-            home._txn, resourceID=resourceID, homeID=home._resourceID)
+
+        rows = None
+        queryCacher = home._txn._queryCacher
+
+        if queryCacher:
+            # Retrieve data from cache
+            cacheKey = queryCacher.keyForObjectWithResourceID(home._resourceID, resourceID)
+            rows = yield queryCacher.get(cacheKey)
+
+        if rows is None:
+            # No cached copy
+            rows = yield cls._bindForResourceIDAndHomeID.on(home._txn, resourceID=resourceID, homeID=home._resourceID)
+
+            if rows and queryCacher:
+                # Cache the result (under both the ID and name values)
+                queryCacher.setAfterCommit(home._txn, cacheKey, rows)
+                queryCacher.setAfterCommit(home._txn, queryCacher.keyForObjectWithName(home._resourceID, rows[0][3]), rows)
+
         if not rows:
             returnValue(None)
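
Bind rows are now primed in the cache under two keys, the child's resource ID (queried here) and its name (rows[0][3]), so that lookups by name and lookups by ID resolve to the same cached rows; the invalidation sites elsewhere in this changeset clear both keys for the same reason. The read-through shape, assuming a Memcache-style cacher with get()/setAfterCommit() as used here:

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def cachedBindRows(queryCacher, txn, homeID, resourceID, fetchFromSQL):
        key = queryCacher.keyForObjectWithResourceID(homeID, resourceID)
        rows = yield queryCacher.get(key)
        if rows is None:
            rows = yield fetchFromSQL()        # cache miss: hit the database
            if rows:
                # Prime both keys so either lookup path finds the same rows.
                queryCacher.setAfterCommit(txn, key, rows)
                nameKey = queryCacher.keyForObjectWithName(homeID, rows[0][3])
                queryCacher.setAfterCommit(txn, nameKey, rows)
        returnValue(rows)
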
 
@@ -3741,6 +3818,8 @@
         if queryCacher:
             cacheKey = queryCacher.keyForObjectWithName(self._home._resourceID, oldName)
             yield queryCacher.invalidateAfterCommit(self._home._txn, cacheKey)
+            cacheKey = queryCacher.keyForObjectWithResourceID(self._home._resourceID, self._resourceID)
+            yield queryCacher.invalidateAfterCommit(self._home._txn, cacheKey)
 
         yield self._renameQuery.on(self._txn, name=name,
                                    resourceID=self._resourceID,
@@ -3774,6 +3853,8 @@
         if queryCacher:
             cacheKey = queryCacher.keyForObjectWithName(self._home._resourceID, self._name)
             yield queryCacher.invalidateAfterCommit(self._home._txn, cacheKey)
+            cacheKey = queryCacher.keyForObjectWithResourceID(self._home._resourceID, self._resourceID)
+            yield queryCacher.invalidateAfterCommit(self._home._txn, cacheKey)
 
         yield self._deletedSyncToken()
         yield self._deleteQuery.on(self._txn, NoSuchHomeChildError,
@@ -4260,8 +4341,14 @@
 
         # Send notifications
         if self._notifiers:
-            for notifier in self._notifiers.values():
+            # cache notifiers run as post-commit hooks
+            notifier = self._notifiers.get("cache", None)
+            if notifier:
                 self._txn.postCommit(notifier.notify)
+            # push notifiers add their work items immediately
+            notifier = self._notifiers.get("push", None)
+            if notifier:
+                yield notifier.notify(self._txn)
 
 
     @classproperty
@@ -4484,7 +4571,7 @@
     @inlineCallbacks
     def create(cls, parent, name, component, options=None):
 
-        child = (yield cls.objectWithName(parent, name, None))
+        child = (yield parent.objectResourceWithName(name))
         if child:
             raise ObjectResourceNameAlreadyExistsError(name)
 
@@ -5081,15 +5168,21 @@
         the resource has changed.  We ensure we only do this once per object
         per transaction.
         """
-        yield
         if self._txn.isNotifiedAlready(self):
             returnValue(None)
         self._txn.notificationAddedForObject(self)
 
         # Send notifications
         if self._notifiers:
-            for notifier in self._notifiers.values():
+            # cache notifiers run as post-commit hooks
+            notifier = self._notifiers.get("cache", None)
+            if notifier:
                 self._txn.postCommit(notifier.notify)
+            # push notifiers add their work items immediately
+            notifier = self._notifiers.get("push", None)
+            if notifier:
+                yield notifier.notify(self._txn)
+
         returnValue(None)
 
 


Property changes on: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql.py
___________________________________________________________________
Deleted: svn:mergeinfo
   - /CalDAVTester/trunk/txdav/common/datastore/sql.py:11193-11198
/CalendarServer/branches/config-separation/txdav/common/datastore/sql.py:4379-4443
/CalendarServer/branches/egg-info-351/txdav/common/datastore/sql.py:4589-4625
/CalendarServer/branches/generic-sqlstore/txdav/common/datastore/sql.py:6167
/CalendarServer/branches/new-store/txdav/common/datastore/sql.py:5594-5934
/CalendarServer/branches/new-store-no-caldavfile/txdav/common/datastore/sql.py:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2/txdav/common/datastore/sql.py:5936-5981
/CalendarServer/branches/release/CalendarServer-4.3-dev/txdav/common/datastore/sql.py:10180-10190,10192
/CalendarServer/branches/users/cdaboo/batchupload-6699/txdav/common/datastore/sql.py:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692/txdav/common/datastore/sql.py:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/sql.py:8130-8346
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627/txdav/common/datastore/sql.py:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace/txdav/common/datastore/sql.py:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim/txdav/common/datastore/sql.py:9747-9979
/CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/sql.py:9985-10145
/CalendarServer/branches/users/cdaboo/more-sharing-5591/txdav/common/datastore/sql.py:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464/txdav/common/datastore/sql.py:4465-4957
/CalendarServer/branches/users/cdaboo/pods/txdav/common/datastore/sql.py:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar/txdav/common/datastore/sql.py:7085-7206
/CalendarServer/branches/users/cdaboo/pycard/txdav/common/datastore/sql.py:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes/txdav/common/datastore/sql.py:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070/txdav/common/datastore/sql.py:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187/txdav/common/datastore/sql.py:5188-5440
/CalendarServer/branches/users/cdaboo/timezones/txdav/common/datastore/sql.py:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging/txdav/common/datastore/sql.py:8730-8743
/CalendarServer/branches/users/gaya/sharedgroups-3/txdav/common/datastore/sql.py:11088-11204
/CalendarServer/branches/users/glyph/always-abort-txn-on-error/txdav/common/datastore/sql.py:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid/txdav/common/datastore/sql.py:8772-8805
/CalendarServer/branches/users/glyph/conn-limit/txdav/common/datastore/sql.py:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge/txdav/common/datastore/sql.py:4971-5080
/CalendarServer/branches/users/glyph/dalify/txdav/common/datastore/sql.py:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect/txdav/common/datastore/sql.py:6824-6876
/CalendarServer/branches/users/glyph/deploybuild/txdav/common/datastore/sql.py:7563-7572
/CalendarServer/branches/users/glyph/digest-auth-redux/txdav/common/datastore/sql.py:10624-10635
/CalendarServer/branches/users/glyph/disable-quota/txdav/common/datastore/sql.py:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres/txdav/common/datastore/sql.py:6592-6614
/CalendarServer/branches/users/glyph/imip-and-admin-html/txdav/common/datastore/sql.py:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client/txdav/common/datastore/sql.py:9054-9105
/CalendarServer/branches/users/glyph/linux-tests/txdav/common/datastore/sql.py:6893-6900
/CalendarServer/branches/users/glyph/migrate-merge/txdav/common/datastore/sql.py:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes/txdav/common/datastore/sql.py:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6/txdav/common/datastore/sql.py:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7/txdav/common/datastore/sql.py:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete/txdav/common/datastore/sql.py:8321-8330
/CalendarServer/branches/users/glyph/new-export/txdav/common/datastore/sql.py:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api/txdav/common/datastore/sql.py:10048-10073
/CalendarServer/branches/users/glyph/oracle/txdav/common/datastore/sql.py:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls/txdav/common/datastore/sql.py:7340-7351
/CalendarServer/branches/users/glyph/other-html/txdav/common/datastore/sql.py:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim/txdav/common/datastore/sql.py:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade/txdav/common/datastore/sql.py:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1/txdav/common/datastore/sql.py:8571-8583
/CalendarServer/branches/users/glyph/q/txdav/common/datastore/sql.py:9560-9688
/CalendarServer/branches/users/glyph/queue-locking-and-timing/txdav/common/datastore/sql.py:10204-10289
/CalendarServer/branches/users/glyph/quota/txdav/common/datastore/sql.py:7604-7637
/CalendarServer/branches/users/glyph/sendfdport/txdav/common/datastore/sql.py:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes/txdav/common/datastore/sql.py:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2/txdav/common/datastore/sql.py:8155-8174
/CalendarServer/branches/users/glyph/sharedpool/txdav/common/datastore/sql.py:6490-6550
/CalendarServer/branches/users/glyph/sharing-api/txdav/common/datastore/sql.py:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones/txdav/common/datastore/sql.py:8524-8535
/CalendarServer/branches/users/glyph/sql-store/txdav/common/datastore/sql.py:5929-6073
/CalendarServer/branches/users/glyph/start-service-start-loop/txdav/common/datastore/sql.py:11060-11065
/CalendarServer/branches/users/glyph/subtransactions/txdav/common/datastore/sql.py:7248-7258
/CalendarServer/branches/users/glyph/table-alias/txdav/common/datastore/sql.py:8651-8664
/CalendarServer/branches/users/glyph/uidexport/txdav/common/datastore/sql.py:7673-7676
/CalendarServer/branches/users/glyph/unshare-when-access-revoked/txdav/common/datastore/sql.py:10562-10595
/CalendarServer/branches/users/glyph/use-system-twisted/txdav/common/datastore/sql.py:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize/txdav/common/datastore/sql.py:9268-9296
/CalendarServer/branches/users/glyph/warning-cleanups/txdav/common/datastore/sql.py:11347-11357
/CalendarServer/branches/users/glyph/xattrs-from-files/txdav/common/datastore/sql.py:7757-7769
/CalendarServer/branches/users/sagen/applepush/txdav/common/datastore/sql.py:8126-8184
/CalendarServer/branches/users/sagen/inboxitems/txdav/common/datastore/sql.py:7380-7381
/CalendarServer/branches/users/sagen/locations-resources/txdav/common/datastore/sql.py:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2/txdav/common/datastore/sql.py:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events/txdav/common/datastore/sql.py:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038/txdav/common/datastore/sql.py:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066/txdav/common/datastore/sql.py:4068-4075
/CalendarServer/branches/users/sagen/resources-2/txdav/common/datastore/sql.py:5084-5093
/CalendarServer/branches/users/sagen/testing/txdav/common/datastore/sql.py:10827-10851,10853-10855
/CalendarServer/branches/users/wsanchez/transations/txdav/common/datastore/sql.py:5515-5593

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/current-oracle-dialect.sql
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/current-oracle-dialect.sql	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/current-oracle-dialect.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -218,13 +218,13 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
     "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
-    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"), 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
     unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
 );
 
@@ -268,13 +268,13 @@
 create table SHARED_GROUP_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
-    "GROUP_ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
     primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
-    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_RESOURCE_NAME")
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
 );
 
 create table CALENDAR_OBJECT_REVISIONS (
@@ -288,7 +288,7 @@
 
 create table ADDRESSBOOK_OBJECT_REVISIONS (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
     "ADDRESSBOOK_NAME" nvarchar2(255) default null,
     "RESOURCE_NAME" nvarchar2(255),
     "REVISION" integer not null,
@@ -365,7 +365,7 @@
     "VALUE" nvarchar2(255)
 );
 
-insert into CALENDARSERVER (NAME, VALUE) values ('VERSION', '24');
+insert into CALENDARSERVER (NAME, VALUE) values ('VERSION', '26');
 insert into CALENDARSERVER (NAME, VALUE) values ('CALENDAR-DATAVERSION', '5');
 insert into CALENDARSERVER (NAME, VALUE) values ('ADDRESSBOOK-DATAVERSION', '2');
 create index CALENDAR_HOME_METADAT_3cb9049e on CALENDAR_HOME_METADATA (
@@ -423,7 +423,7 @@
 );
 
 create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index ABO_MEMBERS_ADDRESSBO_4effa879 on ABO_MEMBERS (
@@ -447,9 +447,11 @@
     CALENDAR_RESOURCE_ID
 );
 
-create index CALENDAR_OBJECT_REVIS_2643d556 on CALENDAR_OBJECT_REVISIONS (
+create index CALENDAR_OBJECT_REVIS_6d9d929c on CALENDAR_OBJECT_REVISIONS (
     CALENDAR_RESOURCE_ID,
-    RESOURCE_NAME
+    RESOURCE_NAME,
+    DELETED,
+    REVISION
 );
 
 create index CALENDAR_OBJECT_REVIS_265c8acf on CALENDAR_OBJECT_REVISIONS (
@@ -457,18 +459,20 @@
     REVISION
 );
 
-create index ADDRESSBOOK_OBJECT_RE_40cc2d73 on ADDRESSBOOK_OBJECT_REVISIONS (
+create index ADDRESSBOOK_OBJECT_RE_2bfcf757 on ADDRESSBOOK_OBJECT_REVISIONS (
     ADDRESSBOOK_HOME_RESOURCE_ID,
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
-create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
-    RESOURCE_NAME
+create index ADDRESSBOOK_OBJECT_RE_00fe8288 on ADDRESSBOOK_OBJECT_REVISIONS (
+    OWNER_HOME_RESOURCE_ID,
+    RESOURCE_NAME,
+    DELETED,
+    REVISION
 );
 
 create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     REVISION
 );
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/current.sql
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/current.sql	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/current.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -398,19 +398,19 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
   ADDRESSBOOK_HOME_RESOURCE_ID			integer			not null references ADDRESSBOOK_HOME,
-  OWNER_ADDRESSBOOK_HOME_RESOURCE_ID    integer      	not null references ADDRESSBOOK_HOME on delete cascade,
+  OWNER_HOME_RESOURCE_ID    			integer      	not null references ADDRESSBOOK_HOME on delete cascade,
   ADDRESSBOOK_RESOURCE_NAME    			varchar(255) 	not null,
   BIND_MODE                    			integer      	not null,	-- enum CALENDAR_BIND_MODE
   BIND_STATUS                  			integer      	not null,	-- enum CALENDAR_BIND_STATUS
   BIND_REVISION				   			integer      	default 0 not null,
   MESSAGE                      			text,                  		-- FIXME: xml?
 
-  primary key (ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_ADDRESSBOOK_HOME_RESOURCE_ID), -- implicit index
+  primary key (ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_HOME_RESOURCE_ID), -- implicit index
   unique (ADDRESSBOOK_HOME_RESOURCE_ID, ADDRESSBOOK_RESOURCE_NAME)     -- implicit index
 );
 
 create index SHARED_ADDRESSBOOK_BIND_RESOURCE_ID on
-  SHARED_ADDRESSBOOK_BIND(OWNER_ADDRESSBOOK_HOME_RESOURCE_ID);
+  SHARED_ADDRESSBOOK_BIND(OWNER_HOME_RESOURCE_ID);
 
 
 ------------------------
@@ -489,14 +489,14 @@
 create table SHARED_GROUP_BIND (	
   ADDRESSBOOK_HOME_RESOURCE_ID 		integer      not null references ADDRESSBOOK_HOME,
   GROUP_RESOURCE_ID      			integer      not null references ADDRESSBOOK_OBJECT on delete cascade,
-  GROUP_ADDRESSBOOK_RESOURCE_NAME	varchar(255) not null,
+  GROUP_ADDRESSBOOK_NAME			varchar(255) not null,
   BIND_MODE                    		integer      not null, -- enum CALENDAR_BIND_MODE
   BIND_STATUS                  		integer      not null, -- enum CALENDAR_BIND_STATUS
   BIND_REVISION				   		integer      default 0 not null,
   MESSAGE                      		text,                  -- FIXME: xml?
 
   primary key (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_RESOURCE_ID), -- implicit index
-  unique (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_ADDRESSBOOK_RESOURCE_NAME)     -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_ADDRESSBOOK_NAME)     -- implicit index
 );
 
 create index SHARED_GROUP_BIND_RESOURCE_ID on
@@ -526,8 +526,8 @@
 create index CALENDAR_OBJECT_REVISIONS_HOME_RESOURCE_ID_CALENDAR_RESOURCE_ID
   on CALENDAR_OBJECT_REVISIONS(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_ID);
 
-create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_RESOURCE_NAME
-  on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, RESOURCE_NAME);
+create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_RESOURCE_NAME_DELETED_REVISION
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, RESOURCE_NAME, DELETED, REVISION);
 
 create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_REVISION
   on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, REVISION);
@@ -539,21 +539,21 @@
 
 create table ADDRESSBOOK_OBJECT_REVISIONS (
   ADDRESSBOOK_HOME_RESOURCE_ID 			integer			not null references ADDRESSBOOK_HOME,
-  OWNER_ADDRESSBOOK_HOME_RESOURCE_ID    integer     	references ADDRESSBOOK_HOME,
+  OWNER_HOME_RESOURCE_ID    			integer     	references ADDRESSBOOK_HOME,
   ADDRESSBOOK_NAME             			varchar(255) 	default null,
   RESOURCE_NAME                			varchar(255),
   REVISION                     			integer     	default nextval('REVISION_SEQ') not null,
   DELETED                      			boolean      	not null
 );
 
-create index ADDRESSBOOK_OBJECT_REVISIONS_HOME_RESOURCE_ID_OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
-  on ADDRESSBOOK_OBJECT_REVISIONS(ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_ADDRESSBOOK_HOME_RESOURCE_ID);
+create index ADDRESSBOOK_OBJECT_REVISIONS_HOME_RESOURCE_ID_OWNER_HOME_RESOURCE_ID
+  on ADDRESSBOOK_OBJECT_REVISIONS(ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_HOME_RESOURCE_ID);
 
-create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_RESOURCE_NAME
-  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_ADDRESSBOOK_HOME_RESOURCE_ID, RESOURCE_NAME);
+create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_RESOURCE_NAME_DELETED_REVISION
+  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_HOME_RESOURCE_ID, RESOURCE_NAME, DELETED, REVISION);
 
 create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_REVISION
-  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_ADDRESSBOOK_HOME_RESOURCE_ID, REVISION);
+  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_HOME_RESOURCE_ID, REVISION);
 
 
 -----------------------------------
@@ -695,6 +695,6 @@
   VALUE                         varchar(255)
 );
 
-insert into CALENDARSERVER values ('VERSION', '24');
+insert into CALENDARSERVER values ('VERSION', '26');
 insert into CALENDARSERVER values ('CALENDAR-DATAVERSION', '5');
 insert into CALENDARSERVER values ('ADDRESSBOOK-DATAVERSION', '2');

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v20.sql
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v20.sql	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v20.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -216,13 +216,13 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
     "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
-    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"), 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
     unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
 );
 
@@ -266,13 +266,13 @@
 create table SHARED_GROUP_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
-    "GROUP_ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
     primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
-    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_RESOURCE_NAME")
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
 );
 
 create table CALENDAR_OBJECT_REVISIONS (
@@ -286,7 +286,7 @@
 
 create table ADDRESSBOOK_OBJECT_REVISIONS (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
     "ADDRESSBOOK_NAME" nvarchar2(255) default null,
     "RESOURCE_NAME" nvarchar2(255),
     "REVISION" integer not null,
@@ -403,7 +403,7 @@
 );
 
 create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
@@ -427,16 +427,16 @@
 
 create index ADDRESSBOOK_OBJECT_RE_40cc2d73 on ADDRESSBOOK_OBJECT_REVISIONS (
     ADDRESSBOOK_HOME_RESOURCE_ID,
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     RESOURCE_NAME
 );
 
 create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     REVISION
 );
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v21.sql
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v21.sql	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v21.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -216,13 +216,13 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
     "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
-    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"), 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
     unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
 );
 
@@ -266,13 +266,13 @@
 create table SHARED_GROUP_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
-    "GROUP_ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
     primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
-    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_RESOURCE_NAME")
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
 );
 
 create table CALENDAR_OBJECT_REVISIONS (
@@ -286,7 +286,7 @@
 
 create table ADDRESSBOOK_OBJECT_REVISIONS (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
     "ADDRESSBOOK_NAME" nvarchar2(255) default null,
     "RESOURCE_NAME" nvarchar2(255),
     "REVISION" integer not null,
@@ -403,7 +403,7 @@
 );
 
 create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
@@ -427,16 +427,16 @@
 
 create index ADDRESSBOOK_OBJECT_RE_40cc2d73 on ADDRESSBOOK_OBJECT_REVISIONS (
     ADDRESSBOOK_HOME_RESOURCE_ID,
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     RESOURCE_NAME
 );
 
 create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     REVISION
 );
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v22.sql
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v22.sql	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v22.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -218,13 +218,13 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
     "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
-    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"), 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
     unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
 );
 
@@ -268,13 +268,13 @@
 create table SHARED_GROUP_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
-    "GROUP_ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
     primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
-    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_RESOURCE_NAME")
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
 );
 
 create table CALENDAR_OBJECT_REVISIONS (
@@ -288,7 +288,7 @@
 
 create table ADDRESSBOOK_OBJECT_REVISIONS (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
     "ADDRESSBOOK_NAME" nvarchar2(255) default null,
     "RESOURCE_NAME" nvarchar2(255),
     "REVISION" integer not null,
@@ -405,7 +405,7 @@
 );
 
 create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
@@ -429,16 +429,16 @@
 
 create index ADDRESSBOOK_OBJECT_RE_40cc2d73 on ADDRESSBOOK_OBJECT_REVISIONS (
     ADDRESSBOOK_HOME_RESOURCE_ID,
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     RESOURCE_NAME
 );
 
 create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     REVISION
 );
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v23.sql
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v23.sql	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v23.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -218,13 +218,13 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
     "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
-    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"), 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
     unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
 );
 
@@ -268,13 +268,13 @@
 create table SHARED_GROUP_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
-    "GROUP_ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
     primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
-    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_RESOURCE_NAME")
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
 );
 
 create table CALENDAR_OBJECT_REVISIONS (
@@ -288,7 +288,7 @@
 
 create table ADDRESSBOOK_OBJECT_REVISIONS (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
     "ADDRESSBOOK_NAME" nvarchar2(255) default null,
     "RESOURCE_NAME" nvarchar2(255),
     "REVISION" integer not null,
@@ -411,7 +411,7 @@
 );
 
 create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
@@ -435,16 +435,16 @@
 
 create index ADDRESSBOOK_OBJECT_RE_40cc2d73 on ADDRESSBOOK_OBJECT_REVISIONS (
     ADDRESSBOOK_HOME_RESOURCE_ID,
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     RESOURCE_NAME
 );
 
 create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     REVISION
 );
 

Copied: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v24.sql (from rev 11870, CalendarServer/trunk/txdav/common/datastore/sql_schema/old/oracle-dialect/v24.sql)
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v24.sql	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v24.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -0,0 +1,491 @@
+create sequence RESOURCE_ID_SEQ;
+create sequence INSTANCE_ID_SEQ;
+create sequence ATTACHMENT_ID_SEQ;
+create sequence REVISION_SEQ;
+create sequence WORKITEM_SEQ;
+create table NODE_INFO (
+    "HOSTNAME" nvarchar2(255),
+    "PID" integer not null,
+    "PORT" integer not null,
+    "TIME" timestamp default CURRENT_TIMESTAMP at time zone 'UTC' not null, 
+    primary key("HOSTNAME", "PORT")
+);
+
+create table NAMED_LOCK (
+    "LOCK_NAME" nvarchar2(255) primary key
+);
+
+create table CALENDAR_HOME (
+    "RESOURCE_ID" integer primary key,
+    "OWNER_UID" nvarchar2(255) unique,
+    "DATAVERSION" integer default 0 not null
+);
+
+create table CALENDAR (
+    "RESOURCE_ID" integer primary key
+);
+
+create table CALENDAR_HOME_METADATA (
+    "RESOURCE_ID" integer primary key references CALENDAR_HOME on delete cascade,
+    "QUOTA_USED_BYTES" integer default 0 not null,
+    "DEFAULT_EVENTS" integer default null references CALENDAR on delete set null,
+    "DEFAULT_TASKS" integer default null references CALENDAR on delete set null,
+    "ALARM_VEVENT_TIMED" nclob default null,
+    "ALARM_VEVENT_ALLDAY" nclob default null,
+    "ALARM_VTODO_TIMED" nclob default null,
+    "ALARM_VTODO_ALLDAY" nclob default null,
+    "AVAILABILITY" nclob default null,
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table CALENDAR_METADATA (
+    "RESOURCE_ID" integer primary key references CALENDAR on delete cascade,
+    "SUPPORTED_COMPONENTS" nvarchar2(255) default null,
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table NOTIFICATION_HOME (
+    "RESOURCE_ID" integer primary key,
+    "OWNER_UID" nvarchar2(255) unique
+);
+
+create table NOTIFICATION (
+    "RESOURCE_ID" integer primary key,
+    "NOTIFICATION_HOME_RESOURCE_ID" integer not null references NOTIFICATION_HOME,
+    "NOTIFICATION_UID" nvarchar2(255),
+    "XML_TYPE" nvarchar2(255),
+    "XML_DATA" nclob,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    unique("NOTIFICATION_UID", "NOTIFICATION_HOME_RESOURCE_ID")
+);
+
+create table CALENDAR_BIND (
+    "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+    "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+    "CALENDAR_RESOURCE_NAME" nvarchar2(255),
+    "BIND_MODE" integer not null,
+    "BIND_STATUS" integer not null,
+    "BIND_REVISION" integer default 0 not null,
+    "MESSAGE" nclob,
+    "TRANSP" integer default 0 not null,
+    "ALARM_VEVENT_TIMED" nclob default null,
+    "ALARM_VEVENT_ALLDAY" nclob default null,
+    "ALARM_VTODO_TIMED" nclob default null,
+    "ALARM_VTODO_ALLDAY" nclob default null,
+    "TIMEZONE" nclob default null, 
+    primary key("CALENDAR_HOME_RESOURCE_ID", "CALENDAR_RESOURCE_ID"), 
+    unique("CALENDAR_HOME_RESOURCE_ID", "CALENDAR_RESOURCE_NAME")
+);
+
+create table CALENDAR_BIND_MODE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('own', 0);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('read', 1);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('write', 2);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('direct', 3);
+create table CALENDAR_BIND_STATUS (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('invited', 0);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('accepted', 1);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('declined', 2);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('invalid', 3);
+create table CALENDAR_TRANSP (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_TRANSP (DESCRIPTION, ID) values ('opaque', 0);
+insert into CALENDAR_TRANSP (DESCRIPTION, ID) values ('transparent', 1);
+create table CALENDAR_OBJECT (
+    "RESOURCE_ID" integer primary key,
+    "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+    "RESOURCE_NAME" nvarchar2(255),
+    "ICALENDAR_TEXT" nclob,
+    "ICALENDAR_UID" nvarchar2(255),
+    "ICALENDAR_TYPE" nvarchar2(255),
+    "ATTACHMENTS_MODE" integer default 0 not null,
+    "DROPBOX_ID" nvarchar2(255),
+    "ORGANIZER" nvarchar2(255),
+    "RECURRANCE_MIN" date,
+    "RECURRANCE_MAX" date,
+    "ACCESS" integer default 0 not null,
+    "SCHEDULE_OBJECT" integer default 0,
+    "SCHEDULE_TAG" nvarchar2(36) default null,
+    "SCHEDULE_ETAGS" nclob default null,
+    "PRIVATE_COMMENTS" integer default 0 not null,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    unique("CALENDAR_RESOURCE_ID", "RESOURCE_NAME")
+);
+
+create table CALENDAR_OBJECT_ATTACHMENTS_MO (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_OBJECT_ATTACHMENTS_MO (DESCRIPTION, ID) values ('none', 0);
+insert into CALENDAR_OBJECT_ATTACHMENTS_MO (DESCRIPTION, ID) values ('read', 1);
+insert into CALENDAR_OBJECT_ATTACHMENTS_MO (DESCRIPTION, ID) values ('write', 2);
+create table CALENDAR_ACCESS_TYPE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(32) unique
+);
+
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('', 0);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('public', 1);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('private', 2);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('confidential', 3);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('restricted', 4);
+create table TIME_RANGE (
+    "INSTANCE_ID" integer primary key,
+    "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+    "CALENDAR_OBJECT_RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade,
+    "FLOATING" integer not null,
+    "START_DATE" timestamp not null,
+    "END_DATE" timestamp not null,
+    "FBTYPE" integer not null,
+    "TRANSPARENT" integer not null
+);
+
+create table FREE_BUSY_TYPE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('unknown', 0);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('free', 1);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy', 2);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy-unavailable', 3);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy-tentative', 4);
+create table TRANSPARENCY (
+    "TIME_RANGE_INSTANCE_ID" integer not null references TIME_RANGE on delete cascade,
+    "USER_ID" nvarchar2(255),
+    "TRANSPARENT" integer not null
+);
+
+create table ATTACHMENT (
+    "ATTACHMENT_ID" integer primary key,
+    "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+    "DROPBOX_ID" nvarchar2(255),
+    "CONTENT_TYPE" nvarchar2(255),
+    "SIZE" integer not null,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "PATH" nvarchar2(1024)
+);
+
+create table ATTACHMENT_CALENDAR_OBJECT (
+    "ATTACHMENT_ID" integer not null references ATTACHMENT on delete cascade,
+    "MANAGED_ID" nvarchar2(255),
+    "CALENDAR_OBJECT_RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade, 
+    primary key("ATTACHMENT_ID", "CALENDAR_OBJECT_RESOURCE_ID"), 
+    unique("MANAGED_ID", "CALENDAR_OBJECT_RESOURCE_ID")
+);
+
+create table RESOURCE_PROPERTY (
+    "RESOURCE_ID" integer not null,
+    "NAME" nvarchar2(255),
+    "VALUE" nclob,
+    "VIEWER_UID" nvarchar2(255), 
+    primary key("RESOURCE_ID", "NAME", "VIEWER_UID")
+);
+
+create table ADDRESSBOOK_HOME (
+    "RESOURCE_ID" integer primary key,
+    "ADDRESSBOOK_PROPERTY_STORE_ID" integer not null,
+    "OWNER_UID" nvarchar2(255) unique,
+    "DATAVERSION" integer default 0 not null
+);
+
+create table ADDRESSBOOK_HOME_METADATA (
+    "RESOURCE_ID" integer primary key references ADDRESSBOOK_HOME on delete cascade,
+    "QUOTA_USED_BYTES" integer default 0 not null,
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table SHARED_ADDRESSBOOK_BIND (
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "BIND_MODE" integer not null,
+    "BIND_STATUS" integer not null,
+    "BIND_REVISION" integer default 0 not null,
+    "MESSAGE" nclob, 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
+);
+
+create table ADDRESSBOOK_OBJECT (
+    "RESOURCE_ID" integer primary key,
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "RESOURCE_NAME" nvarchar2(255),
+    "VCARD_TEXT" nclob,
+    "VCARD_UID" nvarchar2(255),
+    "KIND" integer not null,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "RESOURCE_NAME"), 
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "VCARD_UID")
+);
+
+create table ADDRESSBOOK_OBJECT_KIND (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('person', 0);
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('group', 1);
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('resource', 2);
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('location', 3);
+create table ABO_MEMBERS (
+    "GROUP_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
+    "ADDRESSBOOK_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "MEMBER_ID" integer not null references ADDRESSBOOK_OBJECT, 
+    primary key("GROUP_ID", "MEMBER_ID")
+);
+
+create table ABO_FOREIGN_MEMBERS (
+    "GROUP_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
+    "ADDRESSBOOK_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "MEMBER_ADDRESS" nvarchar2(255), 
+    primary key("GROUP_ID", "MEMBER_ADDRESS")
+);
+
+create table SHARED_GROUP_BIND (
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+    "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
+    "BIND_MODE" integer not null,
+    "BIND_STATUS" integer not null,
+    "BIND_REVISION" integer default 0 not null,
+    "MESSAGE" nclob, 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
+);
+
+create table CALENDAR_OBJECT_REVISIONS (
+    "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+    "CALENDAR_RESOURCE_ID" integer references CALENDAR,
+    "CALENDAR_NAME" nvarchar2(255) default null,
+    "RESOURCE_NAME" nvarchar2(255),
+    "REVISION" integer not null,
+    "DELETED" integer not null
+);
+
+create table ADDRESSBOOK_OBJECT_REVISIONS (
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "ADDRESSBOOK_NAME" nvarchar2(255) default null,
+    "RESOURCE_NAME" nvarchar2(255),
+    "REVISION" integer not null,
+    "DELETED" integer not null
+);
+
+create table NOTIFICATION_OBJECT_REVISIONS (
+    "NOTIFICATION_HOME_RESOURCE_ID" integer not null references NOTIFICATION_HOME on delete cascade,
+    "RESOURCE_NAME" nvarchar2(255),
+    "REVISION" integer not null,
+    "DELETED" integer not null, 
+    unique("NOTIFICATION_HOME_RESOURCE_ID", "RESOURCE_NAME")
+);
+
+create table APN_SUBSCRIPTIONS (
+    "TOKEN" nvarchar2(255),
+    "RESOURCE_KEY" nvarchar2(255),
+    "MODIFIED" integer not null,
+    "SUBSCRIBER_GUID" nvarchar2(255),
+    "USER_AGENT" nvarchar2(255) default null,
+    "IP_ADDR" nvarchar2(255) default null, 
+    primary key("TOKEN", "RESOURCE_KEY")
+);
+
+create table IMIP_TOKENS (
+    "TOKEN" nvarchar2(255),
+    "ORGANIZER" nvarchar2(255),
+    "ATTENDEE" nvarchar2(255),
+    "ICALUID" nvarchar2(255),
+    "ACCESSED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    primary key("ORGANIZER", "ATTENDEE", "ICALUID")
+);
+
+create table IMIP_INVITATION_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "FROM_ADDR" nvarchar2(255),
+    "TO_ADDR" nvarchar2(255),
+    "ICALENDAR_TEXT" nclob
+);
+
+create table IMIP_POLLING_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table IMIP_REPLY_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "ORGANIZER" nvarchar2(255),
+    "ATTENDEE" nvarchar2(255),
+    "ICALENDAR_TEXT" nclob
+);
+
+create table PUSH_NOTIFICATION_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "PUSH_ID" nvarchar2(255)
+);
+
+create table GROUP_CACHER_POLLING_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table CALENDAR_OBJECT_SPLITTER_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade
+);
+
+create table CALENDARSERVER (
+    "NAME" nvarchar2(255) primary key,
+    "VALUE" nvarchar2(255)
+);
+
+insert into CALENDARSERVER (NAME, VALUE) values ('VERSION', '24');
+insert into CALENDARSERVER (NAME, VALUE) values ('CALENDAR-DATAVERSION', '5');
+insert into CALENDARSERVER (NAME, VALUE) values ('ADDRESSBOOK-DATAVERSION', '2');
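+
+-- For illustration only (not part of the generated schema): a schema-version
+-- check can read these keys back with a simple lookup, e.g.
+--
+--   select VALUE from CALENDARSERVER where NAME = 'VERSION';
+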
+create index CALENDAR_HOME_METADAT_3cb9049e on CALENDAR_HOME_METADATA (
+    DEFAULT_EVENTS
+);
+
+create index CALENDAR_HOME_METADAT_d55e5548 on CALENDAR_HOME_METADATA (
+    DEFAULT_TASKS
+);
+
+create index NOTIFICATION_NOTIFICA_f891f5f9 on NOTIFICATION (
+    NOTIFICATION_HOME_RESOURCE_ID
+);
+
+create index CALENDAR_BIND_RESOURC_e57964d4 on CALENDAR_BIND (
+    CALENDAR_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_CALEN_a9a453a9 on CALENDAR_OBJECT (
+    CALENDAR_RESOURCE_ID,
+    ICALENDAR_UID
+);
+
+create index CALENDAR_OBJECT_CALEN_96e83b73 on CALENDAR_OBJECT (
+    CALENDAR_RESOURCE_ID,
+    RECURRANCE_MAX
+);
+
+create index CALENDAR_OBJECT_ICALE_82e731d5 on CALENDAR_OBJECT (
+    ICALENDAR_UID
+);
+
+create index CALENDAR_OBJECT_DROPB_de041d80 on CALENDAR_OBJECT (
+    DROPBOX_ID
+);
+
+create index TIME_RANGE_CALENDAR_R_beb6e7eb on TIME_RANGE (
+    CALENDAR_RESOURCE_ID
+);
+
+create index TIME_RANGE_CALENDAR_O_acf37bd1 on TIME_RANGE (
+    CALENDAR_OBJECT_RESOURCE_ID
+);
+
+create index TRANSPARENCY_TIME_RAN_5f34467f on TRANSPARENCY (
+    TIME_RANGE_INSTANCE_ID
+);
+
+create index ATTACHMENT_CALENDAR_H_0078845c on ATTACHMENT (
+    CALENDAR_HOME_RESOURCE_ID
+);
+
+create index ATTACHMENT_CALENDAR_O_81508484 on ATTACHMENT_CALENDAR_OBJECT (
+    CALENDAR_OBJECT_RESOURCE_ID
+);
+
+create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
+    OWNER_HOME_RESOURCE_ID
+);
+
+create index ABO_MEMBERS_ADDRESSBO_4effa879 on ABO_MEMBERS (
+    ADDRESSBOOK_ID
+);
+
+create index ABO_MEMBERS_MEMBER_ID_8d66adcf on ABO_MEMBERS (
+    MEMBER_ID
+);
+
+create index ABO_FOREIGN_MEMBERS_A_1fd2c5e9 on ABO_FOREIGN_MEMBERS (
+    ADDRESSBOOK_ID
+);
+
+create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
+    GROUP_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_REVIS_3a3956c4 on CALENDAR_OBJECT_REVISIONS (
+    CALENDAR_HOME_RESOURCE_ID,
+    CALENDAR_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_REVIS_2643d556 on CALENDAR_OBJECT_REVISIONS (
+    CALENDAR_RESOURCE_ID,
+    RESOURCE_NAME
+);
+
+create index CALENDAR_OBJECT_REVIS_265c8acf on CALENDAR_OBJECT_REVISIONS (
+    CALENDAR_RESOURCE_ID,
+    REVISION
+);
+
+create index ADDRESSBOOK_OBJECT_RE_2bfcf757 on ADDRESSBOOK_OBJECT_REVISIONS (
+    ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID
+);
+
+create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
+    OWNER_HOME_RESOURCE_ID,
+    RESOURCE_NAME
+);
+
+create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
+    OWNER_HOME_RESOURCE_ID,
+    REVISION
+);
+
+create index NOTIFICATION_OBJECT_R_036a9cee on NOTIFICATION_OBJECT_REVISIONS (
+    NOTIFICATION_HOME_RESOURCE_ID,
+    REVISION
+);
+
+create index APN_SUBSCRIPTIONS_RES_9610d78e on APN_SUBSCRIPTIONS (
+    RESOURCE_KEY
+);
+
+create index IMIP_TOKENS_TOKEN_e94b918f on IMIP_TOKENS (
+    TOKEN
+);
+
+create index CALENDAR_OBJECT_SPLIT_af71dcda on CALENDAR_OBJECT_SPLITTER_WORK (
+    RESOURCE_ID
+);
+

Copied: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v25.sql (from rev 11870, CalendarServer/trunk/txdav/common/datastore/sql_schema/old/oracle-dialect/v25.sql)
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v25.sql	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/oracle-dialect/v25.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -0,0 +1,491 @@
+create sequence RESOURCE_ID_SEQ;
+create sequence INSTANCE_ID_SEQ;
+create sequence ATTACHMENT_ID_SEQ;
+create sequence REVISION_SEQ;
+create sequence WORKITEM_SEQ;
+create table NODE_INFO (
+    "HOSTNAME" nvarchar2(255),
+    "PID" integer not null,
+    "PORT" integer not null,
+    "TIME" timestamp default CURRENT_TIMESTAMP at time zone 'UTC' not null, 
+    primary key("HOSTNAME", "PORT")
+);
+
+create table NAMED_LOCK (
+    "LOCK_NAME" nvarchar2(255) primary key
+);
+
+create table CALENDAR_HOME (
+    "RESOURCE_ID" integer primary key,
+    "OWNER_UID" nvarchar2(255) unique,
+    "DATAVERSION" integer default 0 not null
+);
+
+create table CALENDAR (
+    "RESOURCE_ID" integer primary key
+);
+
+create table CALENDAR_HOME_METADATA (
+    "RESOURCE_ID" integer primary key references CALENDAR_HOME on delete cascade,
+    "QUOTA_USED_BYTES" integer default 0 not null,
+    "DEFAULT_EVENTS" integer default null references CALENDAR on delete set null,
+    "DEFAULT_TASKS" integer default null references CALENDAR on delete set null,
+    "ALARM_VEVENT_TIMED" nclob default null,
+    "ALARM_VEVENT_ALLDAY" nclob default null,
+    "ALARM_VTODO_TIMED" nclob default null,
+    "ALARM_VTODO_ALLDAY" nclob default null,
+    "AVAILABILITY" nclob default null,
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table CALENDAR_METADATA (
+    "RESOURCE_ID" integer primary key references CALENDAR on delete cascade,
+    "SUPPORTED_COMPONENTS" nvarchar2(255) default null,
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table NOTIFICATION_HOME (
+    "RESOURCE_ID" integer primary key,
+    "OWNER_UID" nvarchar2(255) unique
+);
+
+create table NOTIFICATION (
+    "RESOURCE_ID" integer primary key,
+    "NOTIFICATION_HOME_RESOURCE_ID" integer not null references NOTIFICATION_HOME,
+    "NOTIFICATION_UID" nvarchar2(255),
+    "XML_TYPE" nvarchar2(255),
+    "XML_DATA" nclob,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    unique("NOTIFICATION_UID", "NOTIFICATION_HOME_RESOURCE_ID")
+);
+
+create table CALENDAR_BIND (
+    "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+    "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+    "CALENDAR_RESOURCE_NAME" nvarchar2(255),
+    "BIND_MODE" integer not null,
+    "BIND_STATUS" integer not null,
+    "BIND_REVISION" integer default 0 not null,
+    "MESSAGE" nclob,
+    "TRANSP" integer default 0 not null,
+    "ALARM_VEVENT_TIMED" nclob default null,
+    "ALARM_VEVENT_ALLDAY" nclob default null,
+    "ALARM_VTODO_TIMED" nclob default null,
+    "ALARM_VTODO_ALLDAY" nclob default null,
+    "TIMEZONE" nclob default null, 
+    primary key("CALENDAR_HOME_RESOURCE_ID", "CALENDAR_RESOURCE_ID"), 
+    unique("CALENDAR_HOME_RESOURCE_ID", "CALENDAR_RESOURCE_NAME")
+);
+
+create table CALENDAR_BIND_MODE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('own', 0);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('read', 1);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('write', 2);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('direct', 3);
+create table CALENDAR_BIND_STATUS (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('invited', 0);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('accepted', 1);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('declined', 2);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('invalid', 3);
+create table CALENDAR_TRANSP (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_TRANSP (DESCRIPTION, ID) values ('opaque', 0);
+insert into CALENDAR_TRANSP (DESCRIPTION, ID) values ('transparent', 1);
+create table CALENDAR_OBJECT (
+    "RESOURCE_ID" integer primary key,
+    "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+    "RESOURCE_NAME" nvarchar2(255),
+    "ICALENDAR_TEXT" nclob,
+    "ICALENDAR_UID" nvarchar2(255),
+    "ICALENDAR_TYPE" nvarchar2(255),
+    "ATTACHMENTS_MODE" integer default 0 not null,
+    "DROPBOX_ID" nvarchar2(255),
+    "ORGANIZER" nvarchar2(255),
+    "RECURRANCE_MIN" date,
+    "RECURRANCE_MAX" date,
+    "ACCESS" integer default 0 not null,
+    "SCHEDULE_OBJECT" integer default 0,
+    "SCHEDULE_TAG" nvarchar2(36) default null,
+    "SCHEDULE_ETAGS" nclob default null,
+    "PRIVATE_COMMENTS" integer default 0 not null,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    unique("CALENDAR_RESOURCE_ID", "RESOURCE_NAME")
+);
+
+create table CALENDAR_OBJECT_ATTACHMENTS_MO (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_OBJECT_ATTACHMENTS_MO (DESCRIPTION, ID) values ('none', 0);
+insert into CALENDAR_OBJECT_ATTACHMENTS_MO (DESCRIPTION, ID) values ('read', 1);
+insert into CALENDAR_OBJECT_ATTACHMENTS_MO (DESCRIPTION, ID) values ('write', 2);
+create table CALENDAR_ACCESS_TYPE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(32) unique
+);
+
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('', 0);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('public', 1);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('private', 2);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('confidential', 3);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('restricted', 4);
+create table TIME_RANGE (
+    "INSTANCE_ID" integer primary key,
+    "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+    "CALENDAR_OBJECT_RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade,
+    "FLOATING" integer not null,
+    "START_DATE" timestamp not null,
+    "END_DATE" timestamp not null,
+    "FBTYPE" integer not null,
+    "TRANSPARENT" integer not null
+);
+
+create table FREE_BUSY_TYPE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('unknown', 0);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('free', 1);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy', 2);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy-unavailable', 3);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy-tentative', 4);
+create table TRANSPARENCY (
+    "TIME_RANGE_INSTANCE_ID" integer not null references TIME_RANGE on delete cascade,
+    "USER_ID" nvarchar2(255),
+    "TRANSPARENT" integer not null
+);
+
+create table ATTACHMENT (
+    "ATTACHMENT_ID" integer primary key,
+    "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+    "DROPBOX_ID" nvarchar2(255),
+    "CONTENT_TYPE" nvarchar2(255),
+    "SIZE" integer not null,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "PATH" nvarchar2(1024)
+);
+
+create table ATTACHMENT_CALENDAR_OBJECT (
+    "ATTACHMENT_ID" integer not null references ATTACHMENT on delete cascade,
+    "MANAGED_ID" nvarchar2(255),
+    "CALENDAR_OBJECT_RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade, 
+    primary key("ATTACHMENT_ID", "CALENDAR_OBJECT_RESOURCE_ID"), 
+    unique("MANAGED_ID", "CALENDAR_OBJECT_RESOURCE_ID")
+);
+
+create table RESOURCE_PROPERTY (
+    "RESOURCE_ID" integer not null,
+    "NAME" nvarchar2(255),
+    "VALUE" nclob,
+    "VIEWER_UID" nvarchar2(255), 
+    primary key("RESOURCE_ID", "NAME", "VIEWER_UID")
+);
+
+create table ADDRESSBOOK_HOME (
+    "RESOURCE_ID" integer primary key,
+    "ADDRESSBOOK_PROPERTY_STORE_ID" integer not null,
+    "OWNER_UID" nvarchar2(255) unique,
+    "DATAVERSION" integer default 0 not null
+);
+
+create table ADDRESSBOOK_HOME_METADATA (
+    "RESOURCE_ID" integer primary key references ADDRESSBOOK_HOME on delete cascade,
+    "QUOTA_USED_BYTES" integer default 0 not null,
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table SHARED_ADDRESSBOOK_BIND (
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "BIND_MODE" integer not null,
+    "BIND_STATUS" integer not null,
+    "BIND_REVISION" integer default 0 not null,
+    "MESSAGE" nclob, 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
+);
+
+create table ADDRESSBOOK_OBJECT (
+    "RESOURCE_ID" integer primary key,
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "RESOURCE_NAME" nvarchar2(255),
+    "VCARD_TEXT" nclob,
+    "VCARD_UID" nvarchar2(255),
+    "KIND" integer not null,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "RESOURCE_NAME"), 
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "VCARD_UID")
+);
+
+create table ADDRESSBOOK_OBJECT_KIND (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('person', 0);
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('group', 1);
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('resource', 2);
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('location', 3);
+create table ABO_MEMBERS (
+    "GROUP_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
+    "ADDRESSBOOK_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "MEMBER_ID" integer not null references ADDRESSBOOK_OBJECT, 
+    primary key("GROUP_ID", "MEMBER_ID")
+);
+
+create table ABO_FOREIGN_MEMBERS (
+    "GROUP_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
+    "ADDRESSBOOK_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "MEMBER_ADDRESS" nvarchar2(255), 
+    primary key("GROUP_ID", "MEMBER_ADDRESS")
+);
+
+create table SHARED_GROUP_BIND (
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+    "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
+    "BIND_MODE" integer not null,
+    "BIND_STATUS" integer not null,
+    "BIND_REVISION" integer default 0 not null,
+    "MESSAGE" nclob, 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
+);
+
+create table CALENDAR_OBJECT_REVISIONS (
+    "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+    "CALENDAR_RESOURCE_ID" integer references CALENDAR,
+    "CALENDAR_NAME" nvarchar2(255) default null,
+    "RESOURCE_NAME" nvarchar2(255),
+    "REVISION" integer not null,
+    "DELETED" integer not null
+);
+
+create table ADDRESSBOOK_OBJECT_REVISIONS (
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "ADDRESSBOOK_NAME" nvarchar2(255) default null,
+    "RESOURCE_NAME" nvarchar2(255),
+    "REVISION" integer not null,
+    "DELETED" integer not null
+);
+
+create table NOTIFICATION_OBJECT_REVISIONS (
+    "NOTIFICATION_HOME_RESOURCE_ID" integer not null references NOTIFICATION_HOME on delete cascade,
+    "RESOURCE_NAME" nvarchar2(255),
+    "REVISION" integer not null,
+    "DELETED" integer not null, 
+    unique("NOTIFICATION_HOME_RESOURCE_ID", "RESOURCE_NAME")
+);
+
+create table APN_SUBSCRIPTIONS (
+    "TOKEN" nvarchar2(255),
+    "RESOURCE_KEY" nvarchar2(255),
+    "MODIFIED" integer not null,
+    "SUBSCRIBER_GUID" nvarchar2(255),
+    "USER_AGENT" nvarchar2(255) default null,
+    "IP_ADDR" nvarchar2(255) default null, 
+    primary key("TOKEN", "RESOURCE_KEY")
+);
+
+create table IMIP_TOKENS (
+    "TOKEN" nvarchar2(255),
+    "ORGANIZER" nvarchar2(255),
+    "ATTENDEE" nvarchar2(255),
+    "ICALUID" nvarchar2(255),
+    "ACCESSED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    primary key("ORGANIZER", "ATTENDEE", "ICALUID")
+);
+
+create table IMIP_INVITATION_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "FROM_ADDR" nvarchar2(255),
+    "TO_ADDR" nvarchar2(255),
+    "ICALENDAR_TEXT" nclob
+);
+
+create table IMIP_POLLING_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table IMIP_REPLY_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "ORGANIZER" nvarchar2(255),
+    "ATTENDEE" nvarchar2(255),
+    "ICALENDAR_TEXT" nclob
+);
+
+create table PUSH_NOTIFICATION_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "PUSH_ID" nvarchar2(255)
+);
+
+create table GROUP_CACHER_POLLING_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table CALENDAR_OBJECT_SPLITTER_WORK (
+    "WORK_ID" integer primary key not null,
+    "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade
+);
+
+create table CALENDARSERVER (
+    "NAME" nvarchar2(255) primary key,
+    "VALUE" nvarchar2(255)
+);
+
+insert into CALENDARSERVER (NAME, VALUE) values ('VERSION', '25');
+insert into CALENDARSERVER (NAME, VALUE) values ('CALENDAR-DATAVERSION', '5');
+insert into CALENDARSERVER (NAME, VALUE) values ('ADDRESSBOOK-DATAVERSION', '2');
+create index CALENDAR_HOME_METADAT_3cb9049e on CALENDAR_HOME_METADATA (
+    DEFAULT_EVENTS
+);
+
+create index CALENDAR_HOME_METADAT_d55e5548 on CALENDAR_HOME_METADATA (
+    DEFAULT_TASKS
+);
+
+create index NOTIFICATION_NOTIFICA_f891f5f9 on NOTIFICATION (
+    NOTIFICATION_HOME_RESOURCE_ID
+);
+
+create index CALENDAR_BIND_RESOURC_e57964d4 on CALENDAR_BIND (
+    CALENDAR_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_CALEN_a9a453a9 on CALENDAR_OBJECT (
+    CALENDAR_RESOURCE_ID,
+    ICALENDAR_UID
+);
+
+create index CALENDAR_OBJECT_CALEN_96e83b73 on CALENDAR_OBJECT (
+    CALENDAR_RESOURCE_ID,
+    RECURRANCE_MAX
+);
+
+create index CALENDAR_OBJECT_ICALE_82e731d5 on CALENDAR_OBJECT (
+    ICALENDAR_UID
+);
+
+create index CALENDAR_OBJECT_DROPB_de041d80 on CALENDAR_OBJECT (
+    DROPBOX_ID
+);
+
+create index TIME_RANGE_CALENDAR_R_beb6e7eb on TIME_RANGE (
+    CALENDAR_RESOURCE_ID
+);
+
+create index TIME_RANGE_CALENDAR_O_acf37bd1 on TIME_RANGE (
+    CALENDAR_OBJECT_RESOURCE_ID
+);
+
+create index TRANSPARENCY_TIME_RAN_5f34467f on TRANSPARENCY (
+    TIME_RANGE_INSTANCE_ID
+);
+
+create index ATTACHMENT_CALENDAR_H_0078845c on ATTACHMENT (
+    CALENDAR_HOME_RESOURCE_ID
+);
+
+create index ATTACHMENT_CALENDAR_O_81508484 on ATTACHMENT_CALENDAR_OBJECT (
+    CALENDAR_OBJECT_RESOURCE_ID
+);
+
+create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
+    OWNER_HOME_RESOURCE_ID
+);
+
+create index ABO_MEMBERS_ADDRESSBO_4effa879 on ABO_MEMBERS (
+    ADDRESSBOOK_ID
+);
+
+create index ABO_MEMBERS_MEMBER_ID_8d66adcf on ABO_MEMBERS (
+    MEMBER_ID
+);
+
+create index ABO_FOREIGN_MEMBERS_A_1fd2c5e9 on ABO_FOREIGN_MEMBERS (
+    ADDRESSBOOK_ID
+);
+
+create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
+    GROUP_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_REVIS_3a3956c4 on CALENDAR_OBJECT_REVISIONS (
+    CALENDAR_HOME_RESOURCE_ID,
+    CALENDAR_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_REVIS_2643d556 on CALENDAR_OBJECT_REVISIONS (
+    CALENDAR_RESOURCE_ID,
+    RESOURCE_NAME
+);
+
+create index CALENDAR_OBJECT_REVIS_265c8acf on CALENDAR_OBJECT_REVISIONS (
+    CALENDAR_RESOURCE_ID,
+    REVISION
+);
+
+create index ADDRESSBOOK_OBJECT_RE_2bfcf757 on ADDRESSBOOK_OBJECT_REVISIONS (
+    ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID
+);
+
+create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
+    OWNER_HOME_RESOURCE_ID,
+    RESOURCE_NAME
+);
+
+create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
+    OWNER_HOME_RESOURCE_ID,
+    REVISION
+);
+
+create index NOTIFICATION_OBJECT_R_036a9cee on NOTIFICATION_OBJECT_REVISIONS (
+    NOTIFICATION_HOME_RESOURCE_ID,
+    REVISION
+);
+
+create index APN_SUBSCRIPTIONS_RES_9610d78e on APN_SUBSCRIPTIONS (
+    RESOURCE_KEY
+);
+
+create index IMIP_TOKENS_TOKEN_e94b918f on IMIP_TOKENS (
+    TOKEN
+);
+
+create index CALENDAR_OBJECT_SPLIT_af71dcda on CALENDAR_OBJECT_SPLITTER_WORK (
+    RESOURCE_ID
+);
+

Copied: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/postgres-dialect/v24.sql (from rev 11870, CalendarServer/trunk/txdav/common/datastore/sql_schema/old/postgres-dialect/v24.sql)
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/postgres-dialect/v24.sql	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/postgres-dialect/v24.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -0,0 +1,700 @@
+-- -*- test-case-name: txdav.caldav.datastore.test.test_sql,txdav.carddav.datastore.test.test_sql -*-
+
+----
+-- Copyright (c) 2010-2013 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+
+-----------------
+-- Resource ID --
+-----------------
+
+create sequence RESOURCE_ID_SEQ;
+
+
+-------------------------
+-- Cluster Bookkeeping --
+-------------------------
+
+-- Information about a process connected to this database.
+
+-- Note that this must match the node info schema in twext.enterprise.queue.
+create table NODE_INFO (
+  HOSTNAME  varchar(255) not null,
+  PID       integer      not null,
+  PORT      integer      not null,
+  TIME      timestamp    not null default timezone('UTC', CURRENT_TIMESTAMP),
+
+  primary key (HOSTNAME, PORT)
+);
+
+-- Unique named locks.  This table should always be empty, but rows are
+-- temporarily created in order to prevent undesirable concurrency.
+create table NAMED_LOCK (
+    LOCK_NAME varchar(255) primary key
+);
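+
+-- For illustration only (not part of the schema): one way to hold such a
+-- lock is to insert its name inside a transaction; a concurrent transaction
+-- inserting the same LOCK_NAME then blocks on the primary key until the
+-- first one ends, and rolling back releases the lock while keeping the
+-- table empty:
+--
+--   begin;
+--   insert into NAMED_LOCK (LOCK_NAME) values ('some-lock-name');
+--   -- ...perform the serialized work...
+--   rollback;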
+
+
+-------------------
+-- Calendar Home --
+-------------------
+
+create table CALENDAR_HOME (
+  RESOURCE_ID      integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  OWNER_UID        varchar(255) not null unique,                                 -- implicit index
+  DATAVERSION      integer      default 0 not null
+);
+
+--------------
+-- Calendar --
+--------------
+
+create table CALENDAR (
+  RESOURCE_ID integer   primary key default nextval('RESOURCE_ID_SEQ') -- implicit index
+);
+
+----------------------------
+-- Calendar Home Metadata --
+----------------------------
+
+create table CALENDAR_HOME_METADATA (
+  RESOURCE_ID              integer     primary key references CALENDAR_HOME on delete cascade, -- implicit index
+  QUOTA_USED_BYTES         integer     default 0 not null,
+  DEFAULT_EVENTS           integer     default null references CALENDAR on delete set null,
+  DEFAULT_TASKS            integer     default null references CALENDAR on delete set null,
+  ALARM_VEVENT_TIMED       text        default null,
+  ALARM_VEVENT_ALLDAY      text        default null,
+  ALARM_VTODO_TIMED        text        default null,
+  ALARM_VTODO_ALLDAY       text        default null,
+  AVAILABILITY             text        default null,
+  CREATED                  timestamp   default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                 timestamp   default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+create index CALENDAR_HOME_METADATA_DEFAULT_EVENTS on
+	CALENDAR_HOME_METADATA(DEFAULT_EVENTS);
+create index CALENDAR_HOME_METADATA_DEFAULT_TASKS on
+	CALENDAR_HOME_METADATA(DEFAULT_TASKS);
+
+-----------------------
+-- Calendar Metadata --
+-----------------------
+
+create table CALENDAR_METADATA (
+  RESOURCE_ID           integer      primary key references CALENDAR on delete cascade, -- implicit index
+  SUPPORTED_COMPONENTS  varchar(255) default null,
+  CREATED               timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED              timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+---------------------------
+-- Sharing Notifications --
+---------------------------
+
+create table NOTIFICATION_HOME (
+  RESOURCE_ID integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  OWNER_UID   varchar(255) not null unique                                 -- implicit index
+);
+
+create table NOTIFICATION (
+  RESOURCE_ID                   integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  NOTIFICATION_HOME_RESOURCE_ID integer      not null references NOTIFICATION_HOME,
+  NOTIFICATION_UID              varchar(255) not null,
+  XML_TYPE                      varchar(255) not null,
+  XML_DATA                      text         not null,
+  MD5                           char(32)     not null,
+  CREATED                       timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                      timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+
+  unique(NOTIFICATION_UID, NOTIFICATION_HOME_RESOURCE_ID) -- implicit index
+);
+
+create index NOTIFICATION_NOTIFICATION_HOME_RESOURCE_ID on
+	NOTIFICATION(NOTIFICATION_HOME_RESOURCE_ID);
+
+
+-------------------
+-- Calendar Bind --
+-------------------
+
+-- Joins CALENDAR_HOME and CALENDAR
+
+create table CALENDAR_BIND (
+  CALENDAR_HOME_RESOURCE_ID integer      not null references CALENDAR_HOME,
+  CALENDAR_RESOURCE_ID      integer      not null references CALENDAR on delete cascade,
+  CALENDAR_RESOURCE_NAME    varchar(255) not null,
+  BIND_MODE                 integer      not null, -- enum CALENDAR_BIND_MODE
+  BIND_STATUS               integer      not null, -- enum CALENDAR_BIND_STATUS
+  BIND_REVISION             integer      default 0 not null,
+  MESSAGE                   text,
+  TRANSP                    integer      default 0 not null, -- enum CALENDAR_TRANSP
+  ALARM_VEVENT_TIMED        text         default null,
+  ALARM_VEVENT_ALLDAY       text         default null,
+  ALARM_VTODO_TIMED         text         default null,
+  ALARM_VTODO_ALLDAY        text         default null,
+  TIMEZONE                  text         default null,
+
+  primary key(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_ID), -- implicit index
+  unique(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_NAME)     -- implicit index
+);
+
+create index CALENDAR_BIND_RESOURCE_ID on
+	CALENDAR_BIND(CALENDAR_RESOURCE_ID);
+
+-- Enumeration of calendar bind modes
+
+create table CALENDAR_BIND_MODE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_BIND_MODE values (0, 'own'  );
+insert into CALENDAR_BIND_MODE values (1, 'read' );
+insert into CALENDAR_BIND_MODE values (2, 'write');
+insert into CALENDAR_BIND_MODE values (3, 'direct');
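+
+-- For illustration only (not part of the schema): a bind row resolves to its
+-- mode description with a join such as
+--
+--   select CB.CALENDAR_RESOURCE_NAME, BM.DESCRIPTION
+--   from CALENDAR_BIND CB
+--   join CALENDAR_BIND_MODE BM on BM.ID = CB.BIND_MODE
+--   where CB.CALENDAR_HOME_RESOURCE_ID = 1;  -- hypothetical home id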
+
+-- Enumeration of statuses
+
+create table CALENDAR_BIND_STATUS (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_BIND_STATUS values (0, 'invited' );
+insert into CALENDAR_BIND_STATUS values (1, 'accepted');
+insert into CALENDAR_BIND_STATUS values (2, 'declined');
+insert into CALENDAR_BIND_STATUS values (3, 'invalid');
+
+
+-- Enumeration of transparency
+
+create table CALENDAR_TRANSP (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_TRANSP values (0, 'opaque' );
+insert into CALENDAR_TRANSP values (1, 'transparent');
+
+
+---------------------
+-- Calendar Object --
+---------------------
+
+create table CALENDAR_OBJECT (
+  RESOURCE_ID          integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  CALENDAR_RESOURCE_ID integer      not null references CALENDAR on delete cascade,
+  RESOURCE_NAME        varchar(255) not null,
+  ICALENDAR_TEXT       text         not null,
+  ICALENDAR_UID        varchar(255) not null,
+  ICALENDAR_TYPE       varchar(255) not null,
+  ATTACHMENTS_MODE     integer      default 0 not null, -- enum CALENDAR_OBJECT_ATTACHMENTS_MODE
+  DROPBOX_ID           varchar(255),
+  ORGANIZER            varchar(255),
+  RECURRANCE_MIN       date,        -- minimum date that recurrences have been expanded to.
+  RECURRANCE_MAX       date,        -- maximum date that recurrences have been expanded to.
+  ACCESS               integer      default 0 not null,
+  SCHEDULE_OBJECT      boolean      default false,
+  SCHEDULE_TAG         varchar(36)  default null,
+  SCHEDULE_ETAGS       text         default null,
+  PRIVATE_COMMENTS     boolean      default false not null,
+  MD5                  char(32)     not null,
+  CREATED              timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED             timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+
+  unique (CALENDAR_RESOURCE_ID, RESOURCE_NAME) -- implicit index
+
+  -- since the 'inbox' is a 'calendar resource' for the purpose of storing
+  -- calendar objects, this constraint has to be selectively enforced by the
+  -- application layer.
+
+  -- unique(CALENDAR_RESOURCE_ID, ICALENDAR_UID)
+);
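+
+-- For illustration only (not part of the schema): the application-layer
+-- uniqueness check described above amounts to a probe along these lines
+-- before inserting into a non-inbox calendar:
+--
+--   select RESOURCE_ID from CALENDAR_OBJECT
+--   where CALENDAR_RESOURCE_ID = 1        -- hypothetical calendar id
+--     and ICALENDAR_UID = 'ABC-123';      -- hypothetical iCalendar UID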
+
+create index CALENDAR_OBJECT_CALENDAR_RESOURCE_ID_AND_ICALENDAR_UID on
+  CALENDAR_OBJECT(CALENDAR_RESOURCE_ID, ICALENDAR_UID);
+
+create index CALENDAR_OBJECT_CALENDAR_RESOURCE_ID_RECURRANCE_MAX on
+  CALENDAR_OBJECT(CALENDAR_RESOURCE_ID, RECURRANCE_MAX);
+
+create index CALENDAR_OBJECT_ICALENDAR_UID on
+  CALENDAR_OBJECT(ICALENDAR_UID);
+
+create index CALENDAR_OBJECT_DROPBOX_ID on
+  CALENDAR_OBJECT(DROPBOX_ID);
+
+-- Enumeration of attachment modes
+
+create table CALENDAR_OBJECT_ATTACHMENTS_MODE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_OBJECT_ATTACHMENTS_MODE values (0, 'none' );
+insert into CALENDAR_OBJECT_ATTACHMENTS_MODE values (1, 'read' );
+insert into CALENDAR_OBJECT_ATTACHMENTS_MODE values (2, 'write');
+
+
+-- Enumeration of calendar access types
+
+create table CALENDAR_ACCESS_TYPE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(32) not null unique
+);
+
+insert into CALENDAR_ACCESS_TYPE values (0, ''             );
+insert into CALENDAR_ACCESS_TYPE values (1, 'public'       );
+insert into CALENDAR_ACCESS_TYPE values (2, 'private'      );
+insert into CALENDAR_ACCESS_TYPE values (3, 'confidential' );
+insert into CALENDAR_ACCESS_TYPE values (4, 'restricted'   );
+
+
+-----------------
+-- Instance ID --
+-----------------
+
+create sequence INSTANCE_ID_SEQ;
+
+
+----------------
+-- Time Range --
+----------------
+
+create table TIME_RANGE (
+  INSTANCE_ID                 integer        primary key default nextval('INSTANCE_ID_SEQ'), -- implicit index
+  CALENDAR_RESOURCE_ID        integer        not null references CALENDAR on delete cascade,
+  CALENDAR_OBJECT_RESOURCE_ID integer        not null references CALENDAR_OBJECT on delete cascade,
+  FLOATING                    boolean        not null,
+  START_DATE                  timestamp      not null,
+  END_DATE                    timestamp      not null,
+  FBTYPE                      integer        not null,
+  TRANSPARENT                 boolean        not null
+);
+
+create index TIME_RANGE_CALENDAR_RESOURCE_ID on
+  TIME_RANGE(CALENDAR_RESOURCE_ID);
+create index TIME_RANGE_CALENDAR_OBJECT_RESOURCE_ID on
+  TIME_RANGE(CALENDAR_OBJECT_RESOURCE_ID);
+
+
+-- Enumeration of free/busy types
+
+create table FREE_BUSY_TYPE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into FREE_BUSY_TYPE values (0, 'unknown'         );
+insert into FREE_BUSY_TYPE values (1, 'free'            );
+insert into FREE_BUSY_TYPE values (2, 'busy'            );
+insert into FREE_BUSY_TYPE values (3, 'busy-unavailable');
+insert into FREE_BUSY_TYPE values (4, 'busy-tentative'  );
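+
+-- For illustration only (not part of the schema): free-busy lookups over
+-- TIME_RANGE use the usual interval-overlap predicate; instances overlapping
+-- a query window [S, E) can be fetched with
+--
+--   select TR.START_DATE, TR.END_DATE, FB.DESCRIPTION
+--   from TIME_RANGE TR
+--   join FREE_BUSY_TYPE FB on FB.ID = TR.FBTYPE
+--   where TR.CALENDAR_RESOURCE_ID = 1                      -- hypothetical id
+--     and TR.START_DATE < timestamp '2013-11-02 00:00:00'  -- window end E
+--     and TR.END_DATE   > timestamp '2013-11-01 00:00:00'; -- window start S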
+
+
+------------------
+-- Transparency --
+------------------
+
+create table TRANSPARENCY (
+  TIME_RANGE_INSTANCE_ID      integer      not null references TIME_RANGE on delete cascade,
+  USER_ID                     varchar(255) not null,
+  TRANSPARENT                 boolean      not null
+);
+
+create index TRANSPARENCY_TIME_RANGE_INSTANCE_ID on
+  TRANSPARENCY(TIME_RANGE_INSTANCE_ID);
+
+
+----------------
+-- Attachment --
+----------------
+
+create sequence ATTACHMENT_ID_SEQ;
+
+create table ATTACHMENT (
+  ATTACHMENT_ID               integer           primary key default nextval('ATTACHMENT_ID_SEQ'), -- implicit index
+  CALENDAR_HOME_RESOURCE_ID   integer           not null references CALENDAR_HOME,
+  DROPBOX_ID                  varchar(255),
+  CONTENT_TYPE                varchar(255)      not null,
+  SIZE                        integer           not null,
+  MD5                         char(32)          not null,
+  CREATED                     timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                    timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+  PATH                        varchar(1024)     not null
+);
+
+create index ATTACHMENT_CALENDAR_HOME_RESOURCE_ID on
+  ATTACHMENT(CALENDAR_HOME_RESOURCE_ID);
+
+-- Many-to-many relationship between attachments and calendar objects
+create table ATTACHMENT_CALENDAR_OBJECT (
+  ATTACHMENT_ID                  integer      not null references ATTACHMENT on delete cascade,
+  MANAGED_ID                     varchar(255) not null,
+  CALENDAR_OBJECT_RESOURCE_ID    integer      not null references CALENDAR_OBJECT on delete cascade,
+
+  primary key (ATTACHMENT_ID, CALENDAR_OBJECT_RESOURCE_ID), -- implicit index
+  unique (MANAGED_ID, CALENDAR_OBJECT_RESOURCE_ID) -- implicit index
+);
+
+create index ATTACHMENT_CALENDAR_OBJECT_CALENDAR_OBJECT_RESOURCE_ID on
+	ATTACHMENT_CALENDAR_OBJECT(CALENDAR_OBJECT_RESOURCE_ID);
+
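+-- For illustration only (not part of the schema): the join table above lets
+-- one attachment be referenced from several calendar objects, each under its
+-- own MANAGED_ID, e.g.
+--
+--   select A.PATH, ACO.MANAGED_ID
+--   from ATTACHMENT A
+--   join ATTACHMENT_CALENDAR_OBJECT ACO
+--     on ACO.ATTACHMENT_ID = A.ATTACHMENT_ID
+--   where ACO.CALENDAR_OBJECT_RESOURCE_ID = 1;  -- hypothetical resource id
+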
+-----------------------
+-- Resource Property --
+-----------------------
+
+create table RESOURCE_PROPERTY (
+  RESOURCE_ID integer      not null, -- foreign key: *.RESOURCE_ID
+  NAME        varchar(255) not null,
+  VALUE       text         not null, -- FIXME: xml?
+  VIEWER_UID  varchar(255),
+
+  primary key (RESOURCE_ID, NAME, VIEWER_UID) -- implicit index
+);
+
+
+----------------------
+-- AddressBook Home --
+----------------------
+
+create table ADDRESSBOOK_HOME (
+  RESOURCE_ID      				integer			primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  ADDRESSBOOK_PROPERTY_STORE_ID	integer      	default nextval('RESOURCE_ID_SEQ') not null, 	-- implicit index
+  OWNER_UID        				varchar(255) 	not null unique,                                -- implicit index
+  DATAVERSION      				integer      	default 0 not null
+);
+
+
+-------------------------------
+-- AddressBook Home Metadata --
+-------------------------------
+
+create table ADDRESSBOOK_HOME_METADATA (
+  RESOURCE_ID      integer      primary key references ADDRESSBOOK_HOME on delete cascade, -- implicit index
+  QUOTA_USED_BYTES integer      default 0 not null,
+  CREATED          timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED         timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+-----------------------------
+-- Shared AddressBook Bind --
+-----------------------------
+
+-- Joins sharee ADDRESSBOOK_HOME and owner ADDRESSBOOK_HOME
+
+create table SHARED_ADDRESSBOOK_BIND (
+  ADDRESSBOOK_HOME_RESOURCE_ID			integer			not null references ADDRESSBOOK_HOME,
+  OWNER_ADDRESSBOOK_HOME_RESOURCE_ID    integer      	not null references ADDRESSBOOK_HOME on delete cascade,
+  ADDRESSBOOK_RESOURCE_NAME    			varchar(255) 	not null,
+  BIND_MODE                    			integer      	not null,	-- enum CALENDAR_BIND_MODE
+  BIND_STATUS                  			integer      	not null,	-- enum CALENDAR_BIND_STATUS
+  BIND_REVISION				   			integer      	default 0 not null,
+  MESSAGE                      			text,                  		-- FIXME: xml?
+
+  primary key (ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_ADDRESSBOOK_HOME_RESOURCE_ID), -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, ADDRESSBOOK_RESOURCE_NAME)     -- implicit index
+);
+
+create index SHARED_ADDRESSBOOK_BIND_RESOURCE_ID on
+  SHARED_ADDRESSBOOK_BIND(OWNER_ADDRESSBOOK_HOME_RESOURCE_ID);
+
+
+------------------------
+-- AddressBook Object --
+------------------------
+
+create table ADDRESSBOOK_OBJECT (
+  RESOURCE_ID             		integer   		primary key default nextval('RESOURCE_ID_SEQ'),    -- implicit index
+  ADDRESSBOOK_HOME_RESOURCE_ID 	integer      	not null references ADDRESSBOOK_HOME on delete cascade,
+  RESOURCE_NAME           		varchar(255) 	not null,
+  VCARD_TEXT              		text         	not null,
+  VCARD_UID               		varchar(255) 	not null,
+  KIND 			  		  		integer      	not null,  -- enum ADDRESSBOOK_OBJECT_KIND
+  MD5                     		char(32)     	not null,
+  CREATED                 		timestamp    	default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                		timestamp    	default timezone('UTC', CURRENT_TIMESTAMP),
+
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, RESOURCE_NAME), -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, VCARD_UID)      -- implicit index
+);
+
+
+-----------------------------
+-- AddressBook Object kind --
+-----------------------------
+
+create table ADDRESSBOOK_OBJECT_KIND (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into ADDRESSBOOK_OBJECT_KIND values (0, 'person');
+insert into ADDRESSBOOK_OBJECT_KIND values (1, 'group' );
+insert into ADDRESSBOOK_OBJECT_KIND values (2, 'resource');
+insert into ADDRESSBOOK_OBJECT_KIND values (3, 'location');
+
+
+---------------------------------
+-- Address Book Object Members --
+---------------------------------
+
+create table ABO_MEMBERS (
+    GROUP_ID              integer      not null references ADDRESSBOOK_OBJECT on delete cascade, -- AddressBook Object's (kind=='group') RESOURCE_ID
+    ADDRESSBOOK_ID        integer      not null references ADDRESSBOOK_HOME on delete cascade,
+    MEMBER_ID             integer      not null references ADDRESSBOOK_OBJECT,                   -- member AddressBook Object's RESOURCE_ID
+
+    primary key (GROUP_ID, MEMBER_ID) -- implicit index
+);
+
+create index ABO_MEMBERS_ADDRESSBOOK_ID on
+	ABO_MEMBERS(ADDRESSBOOK_ID);
+create index ABO_MEMBERS_MEMBER_ID on
+	ABO_MEMBERS(MEMBER_ID);
+
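+-- For illustration only (not part of the schema): since a member may itself
+-- be a group, expanding full membership is naturally recursive; sketched for
+-- a hypothetical group RESOURCE_ID of 1:
+--
+--   with recursive MEMBERS(MEMBER_ID) as (
+--     select MEMBER_ID from ABO_MEMBERS where GROUP_ID = 1
+--     union
+--     select AM.MEMBER_ID
+--     from ABO_MEMBERS AM join MEMBERS M on AM.GROUP_ID = M.MEMBER_ID
+--   )
+--   select MEMBER_ID from MEMBERS;
+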
+------------------------------------------
+-- Address Book Object Foreign Members  --
+------------------------------------------
+
+create table ABO_FOREIGN_MEMBERS (
+    GROUP_ID              integer      not null references ADDRESSBOOK_OBJECT on delete cascade, -- AddressBook Object's (kind=='group') RESOURCE_ID
+    ADDRESSBOOK_ID        integer      not null references ADDRESSBOOK_HOME on delete cascade,
+    MEMBER_ADDRESS        varchar(255) not null,                                                 -- member AddressBook Object's 'calendar' address
+
+    primary key (GROUP_ID, MEMBER_ADDRESS) -- implicit index
+);
+
+create index ABO_FOREIGN_MEMBERS_ADDRESSBOOK_ID on
+	ABO_FOREIGN_MEMBERS(ADDRESSBOOK_ID);
+
+-----------------------
+-- Shared Group Bind --
+-----------------------
+
+-- Joins ADDRESSBOOK_HOME and ADDRESSBOOK_OBJECT (kind == group)
+
+create table SHARED_GROUP_BIND (	
+  ADDRESSBOOK_HOME_RESOURCE_ID 		integer      not null references ADDRESSBOOK_HOME,
+  GROUP_RESOURCE_ID      			integer      not null references ADDRESSBOOK_OBJECT on delete cascade,
+  GROUP_ADDRESSBOOK_RESOURCE_NAME	varchar(255) not null,
+  BIND_MODE                    		integer      not null, -- enum CALENDAR_BIND_MODE
+  BIND_STATUS                  		integer      not null, -- enum CALENDAR_BIND_STATUS
+  BIND_REVISION				   		integer      default 0 not null,
+  MESSAGE                      		text,                  -- FIXME: xml?
+
+  primary key (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_RESOURCE_ID), -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_ADDRESSBOOK_RESOURCE_NAME)     -- implicit index
+);
+
+create index SHARED_GROUP_BIND_RESOURCE_ID on
+  SHARED_GROUP_BIND(GROUP_RESOURCE_ID);
+
+
+---------------
+-- Revisions --
+---------------
+
+create sequence REVISION_SEQ;
+
+
+-------------------------------
+-- Calendar Object Revisions --
+-------------------------------
+
+create table CALENDAR_OBJECT_REVISIONS (
+  CALENDAR_HOME_RESOURCE_ID integer      not null references CALENDAR_HOME,
+  CALENDAR_RESOURCE_ID      integer      references CALENDAR,
+  CALENDAR_NAME             varchar(255) default null,
+  RESOURCE_NAME             varchar(255),
+  REVISION                  integer      default nextval('REVISION_SEQ') not null,
+  DELETED                   boolean      not null
+);
+
+create index CALENDAR_OBJECT_REVISIONS_HOME_RESOURCE_ID_CALENDAR_RESOURCE_ID
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_ID);
+
+create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_RESOURCE_NAME
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, RESOURCE_NAME);
+
+create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_REVISION
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, REVISION);
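+
+-- For illustration only (not part of the schema): REVISION is assigned from
+-- REVISION_SEQ and so increases monotonically, which makes a sync-style
+-- "what changed since token N" query a simple range scan, e.g.
+--
+--   select RESOURCE_NAME, DELETED from CALENDAR_OBJECT_REVISIONS
+--   where CALENDAR_RESOURCE_ID = 1  -- hypothetical calendar id
+--     and REVISION > 42;            -- hypothetical sync token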
+
+
+----------------------------------
+-- AddressBook Object Revisions --
+----------------------------------
+
+create table ADDRESSBOOK_OBJECT_REVISIONS (
+  ADDRESSBOOK_HOME_RESOURCE_ID 			integer			not null references ADDRESSBOOK_HOME,
+  OWNER_ADDRESSBOOK_HOME_RESOURCE_ID    integer     	references ADDRESSBOOK_HOME,
+  ADDRESSBOOK_NAME             			varchar(255) 	default null,
+  RESOURCE_NAME                			varchar(255),
+  REVISION                     			integer     	default nextval('REVISION_SEQ') not null,
+  DELETED                      			boolean      	not null
+);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_HOME_RESOURCE_ID_OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+  on ADDRESSBOOK_OBJECT_REVISIONS(ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_ADDRESSBOOK_HOME_RESOURCE_ID);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_RESOURCE_NAME
+  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_ADDRESSBOOK_HOME_RESOURCE_ID, RESOURCE_NAME);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_REVISION
+  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_ADDRESSBOOK_HOME_RESOURCE_ID, REVISION);
+
+
+-----------------------------------
+-- Notification Object Revisions --
+-----------------------------------
+
+create table NOTIFICATION_OBJECT_REVISIONS (
+  NOTIFICATION_HOME_RESOURCE_ID integer      not null references NOTIFICATION_HOME on delete cascade,
+  RESOURCE_NAME                 varchar(255),
+  REVISION                      integer      default nextval('REVISION_SEQ') not null,
+  DELETED                       boolean      not null,
+
+  unique(NOTIFICATION_HOME_RESOURCE_ID, RESOURCE_NAME) -- implicit index
+);
+
+create index NOTIFICATION_OBJECT_REVISIONS_RESOURCE_ID_REVISION
+  on NOTIFICATION_OBJECT_REVISIONS(NOTIFICATION_HOME_RESOURCE_ID, REVISION);
+
+
+-------------------------------------------
+-- Apple Push Notification Subscriptions --
+-------------------------------------------
+
+create table APN_SUBSCRIPTIONS (
+  TOKEN                         varchar(255) not null,
+  RESOURCE_KEY                  varchar(255) not null,
+  MODIFIED                      integer      not null,
+  SUBSCRIBER_GUID               varchar(255) not null,
+  USER_AGENT                    varchar(255) default null,
+  IP_ADDR                       varchar(255) default null,
+
+  primary key (TOKEN, RESOURCE_KEY) -- implicit index
+);
+
+create index APN_SUBSCRIPTIONS_RESOURCE_KEY
+   on APN_SUBSCRIPTIONS(RESOURCE_KEY);
+
+   
+-----------------
+-- IMIP Tokens --
+-----------------
+
+create table IMIP_TOKENS (
+  TOKEN                         varchar(255) not null,
+  ORGANIZER                     varchar(255) not null,
+  ATTENDEE                      varchar(255) not null,
+  ICALUID                       varchar(255) not null,
+  ACCESSED                      timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+
+  primary key (ORGANIZER, ATTENDEE, ICALUID) -- implicit index
+);
+
+create index IMIP_TOKENS_TOKEN
+   on IMIP_TOKENS(TOKEN);
+
+   
+----------------
+-- Work Items --
+----------------
+
+create sequence WORKITEM_SEQ;
+
+
+--------------------------
+-- IMIP Invitation Work --
+--------------------------
+
+create table IMIP_INVITATION_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  FROM_ADDR                     varchar(255) not null,
+  TO_ADDR                       varchar(255) not null,
+  ICALENDAR_TEXT                text         not null
+);
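+
+-- For illustration only (not part of the schema): enqueuing a work item is
+-- just an insert; WORK_ID and NOT_BEFORE take their defaults, so the row is
+-- immediately eligible for pickup (addresses below are hypothetical):
+--
+--   insert into IMIP_INVITATION_WORK (FROM_ADDR, TO_ADDR, ICALENDAR_TEXT)
+--   values ('server@example.com', 'attendee@example.com',
+--           'BEGIN:VCALENDAR ... END:VCALENDAR');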
+
+
+-----------------------
+-- IMIP Polling Work --
+-----------------------
+
+create table IMIP_POLLING_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+---------------------
+-- IMIP Reply Work --
+---------------------
+
+create table IMIP_REPLY_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  ORGANIZER                     varchar(255) not null,
+  ATTENDEE                      varchar(255) not null,
+  ICALENDAR_TEXT                text         not null
+);
+
+
+------------------------
+-- Push Notifications --
+------------------------
+
+create table PUSH_NOTIFICATION_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  PUSH_ID                       varchar(255) not null
+);
+
+-----------------
+-- GroupCacher --
+-----------------
+
+create table GROUP_CACHER_POLLING_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+--------------------------
+-- Object Splitter Work --
+--------------------------
+
+create table CALENDAR_OBJECT_SPLITTER_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  RESOURCE_ID                   integer      not null references CALENDAR_OBJECT on delete cascade
+);
+
+create index CALENDAR_OBJECT_SPLITTER_WORK_RESOURCE_ID on
+	CALENDAR_OBJECT_SPLITTER_WORK(RESOURCE_ID);
+
+--------------------
+-- Schema Version --
+--------------------
+
+create table CALENDARSERVER (
+  NAME                          varchar(255) primary key, -- implicit index
+  VALUE                         varchar(255)
+);
+
+insert into CALENDARSERVER values ('VERSION', '24');
+insert into CALENDARSERVER values ('CALENDAR-DATAVERSION', '5');
+insert into CALENDARSERVER values ('ADDRESSBOOK-DATAVERSION', '2');

Copied: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/postgres-dialect/v25.sql (from rev 11870, CalendarServer/trunk/txdav/common/datastore/sql_schema/old/postgres-dialect/v25.sql)
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/postgres-dialect/v25.sql	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/old/postgres-dialect/v25.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -0,0 +1,700 @@
+-- -*- test-case-name: txdav.caldav.datastore.test.test_sql,txdav.carddav.datastore.test.test_sql -*-
+
+----
+-- Copyright (c) 2010-2013 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+
+-----------------
+-- Resource ID --
+-----------------
+
+create sequence RESOURCE_ID_SEQ;
+
+
+-------------------------
+-- Cluster Bookkeeping --
+-------------------------
+
+-- Information about a process connected to this database.
+
+-- Note that this must match the node info schema in twext.enterprise.queue.
+create table NODE_INFO (
+  HOSTNAME  varchar(255) not null,
+  PID       integer      not null,
+  PORT      integer      not null,
+  TIME      timestamp    not null default timezone('UTC', CURRENT_TIMESTAMP),
+
+  primary key (HOSTNAME, PORT)
+);
+
+-- Unique named locks.  This table should always be empty, but rows are
+-- temporarily created in order to prevent undesirable concurrency.
+create table NAMED_LOCK (
+    LOCK_NAME varchar(255) primary key
+);
+
+
+-------------------
+-- Calendar Home --
+-------------------
+
+create table CALENDAR_HOME (
+  RESOURCE_ID      integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  OWNER_UID        varchar(255) not null unique,                                 -- implicit index
+  DATAVERSION      integer      default 0 not null
+);
+
+--------------
+-- Calendar --
+--------------
+
+create table CALENDAR (
+  RESOURCE_ID integer   primary key default nextval('RESOURCE_ID_SEQ') -- implicit index
+);
+
+----------------------------
+-- Calendar Home Metadata --
+----------------------------
+
+create table CALENDAR_HOME_METADATA (
+  RESOURCE_ID              integer     primary key references CALENDAR_HOME on delete cascade, -- implicit index
+  QUOTA_USED_BYTES         integer     default 0 not null,
+  DEFAULT_EVENTS           integer     default null references CALENDAR on delete set null,
+  DEFAULT_TASKS            integer     default null references CALENDAR on delete set null,
+  ALARM_VEVENT_TIMED       text        default null,
+  ALARM_VEVENT_ALLDAY      text        default null,
+  ALARM_VTODO_TIMED        text        default null,
+  ALARM_VTODO_ALLDAY       text        default null,
+  AVAILABILITY             text        default null,
+  CREATED                  timestamp   default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                 timestamp   default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+create index CALENDAR_HOME_METADATA_DEFAULT_EVENTS on
+	CALENDAR_HOME_METADATA(DEFAULT_EVENTS);
+create index CALENDAR_HOME_METADATA_DEFAULT_TASKS on
+	CALENDAR_HOME_METADATA(DEFAULT_TASKS);
+
+-----------------------
+-- Calendar Metadata --
+-----------------------
+
+create table CALENDAR_METADATA (
+  RESOURCE_ID           integer      primary key references CALENDAR on delete cascade, -- implicit index
+  SUPPORTED_COMPONENTS  varchar(255) default null,
+  CREATED               timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED              timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+---------------------------
+-- Sharing Notifications --
+---------------------------
+
+create table NOTIFICATION_HOME (
+  RESOURCE_ID integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  OWNER_UID   varchar(255) not null unique                                 -- implicit index
+);
+
+create table NOTIFICATION (
+  RESOURCE_ID                   integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  NOTIFICATION_HOME_RESOURCE_ID integer      not null references NOTIFICATION_HOME,
+  NOTIFICATION_UID              varchar(255) not null,
+  XML_TYPE                      varchar(255) not null,
+  XML_DATA                      text         not null,
+  MD5                           char(32)     not null,
+  CREATED                       timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                      timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+
+  unique(NOTIFICATION_UID, NOTIFICATION_HOME_RESOURCE_ID) -- implicit index
+);
+
+create index NOTIFICATION_NOTIFICATION_HOME_RESOURCE_ID on
+	NOTIFICATION(NOTIFICATION_HOME_RESOURCE_ID);
+
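+-- For illustration only (not part of the schema): a sharee's pending
+-- notifications are read back through the index above, e.g.
+--
+--   select NOTIFICATION_UID, XML_TYPE, XML_DATA
+--   from NOTIFICATION
+--   where NOTIFICATION_HOME_RESOURCE_ID = 1;  -- hypothetical home id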
+
+-------------------
+-- Calendar Bind --
+-------------------
+
+-- Joins CALENDAR_HOME and CALENDAR
+
+create table CALENDAR_BIND (
+  CALENDAR_HOME_RESOURCE_ID integer      not null references CALENDAR_HOME,
+  CALENDAR_RESOURCE_ID      integer      not null references CALENDAR on delete cascade,
+  CALENDAR_RESOURCE_NAME    varchar(255) not null,
+  BIND_MODE                 integer      not null, -- enum CALENDAR_BIND_MODE
+  BIND_STATUS               integer      not null, -- enum CALENDAR_BIND_STATUS
+  BIND_REVISION             integer      default 0 not null,
+  MESSAGE                   text,
+  TRANSP                    integer      default 0 not null, -- enum CALENDAR_TRANSP
+  ALARM_VEVENT_TIMED        text         default null,
+  ALARM_VEVENT_ALLDAY       text         default null,
+  ALARM_VTODO_TIMED         text         default null,
+  ALARM_VTODO_ALLDAY        text         default null,
+  TIMEZONE                  text         default null,
+
+  primary key(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_ID), -- implicit index
+  unique(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_NAME)     -- implicit index
+);
+
+create index CALENDAR_BIND_RESOURCE_ID on
+	CALENDAR_BIND(CALENDAR_RESOURCE_ID);
+
+-- Enumeration of calendar bind modes
+
+create table CALENDAR_BIND_MODE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_BIND_MODE values (0, 'own'  );
+insert into CALENDAR_BIND_MODE values (1, 'read' );
+insert into CALENDAR_BIND_MODE values (2, 'write');
+insert into CALENDAR_BIND_MODE values (3, 'direct');
+
+-- Enumeration of statuses
+
+create table CALENDAR_BIND_STATUS (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_BIND_STATUS values (0, 'invited' );
+insert into CALENDAR_BIND_STATUS values (1, 'accepted');
+insert into CALENDAR_BIND_STATUS values (2, 'declined');
+insert into CALENDAR_BIND_STATUS values (3, 'invalid');
+
+
+-- Enumeration of transparency
+
+create table CALENDAR_TRANSP (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_TRANSP values (0, 'opaque' );
+insert into CALENDAR_TRANSP values (1, 'transparent');
+
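Illustrative aside (not part of the schema file): the BIND_MODE, BIND_STATUS
and TRANSP lookup tables above exist so the integer enum columns on
CALENDAR_BIND stay self-describing, and an ad-hoc inspection query can decode
a share with plain joins. A minimal sketch, assuming nothing beyond the tables
defined here:

  select CB.CALENDAR_RESOURCE_ID,
         BM.DESCRIPTION as BIND_MODE_NAME,
         BS.DESCRIPTION as BIND_STATUS_NAME
    from CALENDAR_BIND CB
    join CALENDAR_BIND_MODE   BM on BM.ID = CB.BIND_MODE
    join CALENDAR_BIND_STATUS BS on BS.ID = CB.BIND_STATUS;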
+
+---------------------
+-- Calendar Object --
+---------------------
+
+create table CALENDAR_OBJECT (
+  RESOURCE_ID          integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  CALENDAR_RESOURCE_ID integer      not null references CALENDAR on delete cascade,
+  RESOURCE_NAME        varchar(255) not null,
+  ICALENDAR_TEXT       text         not null,
+  ICALENDAR_UID        varchar(255) not null,
+  ICALENDAR_TYPE       varchar(255) not null,
+  ATTACHMENTS_MODE     integer      default 0 not null, -- enum CALENDAR_OBJECT_ATTACHMENTS_MODE
+  DROPBOX_ID           varchar(255),
+  ORGANIZER            varchar(255),
+  RECURRANCE_MIN       date,        -- minimum date that recurrences have been expanded to.
+  RECURRANCE_MAX       date,        -- maximum date that recurrences have been expanded to.
+  ACCESS               integer      default 0 not null,
+  SCHEDULE_OBJECT      boolean      default false,
+  SCHEDULE_TAG         varchar(36)  default null,
+  SCHEDULE_ETAGS       text         default null,
+  PRIVATE_COMMENTS     boolean      default false not null,
+  MD5                  char(32)     not null,
+  CREATED              timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED             timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+
+  unique (CALENDAR_RESOURCE_ID, RESOURCE_NAME) -- implicit index
+
+  -- since the 'inbox' is a 'calendar resource' for the purpose of storing
+  -- calendar objects, this constraint has to be selectively enforced by the
+  -- application layer.
+
+  -- unique(CALENDAR_RESOURCE_ID, ICALENDAR_UID)
+);
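
Illustrative aside (not part of the schema file): the commented-out
unique(CALENDAR_RESOURCE_ID, ICALENDAR_UID) constraint above is enforced
selectively in application code because the inbox may legitimately hold
several resources with the same UID. A sketch of the probe that implies,
with :calendar and :uid as placeholder parameters; the
(CALENDAR_RESOURCE_ID, ICALENDAR_UID) index just below is what keeps such a
probe cheap:

  select RESOURCE_ID
    from CALENDAR_OBJECT
   where CALENDAR_RESOURCE_ID = :calendar and ICALENDAR_UID = :uid;
  -- any row returned for a non-inbox calendar means the new object is a duplicate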
+
+create index CALENDAR_OBJECT_CALENDAR_RESOURCE_ID_AND_ICALENDAR_UID on
+  CALENDAR_OBJECT(CALENDAR_RESOURCE_ID, ICALENDAR_UID);
+
+create index CALENDAR_OBJECT_CALENDAR_RESOURCE_ID_RECURRANCE_MAX on
+  CALENDAR_OBJECT(CALENDAR_RESOURCE_ID, RECURRANCE_MAX);
+
+create index CALENDAR_OBJECT_ICALENDAR_UID on
+  CALENDAR_OBJECT(ICALENDAR_UID);
+
+create index CALENDAR_OBJECT_DROPBOX_ID on
+  CALENDAR_OBJECT(DROPBOX_ID);
+
+-- Enumeration of attachment modes
+
+create table CALENDAR_OBJECT_ATTACHMENTS_MODE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_OBJECT_ATTACHMENTS_MODE values (0, 'none' );
+insert into CALENDAR_OBJECT_ATTACHMENTS_MODE values (1, 'read' );
+insert into CALENDAR_OBJECT_ATTACHMENTS_MODE values (2, 'write');
+
+
+-- Enumeration of calendar access types
+
+create table CALENDAR_ACCESS_TYPE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(32) not null unique
+);
+
+insert into CALENDAR_ACCESS_TYPE values (0, ''             );
+insert into CALENDAR_ACCESS_TYPE values (1, 'public'       );
+insert into CALENDAR_ACCESS_TYPE values (2, 'private'      );
+insert into CALENDAR_ACCESS_TYPE values (3, 'confidential' );
+insert into CALENDAR_ACCESS_TYPE values (4, 'restricted'   );
+
+
+-----------------
+-- Instance ID --
+-----------------
+
+create sequence INSTANCE_ID_SEQ;
+
+
+----------------
+-- Time Range --
+----------------
+
+create table TIME_RANGE (
+  INSTANCE_ID                 integer        primary key default nextval('INSTANCE_ID_SEQ'), -- implicit index
+  CALENDAR_RESOURCE_ID        integer        not null references CALENDAR on delete cascade,
+  CALENDAR_OBJECT_RESOURCE_ID integer        not null references CALENDAR_OBJECT on delete cascade,
+  FLOATING                    boolean        not null,
+  START_DATE                  timestamp      not null,
+  END_DATE                    timestamp      not null,
+  FBTYPE                      integer        not null,
+  TRANSPARENT                 boolean        not null
+);
+
+create index TIME_RANGE_CALENDAR_RESOURCE_ID on
+  TIME_RANGE(CALENDAR_RESOURCE_ID);
+create index TIME_RANGE_CALENDAR_OBJECT_RESOURCE_ID on
+  TIME_RANGE(CALENDAR_OBJECT_RESOURCE_ID);
+
+
+-- Enumeration of free/busy types
+
+create table FREE_BUSY_TYPE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into FREE_BUSY_TYPE values (0, 'unknown'         );
+insert into FREE_BUSY_TYPE values (1, 'free'            );
+insert into FREE_BUSY_TYPE values (2, 'busy'            );
+insert into FREE_BUSY_TYPE values (3, 'busy-unavailable');
+insert into FREE_BUSY_TYPE values (4, 'busy-tentative'  );
+
+
+------------------
+-- Transparency --
+------------------
+
+create table TRANSPARENCY (
+  TIME_RANGE_INSTANCE_ID      integer      not null references TIME_RANGE on delete cascade,
+  USER_ID                     varchar(255) not null,
+  TRANSPARENT                 boolean      not null
+);
+
+create index TRANSPARENCY_TIME_RANGE_INSTANCE_ID on
+  TRANSPARENCY(TIME_RANGE_INSTANCE_ID);
+
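Illustrative aside (not part of the schema file): TIME_RANGE holds the expanded
instances and TRANSPARENCY the per-user transparency overrides, so a free-busy
style scan reads as a left join of the two. A sketch only, with :user,
:calendar, :start and :end as placeholder parameters:

  select TR.START_DATE, TR.END_DATE, TR.FBTYPE
    from TIME_RANGE TR
    left outer join TRANSPARENCY TA
      on TA.TIME_RANGE_INSTANCE_ID = TR.INSTANCE_ID and TA.USER_ID = :user
   where TR.CALENDAR_RESOURCE_ID = :calendar
     and TR.START_DATE < :end and TR.END_DATE > :start
     and coalesce(TA.TRANSPARENT, TR.TRANSPARENT) = false;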
+
+----------------
+-- Attachment --
+----------------
+
+create sequence ATTACHMENT_ID_SEQ;
+
+create table ATTACHMENT (
+  ATTACHMENT_ID               integer           primary key default nextval('ATTACHMENT_ID_SEQ'), -- implicit index
+  CALENDAR_HOME_RESOURCE_ID   integer           not null references CALENDAR_HOME,
+  DROPBOX_ID                  varchar(255),
+  CONTENT_TYPE                varchar(255)      not null,
+  SIZE                        integer           not null,
+  MD5                         char(32)          not null,
+  CREATED                     timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                    timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+  PATH                        varchar(1024)     not null
+);
+
+create index ATTACHMENT_CALENDAR_HOME_RESOURCE_ID on
+  ATTACHMENT(CALENDAR_HOME_RESOURCE_ID);
+
+-- Many-to-many relationship between attachments and calendar objects
+create table ATTACHMENT_CALENDAR_OBJECT (
+  ATTACHMENT_ID                  integer      not null references ATTACHMENT on delete cascade,
+  MANAGED_ID                     varchar(255) not null,
+  CALENDAR_OBJECT_RESOURCE_ID    integer      not null references CALENDAR_OBJECT on delete cascade,
+
+  primary key (ATTACHMENT_ID, CALENDAR_OBJECT_RESOURCE_ID), -- implicit index
+  unique (MANAGED_ID, CALENDAR_OBJECT_RESOURCE_ID)          -- implicit index
+);
+
+create index ATTACHMENT_CALENDAR_OBJECT_CALENDAR_OBJECT_RESOURCE_ID on
+	ATTACHMENT_CALENDAR_OBJECT(CALENDAR_OBJECT_RESOURCE_ID);
+
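Illustrative aside (not part of the schema file): given the join table above,
resolving the attachments one event references is a two-hop join. Sketch only,
with :object_id as a placeholder parameter:

  select A.ATTACHMENT_ID, ACO.MANAGED_ID, A.CONTENT_TYPE, A.SIZE, A.PATH
    from ATTACHMENT_CALENDAR_OBJECT ACO
    join ATTACHMENT A on A.ATTACHMENT_ID = ACO.ATTACHMENT_ID
   where ACO.CALENDAR_OBJECT_RESOURCE_ID = :object_id;
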
+-----------------------
+-- Resource Property --
+-----------------------
+
+create table RESOURCE_PROPERTY (
+  RESOURCE_ID integer      not null, -- foreign key: *.RESOURCE_ID
+  NAME        varchar(255) not null,
+  VALUE       text         not null, -- FIXME: xml?
+  VIEWER_UID  varchar(255),
+
+  primary key (RESOURCE_ID, NAME, VIEWER_UID) -- implicit index
+);
+
+
+----------------------
+-- AddressBook Home --
+----------------------
+
+create table ADDRESSBOOK_HOME (
+  RESOURCE_ID                   integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  ADDRESSBOOK_PROPERTY_STORE_ID integer      default nextval('RESOURCE_ID_SEQ') not null,    -- implicit index
+  OWNER_UID                     varchar(255) not null unique,                                -- implicit index
+  DATAVERSION                   integer      default 0 not null
+);
+
+
+-------------------------------
+-- AddressBook Home Metadata --
+-------------------------------
+
+create table ADDRESSBOOK_HOME_METADATA (
+  RESOURCE_ID      integer      primary key references ADDRESSBOOK_HOME on delete cascade, -- implicit index
+  QUOTA_USED_BYTES integer      default 0 not null,
+  CREATED          timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED         timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+-----------------------------
+-- Shared AddressBook Bind --
+-----------------------------
+
+-- Joins sharee ADDRESSBOOK_HOME and owner ADDRESSBOOK_HOME
+
+create table SHARED_ADDRESSBOOK_BIND (
+  ADDRESSBOOK_HOME_RESOURCE_ID integer      not null references ADDRESSBOOK_HOME,
+  OWNER_HOME_RESOURCE_ID       integer      not null references ADDRESSBOOK_HOME on delete cascade,
+  ADDRESSBOOK_RESOURCE_NAME    varchar(255) not null,
+  BIND_MODE                    integer      not null, -- enum CALENDAR_BIND_MODE
+  BIND_STATUS                  integer      not null, -- enum CALENDAR_BIND_STATUS
+  BIND_REVISION                integer      default 0 not null,
+  MESSAGE                      text,                  -- FIXME: xml?
+
+  primary key (ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_HOME_RESOURCE_ID), -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, ADDRESSBOOK_RESOURCE_NAME)    -- implicit index
+);
+
+create index SHARED_ADDRESSBOOK_BIND_RESOURCE_ID on
+  SHARED_ADDRESSBOOK_BIND(OWNER_HOME_RESOURCE_ID);
+
+
+------------------------
+-- AddressBook Object --
+------------------------
+
+create table ADDRESSBOOK_OBJECT (
+  RESOURCE_ID                  integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  ADDRESSBOOK_HOME_RESOURCE_ID integer      not null references ADDRESSBOOK_HOME on delete cascade,
+  RESOURCE_NAME                varchar(255) not null,
+  VCARD_TEXT                   text         not null,
+  VCARD_UID                    varchar(255) not null,
+  KIND                         integer      not null, -- enum ADDRESSBOOK_OBJECT_KIND
+  MD5                          char(32)     not null,
+  CREATED                      timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                     timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, RESOURCE_NAME), -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, VCARD_UID)      -- implicit index
+);
+
+
+-----------------------------
+-- AddressBook Object kind --
+-----------------------------
+
+create table ADDRESSBOOK_OBJECT_KIND (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into ADDRESSBOOK_OBJECT_KIND values (0, 'person');
+insert into ADDRESSBOOK_OBJECT_KIND values (1, 'group' );
+insert into ADDRESSBOOK_OBJECT_KIND values (2, 'resource');
+insert into ADDRESSBOOK_OBJECT_KIND values (3, 'location');
+
+
+---------------------------------
+-- Address Book Object Members --
+---------------------------------
+
+create table ABO_MEMBERS (
+  GROUP_ID       integer not null references ADDRESSBOOK_OBJECT on delete cascade, -- AddressBook Object's (kind == 'group') RESOURCE_ID
+  ADDRESSBOOK_ID integer not null references ADDRESSBOOK_HOME on delete cascade,
+  MEMBER_ID      integer not null references ADDRESSBOOK_OBJECT,                   -- member AddressBook Object's RESOURCE_ID
+
+  primary key (GROUP_ID, MEMBER_ID) -- implicit index
+);
+
+create index ABO_MEMBERS_ADDRESSBOOK_ID on
+  ABO_MEMBERS(ADDRESSBOOK_ID);
+create index ABO_MEMBERS_MEMBER_ID on
+  ABO_MEMBERS(MEMBER_ID);
+
+------------------------------------------
+-- Address Book Object Foreign Members  --
+------------------------------------------
+
+create table ABO_FOREIGN_MEMBERS (
+  GROUP_ID       integer      not null references ADDRESSBOOK_OBJECT on delete cascade, -- AddressBook Object's (kind == 'group') RESOURCE_ID
+  ADDRESSBOOK_ID integer      not null references ADDRESSBOOK_HOME on delete cascade,
+  MEMBER_ADDRESS varchar(255) not null, -- member AddressBook Object's 'calendar' address
+
+  primary key (GROUP_ID, MEMBER_ADDRESS) -- implicit index
+);
+
+create index ABO_FOREIGN_MEMBERS_ADDRESSBOOK_ID on
+  ABO_FOREIGN_MEMBERS(ADDRESSBOOK_ID);
+
+-----------------------
+-- Shared Group Bind --
+-----------------------
+
+-- Joins ADDRESSBOOK_HOME and ADDRESSBOOK_OBJECT (kind == group)
+
+create table SHARED_GROUP_BIND (
+  ADDRESSBOOK_HOME_RESOURCE_ID integer      not null references ADDRESSBOOK_HOME,
+  GROUP_RESOURCE_ID            integer      not null references ADDRESSBOOK_OBJECT on delete cascade,
+  GROUP_ADDRESSBOOK_NAME       varchar(255) not null,
+  BIND_MODE                    integer      not null, -- enum CALENDAR_BIND_MODE
+  BIND_STATUS                  integer      not null, -- enum CALENDAR_BIND_STATUS
+  BIND_REVISION                integer      default 0 not null,
+  MESSAGE                      text,                  -- FIXME: xml?
+
+  primary key (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_RESOURCE_ID), -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_ADDRESSBOOK_NAME)  -- implicit index
+);
+
+create index SHARED_GROUP_BIND_RESOURCE_ID on
+  SHARED_GROUP_BIND(GROUP_RESOURCE_ID);
+
+
+---------------
+-- Revisions --
+---------------
+
+create sequence REVISION_SEQ;
+
+
+-------------------------------
+-- Calendar Object Revisions --
+-------------------------------
+
+create table CALENDAR_OBJECT_REVISIONS (
+  CALENDAR_HOME_RESOURCE_ID integer      not null references CALENDAR_HOME,
+  CALENDAR_RESOURCE_ID      integer      references CALENDAR,
+  CALENDAR_NAME             varchar(255) default null,
+  RESOURCE_NAME             varchar(255),
+  REVISION                  integer      default nextval('REVISION_SEQ') not null,
+  DELETED                   boolean      not null
+);
+
+create index CALENDAR_OBJECT_REVISIONS_HOME_RESOURCE_ID_CALENDAR_RESOURCE_ID
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_ID);
+
+create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_RESOURCE_NAME
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, RESOURCE_NAME);
+
+create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_REVISION
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, REVISION);
+
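Illustrative aside (not part of the schema file): these revision rows are what
change/sync reports are answered from; reading this schema, a client's sync
token corresponds to a REVISION high-water mark and everything newer is the
delta. Sketch only, with :calendar and :sync_token as placeholder parameters,
served by the (CALENDAR_RESOURCE_ID, REVISION) index above:

  select RESOURCE_NAME, DELETED, REVISION
    from CALENDAR_OBJECT_REVISIONS
   where CALENDAR_RESOURCE_ID = :calendar and REVISION > :sync_token;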
+
+----------------------------------
+-- AddressBook Object Revisions --
+----------------------------------
+
+create table ADDRESSBOOK_OBJECT_REVISIONS (
+  ADDRESSBOOK_HOME_RESOURCE_ID integer      not null references ADDRESSBOOK_HOME,
+  OWNER_HOME_RESOURCE_ID       integer      references ADDRESSBOOK_HOME,
+  ADDRESSBOOK_NAME             varchar(255) default null,
+  RESOURCE_NAME                varchar(255),
+  REVISION                     integer      default nextval('REVISION_SEQ') not null,
+  DELETED                      boolean      not null
+);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_HOME_RESOURCE_ID_OWNER_HOME_RESOURCE_ID
+  on ADDRESSBOOK_OBJECT_REVISIONS(ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_HOME_RESOURCE_ID);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_RESOURCE_NAME
+  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_HOME_RESOURCE_ID, RESOURCE_NAME);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_REVISION
+  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_HOME_RESOURCE_ID, REVISION);
+
+
+-----------------------------------
+-- Notification Object Revisions --
+-----------------------------------
+
+create table NOTIFICATION_OBJECT_REVISIONS (
+  NOTIFICATION_HOME_RESOURCE_ID integer      not null references NOTIFICATION_HOME on delete cascade,
+  RESOURCE_NAME                 varchar(255),
+  REVISION                      integer      default nextval('REVISION_SEQ') not null,
+  DELETED                       boolean      not null,
+
+  unique(NOTIFICATION_HOME_RESOURCE_ID, RESOURCE_NAME) -- implicit index
+);
+
+create index NOTIFICATION_OBJECT_REVISIONS_RESOURCE_ID_REVISION
+  on NOTIFICATION_OBJECT_REVISIONS(NOTIFICATION_HOME_RESOURCE_ID, REVISION);
+
+
+-------------------------------------------
+-- Apple Push Notification Subscriptions --
+-------------------------------------------
+
+create table APN_SUBSCRIPTIONS (
+  TOKEN                         varchar(255) not null,
+  RESOURCE_KEY                  varchar(255) not null,
+  MODIFIED                      integer      not null,
+  SUBSCRIBER_GUID               varchar(255) not null,
+  USER_AGENT                    varchar(255) default null,
+  IP_ADDR                       varchar(255) default null,
+
+  primary key (TOKEN, RESOURCE_KEY) -- implicit index
+);
+
+create index APN_SUBSCRIPTIONS_RESOURCE_KEY
+   on APN_SUBSCRIPTIONS(RESOURCE_KEY);
+
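Illustrative aside (not part of the schema file): the RESOURCE_KEY index above
serves the fan-out direction of push, where a change to a resource must locate
every subscribed device token. Sketch only, with :resource_key as a placeholder
parameter:

  select TOKEN from APN_SUBSCRIPTIONS where RESOURCE_KEY = :resource_key;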
+
+-----------------
+-- IMIP Tokens --
+-----------------
+
+create table IMIP_TOKENS (
+  TOKEN                         varchar(255) not null,
+  ORGANIZER                     varchar(255) not null,
+  ATTENDEE                      varchar(255) not null,
+  ICALUID                       varchar(255) not null,
+  ACCESSED                      timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+
+  primary key (ORGANIZER, ATTENDEE, ICALUID) -- implicit index
+);
+
+create index IMIP_TOKENS_TOKEN
+   on IMIP_TOKENS(TOKEN);
+
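Illustrative aside (not part of the schema file): this table is read in both
directions, which is why it carries a composite primary key plus the separate
TOKEN index. Outbound iMIP mail reuses the token minted for an
(ORGANIZER, ATTENDEE, ICALUID) triple, while an inbound reply carries only the
token. Sketches only, with :organizer, :attendee, :icaluid and :token as
placeholder parameters:

  select TOKEN from IMIP_TOKENS
   where ORGANIZER = :organizer and ATTENDEE = :attendee and ICALUID = :icaluid;

  select ORGANIZER, ATTENDEE, ICALUID from IMIP_TOKENS where TOKEN = :token;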
+
+----------------
+-- Work Items --
+----------------
+
+create sequence WORKITEM_SEQ;
+
+
+--------------------------
+-- IMIP Invitation Work --
+--------------------------
+
+create table IMIP_INVITATION_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  FROM_ADDR                     varchar(255) not null,
+  TO_ADDR                       varchar(255) not null,
+  ICALENDAR_TEXT                text         not null
+);
+
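Illustrative aside (not part of the schema file): the *_WORK tables that follow
all share the queued-job shape introduced here, namely a WORK_ID drawn from
WORKITEM_SEQ, a NOT_BEFORE timestamp that gates execution, and job-specific
payload columns. A sketch of the claim step that shape implies (the server's
actual dequeue logic lives in the Python work-queue machinery, not in this
schema):

  select WORK_ID, FROM_ADDR, TO_ADDR, ICALENDAR_TEXT
    from IMIP_INVITATION_WORK
   where NOT_BEFORE <= timezone('UTC', CURRENT_TIMESTAMP)
   order by WORK_ID
   limit 1
   for update;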
+
+-----------------------
+-- IMIP Polling Work --
+-----------------------
+
+create table IMIP_POLLING_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+---------------------
+-- IMIP Reply Work --
+---------------------
+
+create table IMIP_REPLY_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  ORGANIZER                     varchar(255) not null,
+  ATTENDEE                      varchar(255) not null,
+  ICALENDAR_TEXT                text         not null
+);
+
+
+------------------------
+-- Push Notifications --
+------------------------
+
+create table PUSH_NOTIFICATION_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  PUSH_ID                       varchar(255) not null
+);
+
+-----------------
+-- GroupCacher --
+-----------------
+
+create table GROUP_CACHER_POLLING_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+--------------------------
+-- Object Splitter Work --
+--------------------------
+
+create table CALENDAR_OBJECT_SPLITTER_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ') not null, -- implicit index
+  NOT_BEFORE                    timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  RESOURCE_ID                   integer      not null references CALENDAR_OBJECT on delete cascade
+);
+
+create index CALENDAR_OBJECT_SPLITTER_WORK_RESOURCE_ID on
+	CALENDAR_OBJECT_SPLITTER_WORK(RESOURCE_ID);
+
+--------------------
+-- Schema Version --
+--------------------
+
+create table CALENDARSERVER (
+  NAME                          varchar(255) primary key, -- implicit index
+  VALUE                         varchar(255)
+);
+
+insert into CALENDARSERVER values ('VERSION', '25');
+insert into CALENDARSERVER values ('CALENDAR-DATAVERSION', '5');
+insert into CALENDARSERVER values ('ADDRESSBOOK-DATAVERSION', '2');

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_19_to_20.sql
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_19_to_20.sql	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_19_to_20.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -31,18 +31,18 @@
 
 create table SHARED_ADDRESSBOOK_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
-    "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
     "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
-    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"), 
+    primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
     unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
 );
 
 create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 
@@ -55,13 +55,13 @@
 create table SHARED_GROUP_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
-    "GROUP_ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
     "MESSAGE" nclob, 
     primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
-    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_RESOURCE_NAME")
+    unique("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
 );
 
 create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
@@ -113,8 +113,12 @@
 -- Alter  ADDRESSBOOK_HOME --
 -----------------------------
 
+-- This is tricky: we need a new not null column and have to populate it, but we cannot
+-- declare it not null up front without a default (which we do not want). So we create the
+-- column without the constraint, run the updates, then add not null afterwards.
+
 alter table ADDRESSBOOK_HOME
-	add ("ADDRESSBOOK_PROPERTY_STORE_ID" integer not null);
+	add ("ADDRESSBOOK_PROPERTY_STORE_ID" integer);
 
 update ADDRESSBOOK_HOME
 	set	ADDRESSBOOK_PROPERTY_STORE_ID = (
@@ -133,14 +137,17 @@
 			ADDRESSBOOK_BIND.BIND_MODE = 0 and 	-- CALENDAR_BIND_MODE 'own'
 			ADDRESSBOOK_BIND.ADDRESSBOOK_RESOURCE_NAME = 'addressbook'
   	);
-	
 
+alter table ADDRESSBOOK_HOME
+	modify ("ADDRESSBOOK_PROPERTY_STORE_ID" not null);
+
+
 --------------------------------
 -- change  ADDRESSBOOK_OBJECT --
 --------------------------------
 
 alter table ADDRESSBOOK_OBJECT
-	add ("KIND"	integer);  -- enum ADDRESSBOOK_OBJECT_KIND
+	add ("KIND"	integer)  -- enum ADDRESSBOOK_OBJECT_KIND
 	add ("ADDRESSBOOK_HOME_RESOURCE_ID"	integer	references ADDRESSBOOK_HOME on delete cascade);
 
 update ADDRESSBOOK_OBJECT
@@ -176,24 +183,25 @@
   	
 -- add non null constraints after update and delete are complete
 alter table ADDRESSBOOK_OBJECT
-	modify ("KIND" not null,
-            "ADDRESSBOOK_HOME_RESOURCE_ID" not null)
-	drop ("ADDRESSBOOK_RESOURCE_ID");
+        modify ("KIND" not null)
+        modify ("ADDRESSBOOK_HOME_RESOURCE_ID" not null);
 
+alter table ADDRESSBOOK_OBJECT
+        drop column ADDRESSBOOK_RESOURCE_ID cascade constraints;
 
 alter table ADDRESSBOOK_OBJECT
 	add unique ("ADDRESSBOOK_HOME_RESOURCE_ID", "RESOURCE_NAME")
-	    unique ("ADDRESSBOOK_HOME_RESOURCE_ID", "VCARD_UID");
+	add unique ("ADDRESSBOOK_HOME_RESOURCE_ID", "VCARD_UID");
 
 ------------------------------------------
 -- change  ADDRESSBOOK_OBJECT_REVISIONS --
 ------------------------------------------
 
 alter table ADDRESSBOOK_OBJECT_REVISIONS
-	add ("OWNER_ADDRESSBOOK_HOME_RESOURCE_ID"	integer	references ADDRESSBOOK_HOME);
+	add ("OWNER_HOME_RESOURCE_ID"	integer	references ADDRESSBOOK_HOME);
 
 update ADDRESSBOOK_OBJECT_REVISIONS
-	set	OWNER_ADDRESSBOOK_HOME_RESOURCE_ID = (
+	set	OWNER_HOME_RESOURCE_ID = (
 		select ADDRESSBOOK_HOME_RESOURCE_ID
 			from ADDRESSBOOK_BIND
 		where 
@@ -229,16 +237,16 @@
 -- New indexes
 create index ADDRESSBOOK_OBJECT_RE_40cc2d73 on ADDRESSBOOK_OBJECT_REVISIONS (
     ADDRESSBOOK_HOME_RESOURCE_ID,
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
+    OWNER_HOME_RESOURCE_ID
 );
 
 create index ADDRESSBOOK_OBJECT_RE_980b9872 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     RESOURCE_NAME
 );
 
 create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
-    OWNER_ADDRESSBOOK_HOME_RESOURCE_ID,
+    OWNER_HOME_RESOURCE_ID,
     REVISION
 );
 

Copied: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_24_to_25.sql (from rev 11870, CalendarServer/trunk/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_24_to_25.sql)
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_24_to_25.sql	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_24_to_25.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -0,0 +1,26 @@
+----
+-- Copyright (c) 2012-2013 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+---------------------------------------------------
+-- Upgrade database schema from VERSION 24 to 25 --
+---------------------------------------------------
+
+-- This is actually a noop for Oracle: the invalid names in the v20 schema were already
+-- corrected for Oracle at v20 (but not in postgres, which is what the postgres v25 upgrade fixes).
+
+-- Now update the version
+-- No data upgrades
+update CALENDARSERVER set VALUE = '25' where NAME = 'VERSION';

Copied: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_25_to_26.sql (from rev 11870, CalendarServer/trunk/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_25_to_26.sql)
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_25_to_26.sql	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_25_to_26.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -0,0 +1,43 @@
+----
+-- Copyright (c) 2012-2013 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+---------------------------------------------------
+-- Upgrade database schema from VERSION 25 to 26 --
+---------------------------------------------------
+
+-- Replace index
+
+drop index CALENDAR_OBJECT_REVIS_2643d556;
+create index CALENDAR_OBJECT_REVIS_6d9d929c on CALENDAR_OBJECT_REVISIONS (
+    CALENDAR_RESOURCE_ID,
+    RESOURCE_NAME,
+    DELETED,
+    REVISION
+);
+
+
+drop index ADDRESSBOOK_OBJECT_RE_980b9872;
+create index ADDRESSBOOK_OBJECT_RE_00fe8288 on ADDRESSBOOK_OBJECT_REVISIONS (
+    OWNER_HOME_RESOURCE_ID,
+    RESOURCE_NAME,
+    DELETED,
+    REVISION
+);
+
+
+-- Now update the version
+-- No data upgrades
+update CALENDARSERVER set VALUE = '26' where NAME = 'VERSION';

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_13_to_14.sql
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_13_to_14.sql	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_13_to_14.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -26,6 +26,11 @@
  drop column SEEN_BY_OWNER;
 alter table CALENDAR_BIND
  drop column SEEN_BY_SHAREE;
+
+-- Don't allow nulls in the column we are about to constrain
+update CALENDAR_BIND
+	set CALENDAR_RESOURCE_NAME = 'Shared_' || CALENDAR_RESOURCE_ID || '_' || CALENDAR_HOME_RESOURCE_ID
+	where CALENDAR_RESOURCE_NAME is null;
 alter table CALENDAR_BIND
  alter column CALENDAR_RESOURCE_NAME 
   set not null;
@@ -34,6 +39,11 @@
  drop column SEEN_BY_OWNER;
 alter table ADDRESSBOOK_BIND
  drop column SEEN_BY_SHAREE;
+
+-- Don't allow nulls in the column we are about to constrain
+update ADDRESSBOOK_BIND
+	set ADDRESSBOOK_RESOURCE_NAME = 'Shared_' || ADDRESSBOOK_RESOURCE_ID || '_' || ADDRESSBOOK_HOME_RESOURCE_ID
+	where ADDRESSBOOK_RESOURCE_NAME is null;
 alter table ADDRESSBOOK_BIND
  alter column ADDRESSBOOK_RESOURCE_NAME
   set not null;

Copied: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_24_to_25.sql (from rev 11870, CalendarServer/trunk/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_24_to_25.sql)
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_24_to_25.sql	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_24_to_25.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -0,0 +1,35 @@
+----
+-- Copyright (c) 2012-2013 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+---------------------------------------------------
+-- Upgrade database schema from VERSION 24 to 25 --
+---------------------------------------------------
+
+-- Rename columns and indexes
+alter table SHARED_ADDRESSBOOK_BIND
+	rename column OWNER_ADDRESSBOOK_HOME_RESOURCE_ID to OWNER_HOME_RESOURCE_ID;
+
+alter table SHARED_GROUP_BIND
+	rename column GROUP_ADDRESSBOOK_RESOURCE_NAME to GROUP_ADDRESSBOOK_NAME;
+
+alter table ADDRESSBOOK_OBJECT_REVISIONS
+	rename column OWNER_ADDRESSBOOK_HOME_RESOURCE_ID to OWNER_HOME_RESOURCE_ID;
+
+alter index ADDRESSBOOK_OBJECT_REVISIONS_HOME_RESOURCE_ID_OWNER_ADDRESSBOOK_HOME_RESOURCE_ID rename to ADDRESSBOOK_OBJECT_REVISIONS_HOME_RESOURCE_ID_OWNER_HOME_RESOURCE_ID;
+
+-- Now update the version
+-- No data upgrades
+update CALENDARSERVER set VALUE = '25' where NAME = 'VERSION';

Copied: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_25_to_26.sql (from rev 11870, CalendarServer/trunk/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_25_to_26.sql)
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_25_to_26.sql	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_25_to_26.sql	2013-11-01 22:25:30 UTC (rev 11871)
@@ -0,0 +1,32 @@
+----
+-- Copyright (c) 2012-2013 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+---------------------------------------------------
+-- Upgrade database schema from VERSION 25 to 26 --
+---------------------------------------------------
+
+-- Replace index
+drop index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_RESOURCE_NAME;
+create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_RESOURCE_NAME_DELETED_REVISION
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, RESOURCE_NAME, DELETED, REVISION);
+
+drop index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_RESOURCE_NAME;
+create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_RESOURCE_NAME_DELETED_REVISION
+  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_HOME_RESOURCE_ID, RESOURCE_NAME, DELETED, REVISION);
+
+-- Now update the version
+-- No data upgrades
+update CALENDARSERVER set VALUE = '26' where NAME = 'VERSION';

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_tables.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_tables.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/sql_tables.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -52,55 +52,39 @@
 # Column aliases, defined so that similar tables (such as CALENDAR_OBJECT and
 # ADDRESSBOOK_OBJECT) can be used according to a polymorphic interface.
 
-schema.CALENDAR_BIND.RESOURCE_NAME = \
-    schema.CALENDAR_BIND.CALENDAR_RESOURCE_NAME
-schema.CALENDAR_BIND.RESOURCE_ID = \
-    schema.CALENDAR_BIND.CALENDAR_RESOURCE_ID
-schema.CALENDAR_BIND.HOME_RESOURCE_ID = \
-    schema.CALENDAR_BIND.CALENDAR_HOME_RESOURCE_ID
-schema.SHARED_ADDRESSBOOK_BIND.RESOURCE_NAME = \
-    schema.SHARED_ADDRESSBOOK_BIND.ADDRESSBOOK_RESOURCE_NAME
-schema.SHARED_ADDRESSBOOK_BIND.RESOURCE_ID = \
-    schema.SHARED_ADDRESSBOOK_BIND.OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
-schema.SHARED_ADDRESSBOOK_BIND.HOME_RESOURCE_ID = \
-    schema.SHARED_ADDRESSBOOK_BIND.ADDRESSBOOK_HOME_RESOURCE_ID
-schema.SHARED_GROUP_BIND.RESOURCE_NAME = \
-    schema.SHARED_GROUP_BIND.GROUP_ADDRESSBOOK_RESOURCE_NAME
-schema.SHARED_GROUP_BIND.RESOURCE_ID = \
-    schema.SHARED_GROUP_BIND.GROUP_RESOURCE_ID
-schema.SHARED_GROUP_BIND.HOME_RESOURCE_ID = \
-    schema.SHARED_GROUP_BIND.ADDRESSBOOK_HOME_RESOURCE_ID
-schema.CALENDAR_OBJECT_REVISIONS.RESOURCE_ID = \
-    schema.CALENDAR_OBJECT_REVISIONS.CALENDAR_RESOURCE_ID
-schema.CALENDAR_OBJECT_REVISIONS.HOME_RESOURCE_ID = \
-    schema.CALENDAR_OBJECT_REVISIONS.CALENDAR_HOME_RESOURCE_ID
-schema.CALENDAR_OBJECT_REVISIONS.COLLECTION_NAME = \
-    schema.CALENDAR_OBJECT_REVISIONS.CALENDAR_NAME
-schema.ADDRESSBOOK_OBJECT_REVISIONS.RESOURCE_ID = \
-    schema.ADDRESSBOOK_OBJECT_REVISIONS.OWNER_ADDRESSBOOK_HOME_RESOURCE_ID
-schema.ADDRESSBOOK_OBJECT_REVISIONS.HOME_RESOURCE_ID = \
-    schema.ADDRESSBOOK_OBJECT_REVISIONS.ADDRESSBOOK_HOME_RESOURCE_ID
-schema.ADDRESSBOOK_OBJECT_REVISIONS.COLLECTION_NAME = \
-    schema.ADDRESSBOOK_OBJECT_REVISIONS.ADDRESSBOOK_NAME
-schema.NOTIFICATION_OBJECT_REVISIONS.HOME_RESOURCE_ID = \
-    schema.NOTIFICATION_OBJECT_REVISIONS.NOTIFICATION_HOME_RESOURCE_ID
-schema.NOTIFICATION_OBJECT_REVISIONS.RESOURCE_ID = \
-    schema.NOTIFICATION_OBJECT_REVISIONS.NOTIFICATION_HOME_RESOURCE_ID
-schema.CALENDAR_OBJECT.TEXT = \
-    schema.CALENDAR_OBJECT.ICALENDAR_TEXT
-schema.CALENDAR_OBJECT.UID = \
-    schema.CALENDAR_OBJECT.ICALENDAR_UID
-schema.CALENDAR_OBJECT.PARENT_RESOURCE_ID = \
-    schema.CALENDAR_OBJECT.CALENDAR_RESOURCE_ID
-schema.ADDRESSBOOK_OBJECT.TEXT = \
-    schema.ADDRESSBOOK_OBJECT.VCARD_TEXT
-schema.ADDRESSBOOK_OBJECT.UID = \
-    schema.ADDRESSBOOK_OBJECT.VCARD_UID
-schema.ADDRESSBOOK_OBJECT.PARENT_RESOURCE_ID = \
-    schema.ADDRESSBOOK_OBJECT.ADDRESSBOOK_HOME_RESOURCE_ID
+schema.CALENDAR_BIND.RESOURCE_NAME = schema.CALENDAR_BIND.CALENDAR_RESOURCE_NAME
+schema.CALENDAR_BIND.RESOURCE_ID = schema.CALENDAR_BIND.CALENDAR_RESOURCE_ID
+schema.CALENDAR_BIND.HOME_RESOURCE_ID = schema.CALENDAR_BIND.CALENDAR_HOME_RESOURCE_ID
 
+schema.SHARED_ADDRESSBOOK_BIND.RESOURCE_NAME = schema.SHARED_ADDRESSBOOK_BIND.ADDRESSBOOK_RESOURCE_NAME
+schema.SHARED_ADDRESSBOOK_BIND.RESOURCE_ID = schema.SHARED_ADDRESSBOOK_BIND.OWNER_HOME_RESOURCE_ID
+schema.SHARED_ADDRESSBOOK_BIND.HOME_RESOURCE_ID = schema.SHARED_ADDRESSBOOK_BIND.ADDRESSBOOK_HOME_RESOURCE_ID
 
+schema.SHARED_GROUP_BIND.RESOURCE_NAME = schema.SHARED_GROUP_BIND.GROUP_ADDRESSBOOK_NAME
+schema.SHARED_GROUP_BIND.RESOURCE_ID = schema.SHARED_GROUP_BIND.GROUP_RESOURCE_ID
+schema.SHARED_GROUP_BIND.HOME_RESOURCE_ID = schema.SHARED_GROUP_BIND.ADDRESSBOOK_HOME_RESOURCE_ID
 
+schema.CALENDAR_OBJECT_REVISIONS.RESOURCE_ID = schema.CALENDAR_OBJECT_REVISIONS.CALENDAR_RESOURCE_ID
+schema.CALENDAR_OBJECT_REVISIONS.HOME_RESOURCE_ID = schema.CALENDAR_OBJECT_REVISIONS.CALENDAR_HOME_RESOURCE_ID
+schema.CALENDAR_OBJECT_REVISIONS.COLLECTION_NAME = schema.CALENDAR_OBJECT_REVISIONS.CALENDAR_NAME
+
+schema.ADDRESSBOOK_OBJECT_REVISIONS.RESOURCE_ID = schema.ADDRESSBOOK_OBJECT_REVISIONS.OWNER_HOME_RESOURCE_ID
+schema.ADDRESSBOOK_OBJECT_REVISIONS.HOME_RESOURCE_ID = schema.ADDRESSBOOK_OBJECT_REVISIONS.ADDRESSBOOK_HOME_RESOURCE_ID
+schema.ADDRESSBOOK_OBJECT_REVISIONS.COLLECTION_NAME = schema.ADDRESSBOOK_OBJECT_REVISIONS.ADDRESSBOOK_NAME
+
+schema.NOTIFICATION_OBJECT_REVISIONS.HOME_RESOURCE_ID = schema.NOTIFICATION_OBJECT_REVISIONS.NOTIFICATION_HOME_RESOURCE_ID
+schema.NOTIFICATION_OBJECT_REVISIONS.RESOURCE_ID = schema.NOTIFICATION_OBJECT_REVISIONS.NOTIFICATION_HOME_RESOURCE_ID
+
+schema.CALENDAR_OBJECT.TEXT = schema.CALENDAR_OBJECT.ICALENDAR_TEXT
+schema.CALENDAR_OBJECT.UID = schema.CALENDAR_OBJECT.ICALENDAR_UID
+schema.CALENDAR_OBJECT.PARENT_RESOURCE_ID = schema.CALENDAR_OBJECT.CALENDAR_RESOURCE_ID
+
+schema.ADDRESSBOOK_OBJECT.TEXT = schema.ADDRESSBOOK_OBJECT.VCARD_TEXT
+schema.ADDRESSBOOK_OBJECT.UID = schema.ADDRESSBOOK_OBJECT.VCARD_UID
+schema.ADDRESSBOOK_OBJECT.PARENT_RESOURCE_ID = schema.ADDRESSBOOK_OBJECT.ADDRESSBOOK_HOME_RESOURCE_ID
+
+
+
 def _combine(**kw):
     """
     Combine two table dictionaries used in a join to produce a single dictionary
@@ -291,6 +275,10 @@
                 first = False
             else:
                 out.write(",\n")
+
+            if len(column.model.name) > ORACLE_TABLE_NAME_MAX:
+                raise SchemaBroken("Column name too long: %s" % (column.model.name,))
+
             typeName = column.model.type.name
             typeName = _translatedTypes.get(typeName, typeName)
             out.write('    "%s" %s' % (column.model.name, typeName))
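
Illustrative aside (not part of this commit): the reshuffled aliases above are
what let generic collection code target RESOURCE_NAME / HOME_RESOURCE_ID
without knowing the concrete bind table. A minimal sketch in the Select style
used elsewhere in this changeset (namesForHome is hypothetical):

  from twext.enterprise.dal.syntax import Select

  def namesForHome(txn, bindTable, homeID):
      # bindTable may be schema.CALENDAR_BIND, schema.SHARED_ADDRESSBOOK_BIND
      # or schema.SHARED_GROUP_BIND; each exposes the same aliased columns.
      return Select(
          [bindTable.RESOURCE_NAME],
          From=bindTable,
          Where=(bindTable.HOME_RESOURCE_ID == homeID),
      ).on(txn)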

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/test/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/test/util.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/test/util.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -726,7 +726,7 @@
         return "/%s/%s/%s/" % (prefix, self.hostname, id)
 
 
-    def send(self, prefix, id):
+    def send(self, prefix, id, txn):
         self.history.append(self.pushKeyForId(prefix, id))
 
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/test/test_upgrade.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/test/test_upgrade.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/test/test_upgrade.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -25,8 +25,8 @@
 from twisted.trial.unittest import TestCase
 from txdav.common.datastore.sql_dump import dumpSchema
 from txdav.common.datastore.test.util import theStoreBuilder, StubNotifierFactory
-from txdav.common.datastore.upgrade.sql.upgrade import UpgradeDatabaseSchemaStep, \
-    UpgradeDatabaseAddressBookDataStep, UpgradeDatabaseCalendarDataStep
+from txdav.common.datastore.upgrade.sql.upgrade import (
+    UpgradeDatabaseSchemaStep, UpgradeDatabaseAddressBookDataStep, UpgradeDatabaseCalendarDataStep, NotAllowedToUpgrade)
 import re
 
 class SchemaUpgradeTests(TestCase):
@@ -215,12 +215,12 @@
         old_version = yield _loadVersion()
         try:
             yield upgrader.databaseUpgrade()
-        except RuntimeError:
+        except NotAllowedToUpgrade:
             pass
         except Exception:
-            self.fail("RuntimeError not raised")
+            self.fail("NotAllowedToUpgrade not raised")
         else:
-            self.fail("RuntimeError not raised")
+            self.fail("NotAllowedToUpgrade not raised")
         new_version = yield _loadVersion()
         yield _unloadOldSchema()
 

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrade.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrade.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrade.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -74,11 +74,15 @@
         yield sqlTxn.commit()
 
 
-    def stepWithFailure(self, failure):
-        return self.stepWithResult(None)
 
+class NotAllowedToUpgrade(Exception):
+    """
+    Exception indicating an upgrade is needed but we're not configured to
+    perform it.
+    """
 
 
+
 class UpgradeDatabaseCoreStep(object):
     """
     Base class for either schema or data upgrades on the database.
@@ -136,8 +140,7 @@
             self.log.error(msg)
             raise RuntimeError(msg)
         elif self.failIfUpgradeNeeded:
-                # TODO: change this exception to be upgrade-specific
-            raise RuntimeError("Database upgrade is needed but not allowed.")
+            raise NotAllowedToUpgrade()
         else:
             self.sqlStore.setUpgrading(True)
             yield self.upgradeVersion(actual_version, required_version, dialect)
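
Illustrative aside (not part of this commit): with the named exception, callers
that set failIfUpgradeNeeded can tell "upgrade suppressed by configuration"
apart from a real failure, as the updated test earlier in this changeset does.
A sketch of the calling pattern (maybeUpgrade is hypothetical):

  from twisted.internet.defer import inlineCallbacks
  from txdav.common.datastore.upgrade.sql.upgrade import NotAllowedToUpgrade

  @inlineCallbacks
  def maybeUpgrade(upgrader):
      try:
          yield upgrader.databaseUpgrade()
      except NotAllowedToUpgrade:
          # Schema is behind, but this node is configured not to upgrade it;
          # leave the database alone and let a node that is allowed do the work.
          pass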

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/addressbook_upgrade_from_1_to_2.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/addressbook_upgrade_from_1_to_2.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/addressbook_upgrade_from_1_to_2.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -22,7 +22,8 @@
 from txdav.base.propertystore.base import PropertyName
 from txdav.common.datastore.sql_tables import _ABO_KIND_GROUP, schema
 from txdav.common.datastore.upgrade.sql.upgrades.util import updateAddressBookDataVersion, \
-    doToEachHomeNotAtVersion, removeProperty, cleanPropertyStore
+    doToEachHomeNotAtVersion, removeProperty, cleanPropertyStore, \
+    logUpgradeStatus
 from txdav.xml import element
 
 """
@@ -73,14 +74,20 @@
                 #update rest
                 yield abObject.setComponent(component)
 
+    logUpgradeStatus("Starting Addressbook Populate Members")
+
     # Do this to each calendar home not already at version 2
-    yield doToEachHomeNotAtVersion(sqlStore, schema.ADDRESSBOOK_HOME, UPGRADE_TO_VERSION, doIt)
+    yield doToEachHomeNotAtVersion(sqlStore, schema.ADDRESSBOOK_HOME, UPGRADE_TO_VERSION, doIt, "Populate Members")
 
 
 
 @inlineCallbacks
 def removeResourceType(sqlStore):
+    logUpgradeStatus("Starting Addressbook Remove Resource Type")
+
     sqlTxn = sqlStore.newTransaction()
     yield removeProperty(sqlTxn, PropertyName.fromElement(element.ResourceType))
     yield sqlTxn.commit()
     yield cleanPropertyStore()
+
+    logUpgradeStatus("End Addressbook Remove Resource Type")

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_1_to_2.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_1_to_2.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_1_to_2.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -16,12 +16,16 @@
 ##
 
 from twext.enterprise.dal.syntax import Update
-from txdav.xml.parser import WebDAVDocument
+
 from twisted.internet.defer import inlineCallbacks
+
 from twistedcaldav import caldavxml
+
 from txdav.common.datastore.sql_tables import schema
 from txdav.common.datastore.upgrade.sql.upgrades.util import rowsForProperty,\
-    removeProperty, updateCalendarDataVersion, doToEachHomeNotAtVersion
+    removeProperty, updateCalendarDataVersion, doToEachHomeNotAtVersion, \
+    logUpgradeStatus, logUpgradeError
+from txdav.xml.parser import WebDAVDocument
 
 """
 Calendar data upgrade from database version 1 to 2
@@ -50,9 +54,14 @@
     extracting the new format value from the XML property.
     """
 
+    logUpgradeStatus("Starting Move supported-component-set")
+
     sqlTxn = sqlStore.newTransaction()
     try:
+        calendar_rid = None
         rows = (yield rowsForProperty(sqlTxn, caldavxml.SupportedCalendarComponentSet))
+        total = len(rows)
+        count = 0
         for calendar_rid, value in rows:
             prop = WebDAVDocument.fromString(value).root_element
             supported_components = ",".join(sorted([comp.attributes["name"].upper() for comp in prop.children]))
@@ -63,11 +72,19 @@
                 },
                 Where=(meta.RESOURCE_ID == calendar_rid)
             ).on(sqlTxn)
+            count += 1
+            logUpgradeStatus("Move supported-component-set", count, total)
 
         yield removeProperty(sqlTxn, caldavxml.SupportedCalendarComponentSet)
         yield sqlTxn.commit()
+
+        logUpgradeStatus("End Move supported-component-set")
     except RuntimeError:
         yield sqlTxn.abort()
+        logUpgradeError(
+            "Move supported-component-set",
+            "Last calendar: {}".format(calendar_rid)
+        )
         raise
 
 
@@ -86,5 +103,7 @@
         home = yield txn.calendarHomeWithResourceID(homeResourceID)
         yield home.splitCalendars()
 
+    logUpgradeStatus("Starting Split Calendars")
+
     # Do this to each calendar home not already at version 2
-    yield doToEachHomeNotAtVersion(sqlStore, schema.CALENDAR_HOME, UPGRADE_TO_VERSION, doIt)
+    yield doToEachHomeNotAtVersion(sqlStore, schema.CALENDAR_HOME, UPGRADE_TO_VERSION, doIt, "Split Calendars")

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_3_to_4.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_3_to_4.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_3_to_4.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -15,19 +15,17 @@
 # limitations under the License.
 ##
 
-from twext.enterprise.dal.syntax import Select, Delete, Parameter
-
 from twisted.internet.defer import inlineCallbacks
 
 from twistedcaldav import caldavxml, customxml
 
 from txdav.base.propertystore.base import PropertyName
-from txdav.common.datastore.sql_tables import schema, _BIND_MODE_OWN
-from txdav.common.datastore.upgrade.sql.upgrades.util import rowsForProperty, updateCalendarDataVersion, \
-    updateAllCalendarHomeDataVersions, removeProperty, cleanPropertyStore
-from txdav.xml.parser import WebDAVDocument
+from txdav.caldav.icalendarstore import InvalidDefaultCalendar
+from txdav.common.datastore.sql_tables import schema
+from txdav.common.datastore.upgrade.sql.upgrades.util import updateCalendarDataVersion, \
+    removeProperty, cleanPropertyStore, logUpgradeStatus, doToEachHomeNotAtVersion
 from txdav.xml import element
-from twisted.python.failure import Failure
+from twistedcaldav.config import config
 
 """
 Data upgrade from database version 3 to 4
@@ -41,165 +39,111 @@
     """
     Do the required upgrade steps.
     """
-    yield moveDefaultCalendarProperties(sqlStore)
-    yield moveCalendarTranspProperties(sqlStore)
-    yield moveDefaultAlarmProperties(sqlStore)
-    yield removeResourceType(sqlStore)
+    yield updateCalendarHomes(sqlStore, config.UpgradeHomePrefix)
 
-    # Always bump the DB value
-    yield updateCalendarDataVersion(sqlStore, UPGRADE_TO_VERSION)
-    yield updateAllCalendarHomeDataVersions(sqlStore, UPGRADE_TO_VERSION)
+    # Don't do the remaining upgrade steps if we are only processing a subset of the homes
+    if not config.UpgradeHomePrefix:
+        yield removeResourceType(sqlStore)
 
+        # Always bump the DB value
+        yield updateCalendarDataVersion(sqlStore, UPGRADE_TO_VERSION)
 
 
+
 @inlineCallbacks
-def moveDefaultCalendarProperties(sqlStore):
+def updateCalendarHomes(sqlStore, prefix=None):
     """
-    Need to move all the CalDAV:default-calendar and CS:default-tasks properties in the
-    RESOURCE_PROPERTY table to the new CALENDAR_HOME_METADATA table columns, extracting
-    the new value from the XML property.
+    For each calendar home, update the associated properties on the home or its owned calendars.
     """
 
-    meta = schema.CALENDAR_HOME_METADATA
-    yield _processDefaultCalendarProperty(sqlStore, caldavxml.ScheduleDefaultCalendarURL, meta.DEFAULT_EVENTS)
-    yield _processDefaultCalendarProperty(sqlStore, customxml.ScheduleDefaultTasksURL, meta.DEFAULT_TASKS)
+    yield doToEachHomeNotAtVersion(sqlStore, schema.CALENDAR_HOME, UPGRADE_TO_VERSION, updateCalendarHome, "Update Calendar Home", filterOwnerUID=prefix)
 
 
 
 @inlineCallbacks
-def _processDefaultCalendarProperty(sqlStore, propname, colname):
+def updateCalendarHome(txn, homeResourceID):
     """
-    Move the specified property value to the matching CALENDAR_HOME_METADATA table column.
-
-    Since the number of calendar homes may well be large, we need to do this in batches.
+    For this calendar home, update the associated properties on the home or its owned calendars.
     """
 
-    cb = schema.CALENDAR_BIND
-    rp = schema.RESOURCE_PROPERTY
+    home = yield txn.calendarHomeWithResourceID(homeResourceID)
+    yield moveDefaultCalendarProperties(home)
+    yield moveCalendarTranspProperties(home)
+    yield moveDefaultAlarmProperties(home)
+    yield cleanPropertyStore()
 
-    try:
-        while True:
-            sqlTxn = sqlStore.newTransaction()
-            rows = (yield rowsForProperty(sqlTxn, propname, batch=BATCH_SIZE))
-            if len(rows) == 0:
-                yield sqlTxn.commit()
-                break
-            delete_ids = []
-            for inbox_rid, value in rows:
-                delete_ids.append(inbox_rid)
-                ids = yield Select(
-                    [cb.CALENDAR_HOME_RESOURCE_ID, ],
-                    From=cb,
-                    Where=cb.CALENDAR_RESOURCE_ID == inbox_rid,
-                ).on(sqlTxn)
-                if len(ids) > 0:
 
-                    calendarHome = (yield sqlTxn.calendarHomeWithResourceID(ids[0][0]))
-                    if calendarHome is not None:
 
-                        prop = WebDAVDocument.fromString(value).root_element
-                        defaultCalendar = str(prop.children[0])
-                        parts = defaultCalendar.split("/")
-                        if len(parts) == 5:
+ at inlineCallbacks
+def moveDefaultCalendarProperties(home):
+    """
+    Need to move all the CalDAV:default-calendar and CS:default-tasks properties in the
+    RESOURCE_PROPERTY table to the new CALENDAR_HOME_METADATA table columns, extracting
+    the new value from the XML property.
+    """
 
-                            calendarName = parts[-1]
-                            calendarHomeUID = parts[-2]
-                            expectedHome = (yield sqlTxn.calendarHomeWithUID(calendarHomeUID))
-                            if expectedHome is not None and expectedHome.id() == calendarHome.id():
+    yield _processDefaultCalendarProperty(home, caldavxml.ScheduleDefaultCalendarURL)
+    yield _processDefaultCalendarProperty(home, customxml.ScheduleDefaultTasksURL)
 
-                                calendar = (yield calendarHome.calendarWithName(calendarName))
-                                if calendar is not None:
-                                    yield calendarHome.setDefaultCalendar(
-                                        calendar, tasks=(propname == customxml.ScheduleDefaultTasksURL)
-                                    )
 
-            # Always delete the rows so that batch processing works correctly
-            yield Delete(
-                From=rp,
-                Where=(rp.RESOURCE_ID.In(Parameter("ids", len(delete_ids)))).And
-                      (rp.NAME == PropertyName.fromElement(propname).toString()),
-            ).on(sqlTxn, ids=delete_ids)
 
-            yield sqlTxn.commit()
+ at inlineCallbacks
+def _processDefaultCalendarProperty(home, propname):
+    """
+    Move the specified property value to the matching CALENDAR_HOME_METADATA table column.
+    """
 
-        yield cleanPropertyStore()
+    inbox = (yield home.calendarWithName("inbox"))
+    prop = inbox.properties().get(PropertyName.fromElement(propname))
+    if prop is not None:
+        defaultCalendar = str(prop.children[0])
+        parts = defaultCalendar.split("/")
+        if len(parts) == 5:
 
-    except RuntimeError:
-        f = Failure()
-        yield sqlTxn.abort()
-        f.raiseException()
+            calendarName = parts[-1]
+            calendarHomeUID = parts[-2]
+            if calendarHomeUID == home.uid():
 
+                calendar = (yield home.calendarWithName(calendarName))
+                if calendar is not None:
+                    try:
+                        yield home.setDefaultCalendar(
+                            calendar, tasks=(propname == customxml.ScheduleDefaultTasksURL)
+                        )
+                    except InvalidDefaultCalendar:
+                        # Ignore these - the server will recover
+                        pass
 
+        del inbox.properties()[PropertyName.fromElement(propname)]
 
+
+
 @inlineCallbacks
-def moveCalendarTranspProperties(sqlStore):
+def moveCalendarTranspProperties(home):
     """
     Need to move all the CalDAV:schedule-calendar-transp properties in the
     RESOURCE_PROPERTY table to the new CALENDAR_BIND table columns, extracting
     the new value from the XML property.
     """
 
-    cb = schema.CALENDAR_BIND
-    rp = schema.RESOURCE_PROPERTY
+    # Iterate over each calendar (both owned and shared)
+    calendars = (yield home.loadChildren())
+    for calendar in calendars:
+        if calendar.isInbox():
+            continue
+        prop = calendar.properties().get(PropertyName.fromElement(caldavxml.ScheduleCalendarTransp))
+        if prop is not None:
+            yield calendar.setUsedForFreeBusy(prop == caldavxml.ScheduleCalendarTransp(caldavxml.Opaque()))
+            del calendar.properties()[PropertyName.fromElement(caldavxml.ScheduleCalendarTransp)]
+    inbox = (yield home.calendarWithName("inbox"))
+    prop = inbox.properties().get(PropertyName.fromElement(caldavxml.CalendarFreeBusySet))
+    if prop is not None:
+        del inbox.properties()[PropertyName.fromElement(caldavxml.CalendarFreeBusySet)]
 
-    try:
-        calendars_for_id = {}
-        while True:
-            sqlTxn = sqlStore.newTransaction()
-            rows = (yield rowsForProperty(sqlTxn, caldavxml.ScheduleCalendarTransp, with_uid=True, batch=BATCH_SIZE))
-            if len(rows) == 0:
-                yield sqlTxn.commit()
-                break
-            delete_ids = []
-            for calendar_rid, value, viewer in rows:
-                delete_ids.append(calendar_rid)
-                if calendar_rid not in calendars_for_id:
-                    ids = yield Select(
-                        [cb.CALENDAR_HOME_RESOURCE_ID, cb.BIND_MODE, ],
-                        From=cb,
-                        Where=cb.CALENDAR_RESOURCE_ID == calendar_rid,
-                    ).on(sqlTxn)
-                    calendars_for_id[calendar_rid] = ids
 
-                if viewer:
-                    calendarHome = (yield sqlTxn.calendarHomeWithUID(viewer))
-                else:
-                    calendarHome = None
-                    for row in calendars_for_id[calendar_rid]:
-                        home_id, bind_mode = row
-                        if bind_mode == _BIND_MODE_OWN:
-                            calendarHome = (yield sqlTxn.calendarHomeWithResourceID(home_id))
-                            break
 
-                if calendarHome is not None:
-                    prop = WebDAVDocument.fromString(value).root_element
-                    calendar = (yield calendarHome.childWithID(calendar_rid))
-                    if calendar is not None:
-                        yield calendar.setUsedForFreeBusy(prop == caldavxml.ScheduleCalendarTransp(caldavxml.Opaque()))
-
-            # Always delete the rows so that batch processing works correctly
-            yield Delete(
-                From=rp,
-                Where=(rp.RESOURCE_ID.In(Parameter("ids", len(delete_ids)))).And
-                      (rp.NAME == PropertyName.fromElement(caldavxml.ScheduleCalendarTransp).toString()),
-            ).on(sqlTxn, ids=delete_ids)
-
-            yield sqlTxn.commit()
-
-        sqlTxn = sqlStore.newTransaction()
-        yield removeProperty(sqlTxn, PropertyName.fromElement(caldavxml.CalendarFreeBusySet))
-        yield sqlTxn.commit()
-        yield cleanPropertyStore()
-
-    except RuntimeError:
-        f = Failure()
-        yield sqlTxn.abort()
-        f.raiseException()
-
-
-
 @inlineCallbacks
-def moveDefaultAlarmProperties(sqlStore):
+def moveDefaultAlarmProperties(home):
     """
     Need to move all the CalDAV default alarm properties in the RESOURCE_PROPERTY
     table to the new CALENDAR_HOME_METADATA and CALENDAR_BIND table columns, extracting
@@ -207,25 +151,25 @@
     """
 
     yield _processDefaultAlarmProperty(
-        sqlStore,
+        home,
         caldavxml.DefaultAlarmVEventDateTime,
         True,
         True,
     )
     yield _processDefaultAlarmProperty(
-        sqlStore,
+        home,
         caldavxml.DefaultAlarmVEventDate,
         True,
         False,
     )
     yield _processDefaultAlarmProperty(
-        sqlStore,
+        home,
         caldavxml.DefaultAlarmVToDoDateTime,
         False,
         True,
     )
     yield _processDefaultAlarmProperty(
-        sqlStore,
+        home,
         caldavxml.DefaultAlarmVToDoDate,
         False,
         False,
@@ -234,90 +178,40 @@
 
 
 @inlineCallbacks
-def _processDefaultAlarmProperty(sqlStore, propname, vevent, timed):
+def _processDefaultAlarmProperty(home, propname, vevent, timed):
     """
     Move the specified property value to the matching CALENDAR_HOME_METADATA or CALENDAR_BIND table column.
 
     Since the number of properties may well be large, we need to do this in batches.
     """
 
-    hm = schema.CALENDAR_HOME_METADATA
-    cb = schema.CALENDAR_BIND
-    rp = schema.RESOURCE_PROPERTY
+    # Check the home first
+    prop = home.properties().get(PropertyName.fromElement(propname))
+    if prop is not None:
+        alarm = str(prop.children[0]) if prop.children else None
+        yield home.setDefaultAlarm(alarm, vevent, timed)
+        del home.properties()[PropertyName.fromElement(propname)]
 
-    try:
-        calendars_for_id = {}
-        while True:
-            sqlTxn = sqlStore.newTransaction()
-            rows = (yield rowsForProperty(sqlTxn, propname, with_uid=True, batch=BATCH_SIZE))
-            if len(rows) == 0:
-                yield sqlTxn.commit()
-                break
-            delete_ids = []
-            for rid, value, viewer in rows:
-                delete_ids.append(rid)
+    # Now process each child calendar
+    calendars = (yield home.loadChildren())
+    for calendar in calendars:
+        if calendar.isInbox():
+            continue
+        prop = calendar.properties().get(PropertyName.fromElement(propname))
+        if prop is not None:
+            alarm = str(prop.children[0]) if prop.children else None
+            yield calendar.setDefaultAlarm(alarm, vevent, timed)
+            del calendar.properties()[PropertyName.fromElement(propname)]
 
-                prop = WebDAVDocument.fromString(value).root_element
-                alarm = str(prop.children[0]) if prop.children else None
 
-                # First check if the rid is a home - this is the most common case
-                ids = yield Select(
-                    [hm.RESOURCE_ID, ],
-                    From=hm,
-                    Where=hm.RESOURCE_ID == rid,
-                ).on(sqlTxn)
 
-                if len(ids) > 0:
-                    # Home object
-                    calendarHome = (yield sqlTxn.calendarHomeWithResourceID(ids[0][0]))
-                    if calendarHome is not None:
-                        yield calendarHome.setDefaultAlarm(alarm, vevent, timed)
-                else:
-                    # rid is a calendar - we need to find the per-user calendar for the resource viewer
-                    if rid not in calendars_for_id:
-                        ids = yield Select(
-                            [cb.CALENDAR_HOME_RESOURCE_ID, cb.BIND_MODE, ],
-                            From=cb,
-                            Where=cb.CALENDAR_RESOURCE_ID == rid,
-                        ).on(sqlTxn)
-                        calendars_for_id[rid] = ids
-
-                    if viewer:
-                        calendarHome = (yield sqlTxn.calendarHomeWithUID(viewer))
-                    else:
-                        calendarHome = None
-                        for row in calendars_for_id[rid]:
-                            home_id, bind_mode = row
-                            if bind_mode == _BIND_MODE_OWN:
-                                calendarHome = (yield sqlTxn.calendarHomeWithResourceID(home_id))
-                                break
-
-                    if calendarHome is not None:
-                        calendar = yield calendarHome.childWithID(rid)
-                        if calendar is not None:
-                            yield calendar.setDefaultAlarm(alarm, vevent, timed)
-
-            # Always delete the rows so that batch processing works correctly
-            yield Delete(
-                From=rp,
-                Where=(rp.RESOURCE_ID.In(Parameter("ids", len(delete_ids)))).And
-                      (rp.NAME == PropertyName.fromElement(propname).toString()),
-            ).on(sqlTxn, ids=delete_ids)
-
-            yield sqlTxn.commit()
-
-        yield cleanPropertyStore()
-
-    except RuntimeError:
-        f = Failure()
-        yield sqlTxn.abort()
-        f.raiseException()
-
-
-
 @inlineCallbacks
 def removeResourceType(sqlStore):
+    logUpgradeStatus("Starting Calendar Remove Resource Type")
+
     sqlTxn = sqlStore.newTransaction()
     yield removeProperty(sqlTxn, PropertyName.fromElement(element.ResourceType))
     yield sqlTxn.commit()
     yield cleanPropertyStore()
+
+    logUpgradeStatus("End Calendar Remove Resource Type")

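As an aside on the new per-home code path: _processDefaultCalendarProperty() only
honours the old inbox property when the href has exactly five "/"-separated parts,
i.e. "/calendars/__uids__/<home uid>/<calendar name>". A minimal standalone sketch
of that convention (the helper name and sample href are illustrative only, not part
of this change):

    def parseDefaultCalendarHref(href):
        # Expected shape: /calendars/__uids__/<home uid>/<calendar name>,
        # which splits into exactly five components.
        parts = href.split("/")
        if len(parts) != 5:
            return None
        return parts[-2], parts[-1]  # (home uid, calendar name)

    assert parseDefaultCalendarHref(
        "/calendars/__uids__/user01/calendar_1"
    ) == ("user01", "calendar_1")
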
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_4_to_5.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_4_to_5.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_4_to_5.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -15,21 +15,18 @@
 # limitations under the License.
 ##
 
-from twext.enterprise.dal.syntax import Select, Delete, Parameter
+from twext.web2.dav.resource import TwistedQuotaUsedProperty, TwistedGETContentMD5
 
 from twisted.internet.defer import inlineCallbacks
-from twisted.python.failure import Failure
 
 from twistedcaldav import caldavxml, customxml
+from twistedcaldav.config import config
 
 from txdav.base.propertystore.base import PropertyName
-from txdav.common.datastore.sql_tables import schema, _BIND_MODE_OWN
-from txdav.common.datastore.upgrade.sql.upgrades.util import rowsForProperty, updateCalendarDataVersion, \
-    updateAllCalendarHomeDataVersions, removeProperty, cleanPropertyStore
+from txdav.common.datastore.sql_tables import schema
+from txdav.common.datastore.upgrade.sql.upgrades.util import updateCalendarDataVersion, \
+    removeProperty, cleanPropertyStore, logUpgradeStatus, doToEachHomeNotAtVersion
 from txdav.xml import element
-from txdav.xml.parser import WebDAVDocument
-from twext.web2.dav.resource import TwistedQuotaUsedProperty, \
-    TwistedGETContentMD5
 
 """
 Data upgrade from database version 4 to 5
@@ -43,136 +40,75 @@
     """
     Do the required upgrade steps.
     """
-    yield moveCalendarTimezoneProperties(sqlStore)
-    yield moveCalendarAvailabilityProperties(sqlStore)
-    yield removeOtherProperties(sqlStore)
+    yield updateCalendarHomes(sqlStore, config.UpgradeHomePrefix)
 
-    # Always bump the DB value
-    yield updateCalendarDataVersion(sqlStore, UPGRADE_TO_VERSION)
-    yield updateAllCalendarHomeDataVersions(sqlStore, UPGRADE_TO_VERSION)
+    # Don't do the remaining upgrade steps if we are only processing a subset of the homes
+    if not config.UpgradeHomePrefix:
+        yield removeOtherProperties(sqlStore)
 
+        # Always bump the DB value
+        yield updateCalendarDataVersion(sqlStore, UPGRADE_TO_VERSION)
 
 
+
 @inlineCallbacks
-def moveCalendarTimezoneProperties(sqlStore):
+def updateCalendarHomes(sqlStore, prefix=None):
     """
-    Need to move all the CalDAV:calendar-timezone properties in the
-    RESOURCE_PROPERTY table to the new CALENDAR_BIND table columns, extracting
-    the new value from the XML property.
+    For each calendar home, update the associated properties on the home or its owned calendars.
     """
 
-    cb = schema.CALENDAR_BIND
-    rp = schema.RESOURCE_PROPERTY
+    yield doToEachHomeNotAtVersion(sqlStore, schema.CALENDAR_HOME, UPGRADE_TO_VERSION, updateCalendarHome, "Update Calendar Home", filterOwnerUID=prefix)
 
-    try:
-        calendars_for_id = {}
-        while True:
-            sqlTxn = sqlStore.newTransaction()
-            rows = (yield rowsForProperty(sqlTxn, caldavxml.CalendarTimeZone, with_uid=True, batch=BATCH_SIZE))
-            if len(rows) == 0:
-                yield sqlTxn.commit()
-                break
-            delete_ids = []
-            for calendar_rid, value, viewer in rows:
-                delete_ids.append(calendar_rid)
-                if calendar_rid not in calendars_for_id:
-                    ids = yield Select(
-                        [cb.CALENDAR_HOME_RESOURCE_ID, cb.BIND_MODE, ],
-                        From=cb,
-                        Where=cb.CALENDAR_RESOURCE_ID == calendar_rid,
-                    ).on(sqlTxn)
-                    calendars_for_id[calendar_rid] = ids
 
-                if viewer:
-                    calendarHome = (yield sqlTxn.calendarHomeWithUID(viewer))
-                else:
-                    calendarHome = None
-                    for row in calendars_for_id[calendar_rid]:
-                        home_id, bind_mode = row
-                        if bind_mode == _BIND_MODE_OWN:
-                            calendarHome = (yield sqlTxn.calendarHomeWithResourceID(home_id))
-                            break
 
-                if calendarHome is not None:
-                    prop = WebDAVDocument.fromString(value).root_element
-                    calendar = (yield calendarHome.childWithID(calendar_rid))
-                    if calendar is not None:
-                        yield calendar.setTimezone(prop.calendar())
+@inlineCallbacks
+def updateCalendarHome(txn, homeResourceID):
+    """
+    For this calendar home, update the associated properties on the home or its owned calendars.
+    """
 
-            # Always delete the rows so that batch processing works correctly
-            yield Delete(
-                From=rp,
-                Where=(rp.RESOURCE_ID.In(Parameter("ids", len(delete_ids)))).And
-                      (rp.NAME == PropertyName.fromElement(caldavxml.CalendarTimeZone).toString()),
-            ).on(sqlTxn, ids=delete_ids)
+    home = yield txn.calendarHomeWithResourceID(homeResourceID)
+    yield moveCalendarTimezoneProperties(home)
+    yield moveCalendarAvailabilityProperties(home)
+    yield cleanPropertyStore()
 
-            yield sqlTxn.commit()
 
-        yield cleanPropertyStore()
 
-    except RuntimeError:
-        f = Failure()
-        yield sqlTxn.abort()
-        f.raiseException()
+@inlineCallbacks
+def moveCalendarTimezoneProperties(home):
+    """
+    Need to move all the CalDAV:calendar-timezone properties in the
+    RESOURCE_PROPERTY table to the new CALENDAR_BIND table columns, extracting
+    the new value from the XML property.
+    """
 
+    # Iterate over each calendar (both owned and shared)
+    calendars = (yield home.loadChildren())
+    for calendar in calendars:
+        if calendar.isInbox():
+            continue
+        prop = calendar.properties().get(PropertyName.fromElement(caldavxml.CalendarTimeZone))
+        if prop is not None:
+            yield calendar.setTimezone(prop.calendar())
+            del calendar.properties()[PropertyName.fromElement(caldavxml.CalendarTimeZone)]
 
 
+
 @inlineCallbacks
-def moveCalendarAvailabilityProperties(sqlStore):
+def moveCalendarAvailabilityProperties(home):
     """
     Need to move all the CS:calendar-availability properties in the
     RESOURCE_PROPERTY table to the new CALENDAR_BIND table columns, extracting
     the new value from the XML property.
     """
+    inbox = (yield home.calendarWithName("inbox"))
+    prop = inbox.properties().get(PropertyName.fromElement(customxml.CalendarAvailability))
+    if prop is not None:
+        yield home.setAvailability(prop.calendar())
+        del inbox.properties()[PropertyName.fromElement(customxml.CalendarAvailability)]
 
-    cb = schema.CALENDAR_BIND
-    rp = schema.RESOURCE_PROPERTY
 
-    try:
-        while True:
-            sqlTxn = sqlStore.newTransaction()
-            rows = (yield rowsForProperty(sqlTxn, customxml.CalendarAvailability, batch=BATCH_SIZE))
-            if len(rows) == 0:
-                yield sqlTxn.commit()
-                break
 
-            # Map each calendar to a home id using a single query for efficiency
-            calendar_ids = [row[0] for row in rows]
-
-            home_map = yield Select(
-                [cb.CALENDAR_RESOURCE_ID, cb.CALENDAR_HOME_RESOURCE_ID, ],
-                From=cb,
-                Where=(cb.CALENDAR_RESOURCE_ID.In(Parameter("ids", len(calendar_ids)))).And(cb.BIND_MODE == _BIND_MODE_OWN),
-            ).on(sqlTxn, ids=calendar_ids)
-            calendar_to_home = dict(home_map)
-
-            # Move property to each home
-            for calendar_rid, value in rows:
-                if calendar_rid in calendar_to_home:
-                    calendarHome = (yield sqlTxn.calendarHomeWithResourceID(calendar_to_home[calendar_rid]))
-
-                    if calendarHome is not None:
-                        prop = WebDAVDocument.fromString(value).root_element
-                        yield calendarHome.setAvailability(prop.calendar())
-
-            # Always delete the rows so that batch processing works correctly
-            yield Delete(
-                From=rp,
-                Where=(rp.RESOURCE_ID.In(Parameter("ids", len(calendar_ids)))).And
-                      (rp.NAME == PropertyName.fromElement(customxml.CalendarAvailability).toString()),
-            ).on(sqlTxn, ids=calendar_ids)
-
-            yield sqlTxn.commit()
-
-        yield cleanPropertyStore()
-
-    except RuntimeError:
-        f = Failure()
-        yield sqlTxn.abort()
-        f.raiseException()
-
-
-
 @inlineCallbacks
 def removeOtherProperties(sqlStore):
     """
@@ -190,6 +126,8 @@
     {http://twistedmatrix.com/xml_namespace/dav/}schedule-auto-respond
 
     """
+    logUpgradeStatus("Starting Calendar Remove Other Properties")
+
     sqlTxn = sqlStore.newTransaction()
 
     yield removeProperty(sqlTxn, PropertyName.fromElement(element.ACL))
@@ -205,3 +143,5 @@
 
     yield sqlTxn.commit()
     yield cleanPropertyStore()
+
+    logUpgradeStatus("End Calendar Remove Other Properties")

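All of the move* functions in this file and in calendar_upgrade_from_3_to_4.py now
share one per-collection shape: read the dead property, push its value through the
new column-backed store API, then delete the property so its RESOURCE_PROPERTY row
goes away. A sketch of that shared pattern (the helper itself is hypothetical, not
part of this change):

    from twisted.internet.defer import inlineCallbacks
    from txdav.base.propertystore.base import PropertyName

    @inlineCallbacks
    def _moveProperty(collection, propelement, apply):
        # Read the old dead property, if present ...
        pname = PropertyName.fromElement(propelement)
        prop = collection.properties().get(pname)
        if prop is not None:
            # ... apply it via the new store API (e.g. setTimezone,
            # setAvailability, setUsedForFreeBusy) ...
            yield apply(prop)
            # ... then remove it so the RESOURCE_PROPERTY row is dropped.
            del collection.properties()[pname]
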
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/test/test_upgrade_from_3_to_4.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/test/test_upgrade_from_3_to_4.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/test/test_upgrade_from_3_to_4.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -13,23 +13,27 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 ##
+
+from twext.enterprise.dal.syntax import Update, Insert
+
+from twistedcaldav import caldavxml
 from twistedcaldav.caldavxml import ScheduleDefaultCalendarURL, \
-    CalendarFreeBusySet, Opaque, ScheduleCalendarTransp
+    CalendarFreeBusySet, Opaque, ScheduleCalendarTransp, Transparent
+
 from txdav.base.propertystore.base import PropertyName
 from txdav.caldav.datastore.test.util import CommonStoreTests
+from txdav.common.datastore.sql_tables import _BIND_MODE_WRITE, schema
+from txdav.common.datastore.upgrade.sql.upgrades.calendar_upgrade_from_3_to_4 import updateCalendarHomes, \
+    doUpgrade
+from txdav.xml import element
 from txdav.xml.element import HRef
-from twext.enterprise.dal.syntax import Update, Insert
-from txdav.common.datastore.upgrade.sql.upgrades.calendar_upgrade_from_3_to_4 import moveDefaultCalendarProperties, \
-    moveCalendarTranspProperties, removeResourceType, moveDefaultAlarmProperties
-from txdav.xml import element
-from twistedcaldav import caldavxml
-from txdav.common.datastore.sql_tables import _BIND_MODE_WRITE, schema
+from twistedcaldav.config import config
 
 """
 Tests for L{txdav.common.datastore.upgrade.sql.upgrade}.
 """
 
-from twisted.internet.defer import inlineCallbacks
+from twisted.internet.defer import inlineCallbacks, returnValue
 
 class Upgrade_from_3_to_4(CommonStoreTests):
     """
@@ -37,7 +41,7 @@
     """
 
     @inlineCallbacks
-    def test_defaultCalendarUpgrade(self):
+    def _defaultCalendarUpgrade_setup(self):
 
         # Set dead property on inbox
         for user in ("user01", "user02",):
@@ -52,39 +56,132 @@
                 Where=chm.RESOURCE_ID == home._resourceID,
             ).on(self.transactionUnderTest())
 
-        # Force data version to previous
-        ch = home._homeSchema
-        yield Update(
-            {ch.DATAVERSION: 3},
-            Where=ch.RESOURCE_ID == home._resourceID,
-        ).on(self.transactionUnderTest())
+            # Force data version to previous
+            ch = home._homeSchema
+            yield Update(
+                {ch.DATAVERSION: 3},
+                Where=ch.RESOURCE_ID == home._resourceID,
+            ).on(self.transactionUnderTest())
 
         yield self.commit()
 
-        # Trigger upgrade
-        yield moveDefaultCalendarProperties(self._sqlCalendarStore)
 
+    @inlineCallbacks
+    def _defaultCalendarUpgrade_check(self, changed_users, unchanged_users):
+
         # Test results
-        for user in ("user01", "user02",):
+        for user in changed_users:
             home = (yield self.homeUnderTest(name=user))
+            version = (yield home.dataVersion())
+            self.assertEqual(version, 4)
             calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
             self.assertTrue(home.isDefaultCalendar(calendar))
             inbox = (yield self.calendarUnderTest(name="inbox", home=user))
             self.assertTrue(PropertyName.fromElement(ScheduleDefaultCalendarURL) not in inbox.properties())
 
+        for user in unchanged_users:
+            home = (yield self.homeUnderTest(name=user))
+            version = (yield home.dataVersion())
+            self.assertEqual(version, 3)
+            calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
+            self.assertFalse(home.isDefaultCalendar(calendar))
+            inbox = (yield self.calendarUnderTest(name="inbox", home=user))
+            self.assertTrue(PropertyName.fromElement(ScheduleDefaultCalendarURL) in inbox.properties())
 
+
     @inlineCallbacks
-    def test_calendarTranspUpgrade(self):
+    def test_defaultCalendarUpgrade(self):
+        yield self._defaultCalendarUpgrade_setup()
+        yield updateCalendarHomes(self._sqlCalendarStore)
+        yield self._defaultCalendarUpgrade_check(("user01", "user02",), ())
 
+
+    @inlineCallbacks
+    def test_partialDefaultCalendarUpgrade(self):
+        yield self._defaultCalendarUpgrade_setup()
+        yield updateCalendarHomes(self._sqlCalendarStore, "user01")
+        yield self._defaultCalendarUpgrade_check(("user01",), ("user02",))
+
+
+    @inlineCallbacks
+    def _invalidDefaultCalendarUpgrade_setup(self):
+
         # Set dead property on inbox
         for user in ("user01", "user02",):
             inbox = (yield self.calendarUnderTest(name="inbox", home=user))
+            inbox.properties()[PropertyName.fromElement(ScheduleDefaultCalendarURL)] = ScheduleDefaultCalendarURL(HRef.fromString("/calendars/__uids__/%s/tasks_1" % (user,)))
+
+            # Force current default to null
+            home = (yield self.homeUnderTest(name=user))
+            chm = home._homeMetaDataSchema
+            yield Update(
+                {chm.DEFAULT_EVENTS: None},
+                Where=chm.RESOURCE_ID == home._resourceID,
+            ).on(self.transactionUnderTest())
+
+            # Create tasks only calendar
+            tasks = (yield home.createCalendarWithName("tasks_1"))
+            yield tasks.setSupportedComponents("VTODO")
+
+            # Force data version to previous
+            ch = home._homeSchema
+            yield Update(
+                {ch.DATAVERSION: 3},
+                Where=ch.RESOURCE_ID == home._resourceID,
+            ).on(self.transactionUnderTest())
+
+        yield self.commit()
+
+
+    @inlineCallbacks
+    def _invalidDefaultCalendarUpgrade_check(self, changed_users, unchanged_users):
+
+        # Test results
+        for user in changed_users:
+            home = (yield self.homeUnderTest(name=user))
+            version = (yield home.dataVersion())
+            self.assertEqual(version, 4)
+            calendar = (yield self.calendarUnderTest(name="tasks_1", home=user))
+            self.assertFalse(home.isDefaultCalendar(calendar))
+            inbox = (yield self.calendarUnderTest(name="inbox", home=user))
+            self.assertTrue(PropertyName.fromElement(ScheduleDefaultCalendarURL) not in inbox.properties())
+
+        for user in unchanged_users:
+            home = (yield self.homeUnderTest(name=user))
+            version = (yield home.dataVersion())
+            self.assertEqual(version, 3)
+            calendar = (yield self.calendarUnderTest(name="tasks_1", home=user))
+            self.assertFalse(home.isDefaultCalendar(calendar))
+            inbox = (yield self.calendarUnderTest(name="inbox", home=user))
+            self.assertTrue(PropertyName.fromElement(ScheduleDefaultCalendarURL) in inbox.properties())
+
+
+    @inlineCallbacks
+    def test_invalidDefaultCalendarUpgrade(self):
+        yield self._invalidDefaultCalendarUpgrade_setup()
+        yield updateCalendarHomes(self._sqlCalendarStore)
+        yield self._invalidDefaultCalendarUpgrade_check(("user01", "user02",), ())
+
+
+    @inlineCallbacks
+    def test_partialInvalidDefaultCalendarUpgrade(self):
+        yield self._invalidDefaultCalendarUpgrade_setup()
+        yield updateCalendarHomes(self._sqlCalendarStore, "user01")
+        yield self._invalidDefaultCalendarUpgrade_check(("user01",), ("user02",))
+
+
+    @inlineCallbacks
+    def _calendarTranspUpgrade_setup(self):
+
+        # Set dead property on inbox
+        for user in ("user01", "user02",):
+            inbox = (yield self.calendarUnderTest(name="inbox", home=user))
             inbox.properties()[PropertyName.fromElement(CalendarFreeBusySet)] = CalendarFreeBusySet(HRef.fromString("/calendars/__uids__/%s/calendar_1" % (user,)))
 
             # Force current to transparent
             calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
             yield calendar.setUsedForFreeBusy(False)
-            calendar.properties()[PropertyName.fromElement(ScheduleCalendarTransp)] = ScheduleCalendarTransp(Opaque())
+            calendar.properties()[PropertyName.fromElement(ScheduleCalendarTransp)] = ScheduleCalendarTransp(Opaque() if user == "user01" else Transparent())
 
             # Force data version to previous
             home = (yield self.homeUnderTest(name=user))
@@ -118,21 +215,55 @@
         ).on(txn)
         yield self.commit()
 
-        # Trigger upgrade
-        yield moveCalendarTranspProperties(self._sqlCalendarStore)
 
+    @inlineCallbacks
+    def _calendarTranspUpgrade_check(self, changed_users, unchanged_users):
+
         # Test results
-        for user in ("user01", "user02",):
+        for user in changed_users:
             home = (yield self.homeUnderTest(name=user))
+            version = (yield home.dataVersion())
+            self.assertEqual(version, 4)
             calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
-            self.assertTrue(calendar.isUsedForFreeBusy())
+            if user == "user01":
+                self.assertTrue(calendar.isUsedForFreeBusy())
+            else:
+                self.assertFalse(calendar.isUsedForFreeBusy())
+            self.assertTrue(PropertyName.fromElement(caldavxml.ScheduleCalendarTransp) not in calendar.properties())
             inbox = (yield self.calendarUnderTest(name="inbox", home=user))
             self.assertTrue(PropertyName.fromElement(CalendarFreeBusySet) not in inbox.properties())
 
+        for user in unchanged_users:
+            home = (yield self.homeUnderTest(name=user))
+            version = (yield home.dataVersion())
+            self.assertEqual(version, 3)
+            calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
+            self.assertFalse(calendar.isUsedForFreeBusy())
+            self.assertTrue(PropertyName.fromElement(caldavxml.ScheduleCalendarTransp) in calendar.properties())
+            inbox = (yield self.calendarUnderTest(name="inbox", home=user))
+            self.assertTrue(PropertyName.fromElement(CalendarFreeBusySet) in inbox.properties())
 
+
     @inlineCallbacks
-    def test_defaultAlarmUpgrade(self):
+    def test_calendarTranspUpgrade(self):
+        yield self._calendarTranspUpgrade_setup()
+        yield updateCalendarHomes(self._sqlCalendarStore)
+        yield self._calendarTranspUpgrade_check(("user01", "user02",), ())
 
+
+    @inlineCallbacks
+    def test_partialCalendarTranspUpgrade(self):
+        yield self._calendarTranspUpgrade_setup()
+        yield updateCalendarHomes(self._sqlCalendarStore, "user01")
+        yield self._calendarTranspUpgrade_check(("user01",), ("user02",))
+
+
+    @inlineCallbacks
+    def _defaultAlarmUpgrade_setup(self):
+
         alarmhome1 = """BEGIN:VALARM
 ACTION:AUDIO
 TRIGGER;RELATED=START:-PT1M
@@ -236,13 +367,28 @@
         shared = yield self.calendarUnderTest(name=shared_name, home="user02")
         for _ignore_vevent, _ignore_timed, alarm, prop in detailsshared:
             shared.properties()[PropertyName.fromElement(prop)] = prop(alarm)
+
+        for user in ("user01", "user02",):
+            # Force data version to previous
+            home = (yield self.homeUnderTest(name=user))
+            ch = home._homeSchema
+            yield Update(
+                {ch.DATAVERSION: 3},
+                Where=ch.RESOURCE_ID == home._resourceID,
+            ).on(self.transactionUnderTest())
+
         yield self.commit()
 
-        # Trigger upgrade
-        yield moveDefaultAlarmProperties(self._sqlCalendarStore)
+        returnValue((detailshome, detailscalendar, detailsshared, shared_name,))
 
+
+    @inlineCallbacks
+    def _defaultAlarmUpgrade_check(self, changed_users, unchanged_users, detailshome, detailscalendar, detailsshared, shared_name):
+
         # Check each type of collection
         home = yield self.homeUnderTest(name="user01")
+        version = (yield home.dataVersion())
+        self.assertEqual(version, 4)
         for vevent, timed, alarm, prop in detailshome:
             alarm_result = (yield home.getDefaultAlarm(vevent, timed))
             self.assertEquals(alarm_result, alarm)
@@ -252,18 +398,67 @@
         for vevent, timed, alarm, prop in detailscalendar:
             alarm_result = (yield calendar.getDefaultAlarm(vevent, timed))
             self.assertEquals(alarm_result, alarm)
-            self.assertTrue(PropertyName.fromElement(prop) not in home.properties())
+            self.assertTrue(PropertyName.fromElement(prop) not in calendar.properties())
 
-        shared = yield self.calendarUnderTest(name=shared_name, home="user02")
-        for vevent, timed, alarm, prop in detailsshared:
-            alarm_result = (yield shared.getDefaultAlarm(vevent, timed))
-            self.assertEquals(alarm_result, alarm)
-            self.assertTrue(PropertyName.fromElement(prop) not in home.properties())
+        if "user02" in changed_users:
+            home = (yield self.homeUnderTest(name="user02"))
+            version = (yield home.dataVersion())
+            self.assertEqual(version, 4)
+            shared = yield self.calendarUnderTest(name=shared_name, home="user02")
+            for vevent, timed, alarm, prop in detailsshared:
+                alarm_result = (yield shared.getDefaultAlarm(vevent, timed))
+                self.assertEquals(alarm_result, alarm)
+                self.assertTrue(PropertyName.fromElement(prop) not in shared.properties())
+        else:
+            home = (yield self.homeUnderTest(name="user02"))
+            version = (yield home.dataVersion())
+            self.assertEqual(version, 3)
+            shared = yield self.calendarUnderTest(name=shared_name, home="user02")
+            for vevent, timed, alarm, prop in detailsshared:
+                alarm_result = (yield shared.getDefaultAlarm(vevent, timed))
+                self.assertEquals(alarm_result, None)
+                self.assertTrue(PropertyName.fromElement(prop) in shared.properties())
 
 
     @inlineCallbacks
-    def test_resourceTypeUpgrade(self):
+    def test_defaultAlarmUpgrade(self):
+        detailshome, detailscalendar, detailsshared, shared_name = (yield self._defaultAlarmUpgrade_setup())
+        yield updateCalendarHomes(self._sqlCalendarStore)
+        yield self._defaultAlarmUpgrade_check(("user01", "user02",), (), detailshome, detailscalendar, detailsshared, shared_name)
 
+
+    @inlineCallbacks
+    def test_partialDefaultAlarmUpgrade(self):
+        detailshome, detailscalendar, detailsshared, shared_name = (yield self._defaultAlarmUpgrade_setup())
+        yield updateCalendarHomes(self._sqlCalendarStore, "user01")
+        yield self._defaultAlarmUpgrade_check(("user01",), ("user02",), detailshome, detailscalendar, detailsshared, shared_name)
+
+
+    @inlineCallbacks
+    def test_combinedUpgrade(self):
+        yield self._defaultCalendarUpgrade_setup()
+        yield self._calendarTranspUpgrade_setup()
+        detailshome, detailscalendar, detailsshared, shared_name = (yield self._defaultAlarmUpgrade_setup())
+        yield updateCalendarHomes(self._sqlCalendarStore)
+        yield self._defaultCalendarUpgrade_check(("user01", "user02",), ())
+        yield self._calendarTranspUpgrade_check(("user01", "user02",), ())
+        yield self._defaultAlarmUpgrade_check(("user01", "user02",), (), detailshome, detailscalendar, detailsshared, shared_name)
+
+
+    @inlineCallbacks
+    def test_partialCombinedUpgrade(self):
+        yield self._defaultCalendarUpgrade_setup()
+        yield self._calendarTranspUpgrade_setup()
+        detailshome, detailscalendar, detailsshared, shared_name = (yield self._defaultAlarmUpgrade_setup())
+        yield updateCalendarHomes(self._sqlCalendarStore, "user01")
+        yield self._defaultCalendarUpgrade_check(("user01",), ("user02",))
+        yield self._calendarTranspUpgrade_check(("user01",), ("user02",))
+        yield self._defaultAlarmUpgrade_check(("user01",), ("user02",), detailshome, detailscalendar, detailsshared, shared_name)
+
+
+    @inlineCallbacks
+    def _resourceTypeUpgrade_setup(self):
+
         # Set dead property on calendar
         for user in ("user01", "user02",):
             calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
@@ -273,12 +468,60 @@
         for user in ("user01", "user02",):
             calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
             self.assertTrue(PropertyName.fromElement(element.ResourceType) in calendar.properties())
+
+        yield self.transactionUnderTest().updateCalendarserverValue("CALENDAR-DATAVERSION", "3")
+
         yield self.commit()
 
-        # Trigger upgrade
-        yield removeResourceType(self._sqlCalendarStore)
 
+    @inlineCallbacks
+    def _resourceTypeUpgrade_check(self, full=True):
+
         # Test results
-        for user in ("user01", "user02",):
-            calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
-            self.assertTrue(PropertyName.fromElement(element.ResourceType) not in calendar.properties())
+        if full:
+            for user in ("user01", "user02",):
+                calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
+                self.assertTrue(PropertyName.fromElement(element.ResourceType) not in calendar.properties())
+            version = yield self.transactionUnderTest().calendarserverValue("CALENDAR-DATAVERSION")
+            self.assertEqual(int(version), 4)
+        else:
+            for user in ("user01", "user02",):
+                calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
+                self.assertTrue(PropertyName.fromElement(element.ResourceType) in calendar.properties())
+            version = yield self.transactionUnderTest().calendarserverValue("CALENDAR-DATAVERSION")
+            self.assertEqual(int(version), 3)
+
+
+    @inlineCallbacks
+    def test_resourceTypeUpgrade(self):
+        yield self._resourceTypeUpgrade_setup()
+        yield doUpgrade(self._sqlCalendarStore)
+        yield self._resourceTypeUpgrade_check()
+
+
+    @inlineCallbacks
+    def test_fullUpgrade(self):
+        self.patch(config, "UpgradeHomePrefix", "")
+        yield self._defaultCalendarUpgrade_setup()
+        yield self._calendarTranspUpgrade_setup()
+        detailshome, detailscalendar, detailsshared, shared_name = (yield self._defaultAlarmUpgrade_setup())
+        yield self._resourceTypeUpgrade_setup()
+        yield doUpgrade(self._sqlCalendarStore)
+        yield self._defaultCalendarUpgrade_check(("user01", "user02",), ())
+        yield self._calendarTranspUpgrade_check(("user01", "user02",), ())
+        yield self._defaultAlarmUpgrade_check(("user01", "user02",), (), detailshome, detailscalendar, detailsshared, shared_name)
+        yield self._resourceTypeUpgrade_check()
+
+
+    @inlineCallbacks
+    def test_partialFullUpgrade(self):
+        self.patch(config, "UpgradeHomePrefix", "user01")
+        yield self._defaultCalendarUpgrade_setup()
+        yield self._calendarTranspUpgrade_setup()
+        yield self._resourceTypeUpgrade_setup()
+        detailshome, detailscalendar, detailsshared, shared_name = (yield self._defaultAlarmUpgrade_setup())
+        yield doUpgrade(self._sqlCalendarStore)
+        yield self._defaultCalendarUpgrade_check(("user01",), ("user02",))
+        yield self._calendarTranspUpgrade_check(("user01",), ("user02",))
+        yield self._defaultAlarmUpgrade_check(("user01",), ("user02",), detailshome, detailscalendar, detailsshared, shared_name)
+        yield self._resourceTypeUpgrade_check(False)

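The new *_check helpers repeat one assertion pair throughout: homes visited by the
upgrade are at data version 4, while homes excluded by the UID prefix remain at 3.
A condensed, hypothetical helper capturing that split (not part of this change):

    @inlineCallbacks
    def _checkHomeVersions(self, changed_users, unchanged_users, new=4, old=3):
        # Homes the upgrade processed must carry the new data version ...
        for user in changed_users:
            home = (yield self.homeUnderTest(name=user))
            self.assertEqual((yield home.dataVersion()), new)
        # ... while filtered-out homes must be untouched.
        for user in unchanged_users:
            home = (yield self.homeUnderTest(name=user))
            self.assertEqual((yield home.dataVersion()), old)
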
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/test/test_upgrade_from_4_to_5.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/test/test_upgrade_from_4_to_5.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/test/test_upgrade_from_4_to_5.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -13,21 +13,24 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 ##
-from twistedcaldav import caldavxml, customxml
-from txdav.common.datastore.upgrade.sql.upgrades.calendar_upgrade_from_4_to_5 import moveCalendarTimezoneProperties, \
-    removeOtherProperties, moveCalendarAvailabilityProperties
-from txdav.common.datastore.sql_tables import _BIND_MODE_WRITE, schema
-from txdav.xml import element
 
 """
 Tests for L{txdav.common.datastore.upgrade.sql.upgrade}.
 """
 
 from twext.enterprise.dal.syntax import Update, Insert
-from twisted.internet.defer import inlineCallbacks
+
+from twisted.internet.defer import inlineCallbacks, returnValue
+
+from twistedcaldav import caldavxml, customxml
+from twistedcaldav.config import config
 from twistedcaldav.ical import Component
+
 from txdav.base.propertystore.base import PropertyName
 from txdav.caldav.datastore.test.util import CommonStoreTests
+from txdav.common.datastore.sql_tables import _BIND_MODE_WRITE, schema
+from txdav.common.datastore.upgrade.sql.upgrades.calendar_upgrade_from_4_to_5 import updateCalendarHomes, doUpgrade
+from txdav.xml import element
 
 class Upgrade_from_4_to_5(CommonStoreTests):
     """
@@ -35,7 +38,7 @@
     """
 
     @inlineCallbacks
-    def test_calendarTimezoneUpgrade(self):
+    def _calendarTimezoneUpgrade_setup(self):
 
         tz1 = Component.fromString("""BEGIN:VCALENDAR
 VERSION:2.0
@@ -137,19 +140,50 @@
         ).on(txn)
         yield self.commit()
 
-        # Trigger upgrade
-        yield moveCalendarTimezoneProperties(self._sqlCalendarStore)
+        returnValue(user_details)
 
+
+    @inlineCallbacks
+    def _calendarTimezoneUpgrade_check(self, changed_users, unchanged_users, user_details):
+
         # Test results
         for user, calname, tz in user_details:
-            calendar = (yield self.calendarUnderTest(name=calname, home=user))
-            self.assertEqual(calendar.getTimezone(), tz)
-            self.assertTrue(PropertyName.fromElement(caldavxml.CalendarTimeZone) not in calendar.properties())
+            if user in changed_users:
+                home = (yield self.homeUnderTest(name=user))
+                version = (yield home.dataVersion())
+                self.assertEqual(version, 5)
+                calendar = (yield self.calendarUnderTest(name=calname, home=user))
+                self.assertEqual(calendar.getTimezone(), tz)
+                self.assertTrue(PropertyName.fromElement(caldavxml.CalendarTimeZone) not in calendar.properties())
+            else:
+                home = (yield self.homeUnderTest(name=user))
+                version = (yield home.dataVersion())
+                self.assertEqual(version, 4)
+                calendar = (yield self.calendarUnderTest(name=calname, home=user))
+                self.assertEqual(calendar.getTimezone(), None)
+                if tz:
+                    self.assertTrue(PropertyName.fromElement(caldavxml.CalendarTimeZone) in calendar.properties())
+                else:
+                    self.assertTrue(PropertyName.fromElement(caldavxml.CalendarTimeZone) not in calendar.properties())
 
 
     @inlineCallbacks
-    def test_calendarAvailabilityUpgrade(self):
+    def test_calendarTimezoneUpgrade(self):
+        user_details = yield self._calendarTimezoneUpgrade_setup()
+        yield updateCalendarHomes(self._sqlCalendarStore)
+        yield self._calendarTimezoneUpgrade_check(("user01", "user02", "user03",), (), user_details)
 
+
+    @inlineCallbacks
+    def test_partialCalendarTimezoneUpgrade(self):
+        user_details = yield self._calendarTimezoneUpgrade_setup()
+        yield updateCalendarHomes(self._sqlCalendarStore, "user01")
+        yield self._calendarTimezoneUpgrade_check(("user01",), ("user02", "user03",), user_details)
+
+
+    @inlineCallbacks
+    def _calendarAvailabilityUpgrade_setup(self):
+
         av1 = Component.fromString("""BEGIN:VCALENDAR
 VERSION:2.0
 CALSCALE:GREGORIAN
@@ -220,20 +254,68 @@
             self.assertEqual(PropertyName.fromElement(customxml.CalendarAvailability) in calendar.properties(), av is not None)
         yield self.commit()
 
-        # Trigger upgrade
-        yield moveCalendarAvailabilityProperties(self._sqlCalendarStore)
+        returnValue(user_details)
 
+
+    @inlineCallbacks
+    def _calendarAvailabilityUpgrade_check(self, changed_users, unchanged_users, user_details):
+
         # Test results
         for user, av in user_details:
-            home = (yield self.homeUnderTest(name=user))
-            calendar = (yield self.calendarUnderTest(name="inbox", home=user))
-            self.assertEqual(home.getAvailability(), av)
-            self.assertTrue(PropertyName.fromElement(customxml.CalendarAvailability) not in calendar.properties())
+            if user in changed_users:
+                home = (yield self.homeUnderTest(name=user))
+                version = (yield home.dataVersion())
+                self.assertEqual(version, 5)
+                calendar = (yield self.calendarUnderTest(name="inbox", home=user))
+                self.assertEqual(home.getAvailability(), av)
+                self.assertTrue(PropertyName.fromElement(customxml.CalendarAvailability) not in calendar.properties())
+            else:
+                home = (yield self.homeUnderTest(name=user))
+                version = (yield home.dataVersion())
+                self.assertEqual(version, 4)
+                calendar = (yield self.calendarUnderTest(name="inbox", home=user))
+                self.assertEqual(home.getAvailability(), None)
+                if av:
+                    self.assertTrue(PropertyName.fromElement(customxml.CalendarAvailability) in calendar.properties())
+                else:
+                    self.assertTrue(PropertyName.fromElement(customxml.CalendarAvailability) not in calendar.properties())
 
 
     @inlineCallbacks
-    def test_removeOtherPropertiesUpgrade(self):
+    def test_calendarAvailabilityUpgrade(self):
+        user_details = yield self._calendarAvailabilityUpgrade_setup()
+        yield updateCalendarHomes(self._sqlCalendarStore)
+        yield self._calendarAvailabilityUpgrade_check(("user01", "user02", "user03",), (), user_details)
 
+
+    @inlineCallbacks
+    def test_partialCalendarAvailabilityUpgrade(self):
+        user_details = yield self._calendarAvailabilityUpgrade_setup()
+        yield updateCalendarHomes(self._sqlCalendarStore, "user01")
+        yield self._calendarAvailabilityUpgrade_check(("user01",), ("user02", "user03",), user_details)
+
+
+    @inlineCallbacks
+    def test_combinedUpgrade(self):
+        user_details1 = yield self._calendarTimezoneUpgrade_setup()
+        user_details2 = yield self._calendarAvailabilityUpgrade_setup()
+        yield updateCalendarHomes(self._sqlCalendarStore)
+        yield self._calendarTimezoneUpgrade_check(("user01", "user02", "user03",), (), user_details1)
+        yield self._calendarAvailabilityUpgrade_check(("user01", "user02", "user03",), (), user_details2)
+
+
+    @inlineCallbacks
+    def test_partialCombinedUpgrade(self):
+        user_details1 = yield self._calendarTimezoneUpgrade_setup()
+        user_details2 = yield self._calendarAvailabilityUpgrade_setup()
+        yield updateCalendarHomes(self._sqlCalendarStore, "user01")
+        yield self._calendarTimezoneUpgrade_check(("user01",), ("user02", "user03",), user_details1)
+        yield self._calendarAvailabilityUpgrade_check(("user01",), ("user02", "user03",), user_details2)
+
+
+    @inlineCallbacks
+    def _removeOtherPropertiesUpgrade_setup(self):
+
         # Set dead property on calendar
         for user in ("user01", "user02",):
             calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
@@ -243,12 +325,55 @@
         for user in ("user01", "user02",):
             calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
             self.assertTrue(PropertyName.fromElement(element.ResourceID) in calendar.properties())
+
+        yield self.transactionUnderTest().updateCalendarserverValue("CALENDAR-DATAVERSION", "4")
+
         yield self.commit()
 
-        # Trigger upgrade
-        yield removeOtherProperties(self._sqlCalendarStore)
 
+    @inlineCallbacks
+    def _removeOtherPropertiesUpgrade_check(self, full=True):
+
         # Test results
         for user in ("user01", "user02",):
-            calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
-            self.assertTrue(PropertyName.fromElement(element.ResourceID) not in calendar.properties())
+            if full:
+                calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
+                self.assertTrue(PropertyName.fromElement(element.ResourceID) not in calendar.properties())
+                version = yield self.transactionUnderTest().calendarserverValue("CALENDAR-DATAVERSION")
+                self.assertEqual(int(version), 5)
+            else:
+                calendar = (yield self.calendarUnderTest(name="calendar_1", home=user))
+                self.assertTrue(PropertyName.fromElement(element.ResourceID) in calendar.properties())
+                version = yield self.transactionUnderTest().calendarserverValue("CALENDAR-DATAVERSION")
+                self.assertEqual(int(version), 4)
+
+
+    @inlineCallbacks
+    def test_removeOtherPropertiesUpgrade(self):
+        yield self._removeOtherPropertiesUpgrade_setup()
+        yield doUpgrade(self._sqlCalendarStore)
+        yield self._removeOtherPropertiesUpgrade_check()
+
+
+    @inlineCallbacks
+    def test_fullUpgrade(self):
+        self.patch(config, "UpgradeHomePrefix", "")
+        user_details1 = yield self._calendarTimezoneUpgrade_setup()
+        user_details2 = yield self._calendarAvailabilityUpgrade_setup()
+        yield self._removeOtherPropertiesUpgrade_setup()
+        yield doUpgrade(self._sqlCalendarStore)
+        yield self._calendarTimezoneUpgrade_check(("user01", "user02", "user03",), (), user_details1)
+        yield self._calendarAvailabilityUpgrade_check(("user01", "user02", "user03",), (), user_details2)
+        yield self._removeOtherPropertiesUpgrade_check()
+
+
+    @inlineCallbacks
+    def test_partialFullUpgrade(self):
+        self.patch(config, "UpgradeHomePrefix", "user01")
+        user_details1 = yield self._calendarTimezoneUpgrade_setup()
+        user_details2 = yield self._calendarAvailabilityUpgrade_setup()
+        yield self._removeOtherPropertiesUpgrade_setup()
+        yield doUpgrade(self._sqlCalendarStore)
+        yield self._calendarTimezoneUpgrade_check(("user01",), ("user02", "user03",), user_details1)
+        yield self._calendarAvailabilityUpgrade_check(("user01",), ("user02", "user03",), user_details2)
+        yield self._removeOtherPropertiesUpgrade_check(False)

Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/util.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/common/datastore/upgrade/sql/upgrades/util.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -14,7 +14,7 @@
 # limitations under the License.
 ##
 
-from twext.enterprise.dal.syntax import Select, Delete, Update
+from twext.enterprise.dal.syntax import Select, Delete, Update, Count
 from twext.python.log import Logger
 from twisted.internet.defer import inlineCallbacks, returnValue
 from txdav.base.propertystore.base import PropertyName
@@ -44,6 +44,21 @@
 
 
 @inlineCallbacks
+def countProperty(txn, propelement):
+    pname = PropertyName.fromElement(propelement)
+
+    rp = schema.RESOURCE_PROPERTY
+    count = (yield Select(
+        [Count(rp.RESOURCE_ID), ],
+        From=rp,
+        Where=rp.NAME == pname.toString(),
+    ).on(txn))[0][0]
+
+    returnValue(count)
+
+
+
+@inlineCallbacks
 def cleanPropertyStore():
     """
     We have manually manipulated the SQL property store by-passing the underlying implementation's caching
@@ -114,27 +129,43 @@
 
 
 @inlineCallbacks
-def doToEachHomeNotAtVersion(store, homeSchema, version, doIt):
+def doToEachHomeNotAtVersion(store, homeSchema, version, doIt, logStr, filterOwnerUID=None):
     """
     Do something to each home whose version column indicates it is older
-    than the specified version. Do this in batches as there may be a lot of work to do.
+    than the specified version. Do this in batches as there may be a lot of work to do. Also,
+    allow the owner UID to be filtered by prefix to support a parallel mode of operation.
     """
 
+    txn = store.newTransaction("updateDataVersion")
+    where = homeSchema.DATAVERSION < version
+    if filterOwnerUID:
+        where = where.And(homeSchema.OWNER_UID.StartsWith(filterOwnerUID))
+    total = (yield Select(
+        [Count(homeSchema.RESOURCE_ID), ],
+        From=homeSchema,
+        Where=where,
+    ).on(txn))[0][0]
+    yield txn.commit()
+    count = 0
+
     while True:
 
+        logUpgradeStatus(logStr, count, total)
+
         # Get the next home with an old version
         txn = store.newTransaction("updateDataVersion")
         try:
             rows = yield Select(
                 [homeSchema.RESOURCE_ID, homeSchema.OWNER_UID, ],
                 From=homeSchema,
-                Where=homeSchema.DATAVERSION < version,
+                Where=where,
                 OrderBy=homeSchema.OWNER_UID,
                 Limit=1,
             ).on(txn)
 
             if len(rows) == 0:
                 yield txn.commit()
+                logUpgradeStatus("End {}".format(logStr), count, total)
                 returnValue(None)
 
             # Apply to the home
@@ -149,6 +180,26 @@
             yield txn.commit()
         except RuntimeError, e:
             f = Failure()
-            log.error("Failed to upgrade %s to %s: %s" % (homeSchema, version, e))
+            logUpgradeError(
+                logStr,
+                "Failed to upgrade {} to {}: {}".format(homeSchema, version, e)
+            )
             yield txn.abort()
             f.raiseException()
+
+        count += 1
+
+
+
+def logUpgradeStatus(title, count=None, total=None):
+    if total is None:
+        log.info("Database upgrade {title}", title=title)
+    else:
+        divisor = 1000 if total > 1000 else 100
+        if (divmod(count, divisor)[1] == 0) or (count == total):
+            log.info("Database upgrade {title}: {count} of {total}", title=title, count=count, total=total)
+
+
+
+def logUpgradeError(title, details):
+    log.error("Database upgrade {title} failed: {details}", title=title, details=details)

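Two details of the new util.py code worth noting: doToEachHomeNotAtVersion() now
counts the matching homes up front so progress can be reported against a total, and
logUpgradeStatus() throttles that reporting. The throttle, restated as a plain
predicate for illustration (the function name is hypothetical):

    def shouldLogProgress(count, total):
        # Log every 100 homes, every 1000 once the total exceeds 1000,
        # and always log when the last home is reached.
        divisor = 1000 if total > 1000 else 100
        return (count % divisor == 0) or (count == total)
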
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/xml/base.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/xml/base.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/xml/base.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -693,7 +693,7 @@
             return date.strftime("%a, %d %b %Y %H:%M:%S GMT")
 
         if type(date) is int:
-            date = format(datetime.datetime.fromtimestamp(date))
+            date = format(datetime.datetime.utcfromtimestamp(date))
         elif type(date) is str:
             pass
         elif type(date) is unicode:

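The txdav/xml/base.py change matters because the formatted string is labelled
"GMT": fromtimestamp() interprets the epoch value in the server's local timezone,
while utcfromtimestamp() yields true UTC. A quick sketch of the difference (the
PST output is just an example for a US/Pacific host):

    import datetime

    ts = 0
    datetime.datetime.fromtimestamp(ts)     # 1969-12-31 16:00:00 on a PST host
    datetime.datetime.utcfromtimestamp(ts)  # 1970-01-01 00:00:00 everywhere
    # Only the UTC value may be rendered as "Thu, 01 Jan 1970 00:00:00 GMT".
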
Modified: CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/xml/rfc6578.py
===================================================================
--- CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/xml/rfc6578.py	2013-11-01 21:46:19 UTC (rev 11870)
+++ CalendarServer/branches/users/cdaboo/fix-no-ischedule/txdav/xml/rfc6578.py	2013-11-01 22:25:30 UTC (rev 11871)
@@ -7,10 +7,10 @@
 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 # copies of the Software, and to permit persons to whom the Software is
 # furnished to do so, subject to the following conditions:
-# 
+#
 # The above copyright notice and this permission notice shall be included in all
 # copies or substantial portions of the Software.
-# 
+#
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
@@ -51,7 +51,7 @@
     allowed_children = {
         (dav_namespace, "sync-token"): (0, 1), # When used in the REPORT this is required
         (dav_namespace, "sync-level"): (0, 1), # When used in the REPORT this is required
-        (dav_namespace, "prop"      ): (0, 1),
+        (dav_namespace, "prop"): (0, 1),
     }
 
     def __init__(self, *children, **attributes):
@@ -60,6 +60,7 @@
         self.property = None
         self.sync_token = None
         self.sync_level = None
+        self.sync_limit = None
 
         for child in self.children:
             qname = child.qname()
@@ -70,12 +71,20 @@
             elif qname == (dav_namespace, "sync-level"):
                 self.sync_level = str(child)
 
+            elif qname == (dav_namespace, "limit"):
+                if len(child.children) == 1 and child.children[0].qname() == (dav_namespace, "nresults"):
+                    try:
+                        self.sync_limit = int(str(child.children[0]))
+                    except (TypeError, ValueError):
+                        pass
+
             elif qname == (dav_namespace, "prop"):
                 if self.property is not None:
                     raise ValueError("Only one of DAV:prop allowed")
                 self.property = child
 
 
+
 @registerElement
 @registerElementClass
 class SyncToken (WebDAVTextElement):
@@ -87,6 +96,7 @@
     protected = True
 
 
+
 @registerElement
 @registerElementClass
 class SyncLevel (WebDAVTextElement):
@@ -96,5 +106,29 @@
     name = "sync-level"
 
 
+
+@registerElement
+@registerElementClass
+class Limit (WebDAVElement):
+    """
+    Synchronization limit in report.
+    """
+    name = "limit"
+
+    allowed_children = {
+        (dav_namespace, "nresults"): (1, 1), # When used in the REPORT this is required
+    }
+
+
+
+@registerElement
+@registerElementClass
+class NResults (WebDAVTextElement):
+    """
+    Synchronization numerical limit.
+    """
+    name = "nresults"
+
+
 # Extend MultiStatus, to add sync-token
 MultiStatus.allowed_children[(dav_namespace, "sync-token")] = (0, 1)
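
With the Limit and NResults elements registered, a DAV:sync-collection REPORT body
can now cap the number of returned results. A sketch of a round-trip through the
parser (the request body contents are illustrative; this assumes the standard
txdav.xml parsing entry point used elsewhere in this change set):

    from txdav.xml.parser import WebDAVDocument

    body = (
        '<?xml version="1.0" encoding="utf-8" ?>'
        '<D:sync-collection xmlns:D="DAV:">'
        '<D:sync-token>data:,1234</D:sync-token>'
        '<D:sync-level>1</D:sync-level>'
        '<D:limit><D:nresults>100</D:nresults></D:limit>'
        '<D:prop><D:getetag/></D:prop>'
        '</D:sync-collection>'
    )
    doc = WebDAVDocument.fromString(body)
    assert doc.root_element.sync_limit == 100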