[CalendarServer-changes] [14555] CalendarServer/branches/users/sagen/trashcan-5

source_changes at macosforge.org
Tue Mar 10 13:42:34 PDT 2015


Revision: 14555
          http://trac.calendarserver.org//changeset/14555
Author:   cdaboo at apple.com
Date:     2015-03-10 13:42:34 -0700 (Tue, 10 Mar 2015)
Log Message:
-----------
Merge from trunk and resolve conflicts.

Modified Paths:
--------------
    CalendarServer/branches/users/sagen/trashcan-5/bin/_build.sh
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/push/applepush.py
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/push/test/test_applepush.py
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/push/test/test_notifier.py
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tap/util.py
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/calverify.py
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/diagnose.py
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/export.py
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/importer.py
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/principals.py
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/purge.py
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/push.py
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/test/test_principals.py
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/test/test_purge_old_events.py
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/webadmin/work.py
    CalendarServer/branches/users/sagen/trashcan-5/conf/caldavd-apple.plist
    CalendarServer/branches/users/sagen/trashcan-5/contrib/od/setup_directory.py
    CalendarServer/branches/users/sagen/trashcan-5/requirements-dev.txt
    CalendarServer/branches/users/sagen/trashcan-5/requirements-stable.txt
    CalendarServer/branches/users/sagen/trashcan-5/setup.py
    CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/config.py
    CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/database.py
    CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/dateops.py
    CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/directory/calendaruserproxy.py
    CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/directory/test/test_proxyprincipaldb.py
    CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/resource.py
    CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/stdconfig.py
    CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/storebridge.py
    CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/test/test_dateops.py
    CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/test/test_resource.py
    CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/test/test_wrapping.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/base/datastore/subpostgres.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/base/datastore/test/test_subpostgres.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/base/datastore/util.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/index_file.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/query/builder.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/query/test/test_filter.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/inbound.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/outbound.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/test/test_inbound.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/test/test_mailgateway.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/test/test_outbound.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/ischedule/delivery.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/test/test_work.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/work.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql_external.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/common.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_attachments.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_index_file.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_sql.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_sql_sharing.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/util.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/icalendarstore.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/sql.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/sql_external.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/test/test_sql.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/test/test_sql_sharing.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/file.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/attachments.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/conduit.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/directory.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/request.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/resource.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/sharing_invites.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/store_api.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/test/test_conduit.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/test/test_store_api.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/test/util.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_external.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/current-oracle-dialect.sql
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/current.sql
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/old/postgres-dialect/v51.sql
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_tables.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/test_sql.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/test_sql_tables.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/test_trash.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/util.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/test/test_upgrade.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/test/test_upgrade_with_data.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_2_to_3.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/upgrades/test/test_notification_upgrade_from_0_to_1.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/work/test/test_revision_cleanup.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/icommondatastore.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/who/delegates.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/who/groups.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_delegates.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_group_attendees.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_group_sharees.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_groups.py

Added Paths:
-----------
    CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/pod_migration.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql_attachment.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql_directory.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/__init__.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/home_sync.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/sync_metadata.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/__init__.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/augments.xml
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/groupAccounts.xml
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_home_sync.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_migration.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/util.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_apn.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_directory.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_imip.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_notification.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/old/oracle-dialect/v52.sql
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/old/postgres-dialect/v52.sql
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_51_to_52.sql
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_52_to_53.sql
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_51_to_52.sql
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_52_to_53.sql
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_sharing.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_util.py

Removed Paths:
-------------
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/schedule.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_schedule.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/__init__.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/home_sync.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/sync_metadata.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/__init__.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/augments.xml
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/groupAccounts.xml
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_home_sync.py
    CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_migration.py

Property Changed:
----------------
    CalendarServer/branches/users/sagen/trashcan-5/


Property changes on: CalendarServer/branches/users/sagen/trashcan-5
___________________________________________________________________
Modified: svn:mergeinfo
   - /CalDAVTester/trunk:11193-11198
/CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/release/CalendarServer-4.3-dev:10180-10190,10192
/CalendarServer/branches/release/CalendarServer-5.1-dev:11846
/CalendarServer/branches/release/CalendarServer-5.2-dev:11972,12357-12358,12794,12814
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/cross-pod-sharing:12038-12191
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/fix-no-ischedule:11607-11871
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/json:11622-11912
/CalendarServer/branches/users/cdaboo/managed-attachments:9985-10145
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/performance-tweaks:11824-11836
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/reverse-proxy-pods:11875-11900
/CalendarServer/branches/users/cdaboo/scheduling-queue-refresh:11783-12557
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/sharing-in-the-store:11935-12016
/CalendarServer/branches/users/cdaboo/store-scheduling:10876-11129
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/gaya/cleanrevisions:12152-12334
/CalendarServer/branches/users/gaya/groupsharee2:13669-13773
/CalendarServer/branches/users/gaya/sharedgroupfixes:12120-12142
/CalendarServer/branches/users/gaya/sharedgroups-3:11088-11204
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/digest-auth-redux:10624-10635
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/enforce-max-requests:11640-11643
/CalendarServer/branches/users/glyph/hang-fix:11465-11491
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/launchd-wrapper-bis:11413-11436
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/log-cleanups:11691-11731
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api:10048-10073
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/queue-locking-and-timing:10204-10289
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/start-service-start-loop:11060-11065
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/unshare-when-access-revoked:10562-10595
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/warning-cleanups:11347-11357
/CalendarServer/branches/users/glyph/whenNotProposed:11881-11897
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/move2who:12819-12860
/CalendarServer/branches/users/sagen/move2who-2:12861-12898
/CalendarServer/branches/users/sagen/move2who-3:12899-12913
/CalendarServer/branches/users/sagen/move2who-4:12914-13157
/CalendarServer/branches/users/sagen/move2who-5:13158-13163
/CalendarServer/branches/users/sagen/newcua:13309-13327
/CalendarServer/branches/users/sagen/newcua-1:13328-13330
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/recordtypes:13648-13656
/CalendarServer/branches/users/sagen/recordtypes-2:13657
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/sagen/testing:10827-10851,10853-10855
/CalendarServer/branches/users/sagen/trashcan:14185-14269
/CalendarServer/branches/users/sagen/trashcan-2:14270-14324
/CalendarServer/branches/users/sagen/trashcan-3:14325-14450
/CalendarServer/branches/users/sagen/trashcan-4:14451-14471
/CalendarServer/branches/users/wsanchez/psycopg2cffi:14427-14439
/CalendarServer/branches/users/wsanchez/transations:5515-5593
   + /CalDAVTester/trunk:11193-11198
/CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/release/CalendarServer-4.3-dev:10180-10190,10192
/CalendarServer/branches/release/CalendarServer-5.1-dev:11846
/CalendarServer/branches/release/CalendarServer-5.2-dev:11972,12357-12358,12794,12814
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/cross-pod-sharing:12038-12191
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/fix-no-ischedule:11607-11871
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/json:11622-11912
/CalendarServer/branches/users/cdaboo/managed-attachments:9985-10145
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/performance-tweaks:11824-11836
/CalendarServer/branches/users/cdaboo/pod2pod-migration:14338-14520
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/reverse-proxy-pods:11875-11900
/CalendarServer/branches/users/cdaboo/scheduling-queue-refresh:11783-12557
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/sharing-in-the-store:11935-12016
/CalendarServer/branches/users/cdaboo/store-scheduling:10876-11129
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/gaya/cleanrevisions:12152-12334
/CalendarServer/branches/users/gaya/groupsharee2:13669-13773
/CalendarServer/branches/users/gaya/sharedgroupfixes:12120-12142
/CalendarServer/branches/users/gaya/sharedgroups-3:11088-11204
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/digest-auth-redux:10624-10635
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/enforce-max-requests:11640-11643
/CalendarServer/branches/users/glyph/hang-fix:11465-11491
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/launchd-wrapper-bis:11413-11436
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/log-cleanups:11691-11731
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api:10048-10073
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/queue-locking-and-timing:10204-10289
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/start-service-start-loop:11060-11065
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/unshare-when-access-revoked:10562-10595
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/warning-cleanups:11347-11357
/CalendarServer/branches/users/glyph/whenNotProposed:11881-11897
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/move2who:12819-12860
/CalendarServer/branches/users/sagen/move2who-2:12861-12898
/CalendarServer/branches/users/sagen/move2who-3:12899-12913
/CalendarServer/branches/users/sagen/move2who-4:12914-13157
/CalendarServer/branches/users/sagen/move2who-5:13158-13163
/CalendarServer/branches/users/sagen/newcua:13309-13327
/CalendarServer/branches/users/sagen/newcua-1:13328-13330
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/recordtypes:13648-13656
/CalendarServer/branches/users/sagen/recordtypes-2:13657
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/sagen/testing:10827-10851,10853-10855
/CalendarServer/branches/users/sagen/trashcan:14185-14269
/CalendarServer/branches/users/sagen/trashcan-2:14270-14324
/CalendarServer/branches/users/sagen/trashcan-3:14325-14450
/CalendarServer/branches/users/sagen/trashcan-4:14451-14471
/CalendarServer/branches/users/wsanchez/psycopg2cffi:14427-14439
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:14471-14551

Modified: CalendarServer/branches/users/sagen/trashcan-5/bin/_build.sh
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/bin/_build.sh	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/bin/_build.sh	2015-03-10 20:42:34 UTC (rev 14555)
@@ -123,7 +123,8 @@
 
   if find_cmd openssl > /dev/null; then
     if [ -z "${hash}" ]; then hash="md5"; fi;
-    md5 () { "$(find_cmd openssl)" dgst -md5 "$@"; }
+    # remove "(stdin)= " from the front which openssl emits on some platforms
+    md5 () { "$(find_cmd openssl)" dgst -md5 "$@" | sed 's/^.* //'; }
   elif find_cmd md5 > /dev/null; then
     if [ -z "${hash}" ]; then hash="md5"; fi;
     md5 () { "$(find_cmd md5)" "$@"; }

Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/push/applepush.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/push/applepush.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/push/applepush.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -234,10 +234,7 @@
                 self.log.debug(
                     "Sending %d APNS notifications for %s" %
                     (numSubscriptions, pushKey))
-                tokens = []
-                for token, uid in subscriptions:
-                    if token and uid:
-                        tokens.append(token)
+                tokens = [record.token for record in subscriptions if record.token and record.subscriberGUID]
                 if tokens:
                     provider.scheduleNotifications(
                         tokens, pushKey,
@@ -349,11 +346,11 @@
                     (token,))
                 txn = self.factory.store.newTransaction(label="APNProviderProtocol.processError")
                 subscriptions = (yield txn.apnSubscriptionsByToken(token))
-                for key, _ignore_modified, _ignore_uid in subscriptions:
+                for record in subscriptions:
                     self.log.debug(
                         "Removing subscription: %s %s" %
-                        (token, key))
-                    yield txn.removeAPNSubscription(token, key)
+                        (token, record.resourceKey))
+                    yield txn.removeAPNSubscription(token, record.resourceKey)
                 yield txn.commit()
 
 
@@ -746,12 +743,12 @@
         txn = self.factory.store.newTransaction(label="APNFeedbackProtocol.processFeedback")
         subscriptions = (yield txn.apnSubscriptionsByToken(token))
 
-        for key, modified, _ignore_uid in subscriptions:
-            if timestamp > modified:
+        for record in subscriptions:
+            if timestamp > record.modified:
                 self.log.debug(
                     "FeedbackProtocol removing subscription: %s %s" %
-                    (token, key))
-                yield txn.removeAPNSubscription(token, key)
+                    (token, record.resourceKey))
+                yield txn.removeAPNSubscription(token, record.resourceKey)
         yield txn.commit()
 
 

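The applepush.py hunks above replace positional unpacking of (token, key, ...) tuples with attribute access on the records returned by txn.apnSubscriptionsByToken() and related calls. A minimal sketch of the new access pattern, using a namedtuple as a stand-in for the store's record class (the stand-in type and the sample values are illustrative only, not the store's actual types):

    from collections import namedtuple

    # Stand-in for the store's APN subscription record; the real records expose
    # the same attribute names used in the hunks above.
    APNSubscription = namedtuple(
        "APNSubscription", ["token", "resourceKey", "modified", "subscriberGUID"]
    )

    subscriptions = [
        APNSubscription(
            token="2d0d1cd0984e54708459071bd765abd2",
            resourceKey="/CalDAV/calendars.example.com/user02/calendar/",
            modified=3000,
            subscriberGUID="D2256BCC-48E2-42D1-BD89-CBA1E4CCDFFB",
        ),
    ]

    # New style: filter on attributes instead of unpacking (token, uid) tuples.
    tokens = [r.token for r in subscriptions if r.token and r.subscriberGUID]
    stale = [r.resourceKey for r in subscriptions if 3500 > r.modified]
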
Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/push/test/test_applepush.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/push/test/test_applepush.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/push/test/test_applepush.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -88,6 +88,7 @@
         yield txn.addAPNSubscription(token, key2, timestamp2, uid, userAgent, ipAddr)
 
         subscriptions = (yield txn.apnSubscriptionsBySubscriber(uid))
+        subscriptions = [[record.token, record.resourceKey, record.modified, record.userAgent, record.ipAddr] for record in subscriptions]
         self.assertTrue([token, key1, timestamp1, userAgent, ipAddr] in subscriptions)
         self.assertTrue([token, key2, timestamp2, userAgent, ipAddr] in subscriptions)
         self.assertTrue([token2, key1, timestamp1, userAgent, ipAddr] in subscriptions)
@@ -98,9 +99,11 @@
         uid2 = "D8FFB335-9D36-4CE8-A3B9-D1859E38C0DA"
         yield txn.addAPNSubscription(token, key2, timestamp3, uid2, userAgent, ipAddr)
         subscriptions = (yield txn.apnSubscriptionsBySubscriber(uid))
+        subscriptions = [[record.token, record.resourceKey, record.modified, record.userAgent, record.ipAddr] for record in subscriptions]
         self.assertTrue([token, key1, timestamp1, userAgent, ipAddr] in subscriptions)
         self.assertFalse([token, key2, timestamp3, userAgent, ipAddr] in subscriptions)
         subscriptions = (yield txn.apnSubscriptionsBySubscriber(uid2))
+        subscriptions = [[record.token, record.resourceKey, record.modified, record.userAgent, record.ipAddr] for record in subscriptions]
         self.assertTrue([token, key2, timestamp3, userAgent, ipAddr] in subscriptions)
         # Change it back
         yield txn.addAPNSubscription(token, key2, timestamp2, uid, userAgent, ipAddr)
@@ -284,10 +287,10 @@
         txn = self._sqlCalendarStore.newTransaction()
         subscriptions = (yield txn.apnSubscriptionsByToken(token))
         yield txn.commit()
-        self.assertEquals(
-            subscriptions,
-            [["/CalDAV/calendars.example.com/user02/calendar/", 3000, "D2256BCC-48E2-42D1-BD89-CBA1E4CCDFFB"]]
-        )
+        self.assertEquals(len(subscriptions), 1)
+        self.assertEqual(subscriptions[0].resourceKey, "/CalDAV/calendars.example.com/user02/calendar/")
+        self.assertEqual(subscriptions[0].modified, 3000)
+        self.assertEqual(subscriptions[0].subscriberGUID, "D2256BCC-48E2-42D1-BD89-CBA1E4CCDFFB")
 
         # Verify processError removes associated subscriptions and history
         # First find the id corresponding to token2
@@ -326,7 +329,7 @@
         subscriptions = (yield txn.apnSubscriptionsByToken(token2))
         yield txn.commit()
         self.assertEquals(len(subscriptions), 1)
-        self.assertEquals(subscriptions[0][0], key2)
+        self.assertEquals(subscriptions[0].resourceKey, key2)
 
         service.stopService()
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/push/test/test_notifier.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/push/test/test_notifier.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/push/test/test_notifier.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -297,7 +297,7 @@
     @inlineCallbacks
     def test_notificationNotifier(self):
 
-        notifications = yield self.transactionUnderTest().notificationsWithUID("user01")
+        notifications = yield self.transactionUnderTest().notificationsWithUID("user01", create=True)
         yield notifications.notifyChanged(category=ChangeCategory.default)
         self.assertEquals(
             set(self.notifierFactory.history),

Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tap/util.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tap/util.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tap/util.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -150,7 +150,6 @@
         options=config.Postgres.Options,
         uid=uid, gid=gid,
         spawnedDBUser=config.SpawnedDBUser,
-        importFileName=config.DBImportFile,
         pgCtl=config.Postgres.Ctl,
         initDB=config.Postgres.Init,
     )
@@ -161,8 +160,8 @@
     """
     Create a postgres DB-API connector from the given configuration.
     """
-    import pgdb
-    return DBAPIConnector(pgdb, postgresPreflight, config.DSN).connect
+    from txdav.base.datastore.subpostgres import postgres
+    return DBAPIConnector(postgres, postgresPreflight, config.DSN).connect
 
 
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/calverify.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/calverify.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/calverify.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -60,7 +60,7 @@
 from twisted.python import usage
 from twisted.python.usage import Options
 from twistedcaldav.datafilters.peruserdata import PerUserDataFilter
-from twistedcaldav.dateops import pyCalendarTodatetime
+from twistedcaldav.dateops import pyCalendarToSQLTimestamp
 from twistedcaldav.ical import Component, InvalidICalendarDataError, Property, PERUSER_COMPONENT
 from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
 from twistedcaldav.timezones import TimezoneCache
@@ -530,8 +530,8 @@
         ch = schema.CALENDAR_HOME
         tr = schema.TIME_RANGE
         kwds = {
-            "Start" : pyCalendarTodatetime(start),
-            "Max"   : pyCalendarTodatetime(DateTime(1900, 1, 1, 0, 0, 0))
+            "Start" : pyCalendarToSQLTimestamp(start),
+            "Max"   : pyCalendarToSQLTimestamp(DateTime(1900, 1, 1, 0, 0, 0))
         }
         rows = (yield Select(
             [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
@@ -583,8 +583,8 @@
         ch = schema.CALENDAR_HOME
         tr = schema.TIME_RANGE
         kwds = {
-            "Start" : pyCalendarTodatetime(start),
-            "Max"   : pyCalendarTodatetime(DateTime(1900, 1, 1, 0, 0, 0)),
+            "Start" : pyCalendarToSQLTimestamp(start),
+            "Max"   : pyCalendarToSQLTimestamp(DateTime(1900, 1, 1, 0, 0, 0)),
             "UUID" : uuid,
         }
         rows = (yield Select(
@@ -613,8 +613,8 @@
             cb.CALENDAR_RESOURCE_NAME != "inbox")
 
         kwds = {
-            "Start" : pyCalendarTodatetime(start),
-            "Max"   : pyCalendarTodatetime(DateTime(1900, 1, 1, 0, 0, 0)),
+            "Start" : pyCalendarToSQLTimestamp(start),
+            "Max"   : pyCalendarToSQLTimestamp(DateTime(1900, 1, 1, 0, 0, 0)),
             "UUID" : uuid,
         }
         rows = (yield Select(
@@ -2159,7 +2159,7 @@
         tr = schema.TIME_RANGE
         kwds = {
             "uuid": uuid,
-            "Start" : pyCalendarTodatetime(start),
+            "Start" : pyCalendarToSQLTimestamp(start),
         }
         rows = (yield Select(
             [co.RESOURCE_ID, ],

Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/diagnose.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/diagnose.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/diagnose.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -358,6 +358,7 @@
     runSQLQuery("select * from job;")
 
 
+
 def showConfigKeys():
 
     print()

Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/export.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/export.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/export.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -203,7 +203,7 @@
         for this calendar home.
         """
         uid = yield self.getHomeUID(exportService)
-        home = yield txn.calendarHomeWithUID(uid, True)
+        home = yield txn.calendarHomeWithUID(uid, create=True)
         result = []
         if self.collections:
             for collection in self.collections:
@@ -303,6 +303,7 @@
     fileobj.write(comp.getTextWithTimezones(True))
 
 
+
 @inlineCallbacks
 def exportToDirectory(calendars, dirname):
     """

Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/importer.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/importer.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/importer.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -82,6 +82,7 @@
     """
 
 
+
 class ImportOptions(Options):
     """
     Command-line options for 'calendarserver_import'
@@ -131,6 +132,7 @@
             return open(self.inputName, 'r')
 
 
+
 # These could probably live on the collection class:
 
 def setCollectionPropertyValue(collection, element, value):
@@ -140,6 +142,7 @@
     )
 
 
+
 def getCollectionPropertyValue(collection, element):
     collectionProperties = collection.properties()
     name = PropertyName.fromElement(element)
@@ -148,7 +151,6 @@
     else:
         return None
 
-#
 
 
 @inlineCallbacks
@@ -286,6 +288,7 @@
             yield txn.commit()
 
 
+
 @inlineCallbacks
 def storeComponentInHomeAndCalendar(
     store, component, homeUID, collectionResourceName, objectResourceName,
@@ -342,7 +345,6 @@
         TimezoneCache.create()
 
 
-
     @inlineCallbacks
     def doWork(self):
         """
@@ -412,6 +414,7 @@
     except UsageError, e:
         usage(e)
 
+
     def makeService(store):
         from twistedcaldav.config import config
         return ImporterService(store, options, reactor, config)

Copied: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/pod_migration.py (from rev 14551, CalendarServer/trunk/calendarserver/tools/pod_migration.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/pod_migration.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/pod_migration.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,293 @@
+#!/usr/bin/env python
+# -*- test-case-name: calendarserver.tools.test.test_calverify -*-
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+from __future__ import print_function
+
+"""
+This tool manages an overall pod migration. Migration is done in a series of steps,
+with the system admin triggering each step individually by running this tool.
+"""
+
+import os
+import sys
+
+from twisted.internet.defer import inlineCallbacks
+from twisted.python.text import wordWrap
+from twisted.python.usage import Options, UsageError
+
+from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
+from twistedcaldav.timezones import TimezoneCache
+
+from txdav.common.datastore.podding.migration.home_sync import CrossPodHomeSync
+
+from twext.python.log import Logger
+from twext.who.idirectory import RecordType
+
+from calendarserver.tools.cmdline import utilityMain, WorkerService
+
+
+log = Logger()
+
+VERSION = "1"
+
+
+
+def usage(e=None):
+    if e:
+        print(e)
+        print("")
+    try:
+        PodMigrationOptions().opt_help()
+    except SystemExit:
+        pass
+    if e:
+        sys.exit(64)
+    else:
+        sys.exit(0)
+
+
+description = ''.join(
+    wordWrap(
+        """
+        Usage: calendarserver_pod_migration [options] [input specifiers]
+        """,
+        int(os.environ.get('COLUMNS', '80'))
+    )
+)
+description += "\nVersion: %s" % (VERSION,)
+
+
+
+class ConfigError(Exception):
+    pass
+
+
+
+class PodMigrationOptions(Options):
+    """
+    Command-line options for 'calendarserver_pod_migration'
+    """
+
+    synopsis = description
+
+    optFlags = [
+        ['verbose', 'v', "Verbose logging."],
+        ['debug', 'D', "Debug logging."],
+        ['step1', '1', "Run step 1 of the migration (initial sync)"],
+        ['step2', '2', "Run step 2 of the migration (incremental sync)"],
+        ['step3', '3', "Run step 3 of the migration (prepare for final sync)"],
+        ['step4', '4', "Run step 4 of the migration (final incremental sync)"],
+        ['step5', '5', "Run step 5 of the migration (final reconcile sync)"],
+        ['step6', '6', "Run step 6 of the migration (enable new home)"],
+        ['step7', '7', "Run step 7 of the migration (remove old home)"],
+    ]
+
+    optParameters = [
+        ['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
+        ['uid', 'u', "", "Directory record uid of user to migrate [REQUIRED]"],
+    ]
+
+    longdesc = "Only one step option is allowed."
+
+    def __init__(self):
+        super(PodMigrationOptions, self).__init__()
+        self.outputName = '-'
+
+
+    def opt_output(self, filename):
+        """
+        Specify output file path (default: '-', meaning stdout).
+        """
+        self.outputName = filename
+
+    opt_o = opt_output
+
+
+    def openOutput(self):
+        """
+        Open the appropriate output file based on the '--output' option.
+        """
+        if self.outputName == '-':
+            return sys.stdout
+        else:
+            return open(self.outputName, 'wb')
+
+
+    def postOptions(self):
+        runstep = None
+        for step in range(7):
+            if self["step{}".format(step + 1)]:
+                if runstep is None:
+                    runstep = step
+                    self["runstep"] = step + 1
+                else:
+                    raise UsageError("Only one step option allowed")
+        else:
+            if runstep is None:
+                raise UsageError("One step option must be present")
+        if not self["uid"]:
+            raise UsageError("A uid is required")
+
+
+
+class PodMigrationService(WorkerService, object):
+    """
+    Service which runs, does its stuff, then stops the reactor.
+    """
+
+    def __init__(self, store, options, output, reactor, config):
+        super(PodMigrationService, self).__init__(store)
+        self.options = options
+        self.output = output
+        self.reactor = reactor
+        self.config = config
+        TimezoneCache.create()
+
+
+    @inlineCallbacks
+    def doWork(self):
+        """
+        Do the work, stopping the reactor when done.
+        """
+        self.output.write("\n---- Pod Migration version: %s ----\n" % (VERSION,))
+
+        # Map short name to uid
+        record = yield self.store.directoryService().recordWithUID(self.options["uid"])
+        if record is None:
+            record = yield self.store.directoryService().recordWithShortName(RecordType.user, self.options["uid"])
+            if record is not None:
+                self.options["uid"] = record.uid
+
+        try:
+            yield getattr(self, "step{}".format(self.options["runstep"]))()
+            self.output.close()
+        except ConfigError:
+            pass
+        except:
+            log.failure("doWork()")
+
+
+    @inlineCallbacks
+    def step1(self):
+        syncer = CrossPodHomeSync(
+            self.store,
+            self.options["uid"],
+            uselog=self.output if self.options["verbose"] else None
+        )
+        syncer.accounting("Pod Migration Step 1\n")
+        yield syncer.sync()
+
+
+    @inlineCallbacks
+    def step2(self):
+        syncer = CrossPodHomeSync(
+            self.store,
+            self.options["uid"],
+            uselog=self.output if self.options["verbose"] else None
+        )
+        syncer.accounting("Pod Migration Step 2\n")
+        yield syncer.sync()
+
+
+    @inlineCallbacks
+    def step3(self):
+        syncer = CrossPodHomeSync(
+            self.store,
+            self.options["uid"],
+            uselog=self.output if self.options["verbose"] else None
+        )
+        syncer.accounting("Pod Migration Step 3\n")
+        yield syncer.disableRemoteHome()
+
+
+    @inlineCallbacks
+    def step4(self):
+        syncer = CrossPodHomeSync(
+            self.store,
+            self.options["uid"],
+            final=True,
+            uselog=self.output if self.options["verbose"] else None
+        )
+        syncer.accounting("Pod Migration Step 4\n")
+        yield syncer.sync()
+
+
+    @inlineCallbacks
+    def step5(self):
+        syncer = CrossPodHomeSync(
+            self.store,
+            self.options["uid"],
+            final=True,
+            uselog=self.output if self.options["verbose"] else None
+        )
+        syncer.accounting("Pod Migration Step 5\n")
+        yield syncer.finalSync()
+
+
+    @inlineCallbacks
+    def step6(self):
+        syncer = CrossPodHomeSync(
+            self.store,
+            self.options["uid"],
+            uselog=self.output if self.options["verbose"] else None
+        )
+        syncer.accounting("Pod Migration Step 6\n")
+        yield syncer.enableLocalHome()
+
+
+    @inlineCallbacks
+    def step7(self):
+        syncer = CrossPodHomeSync(
+            self.store,
+            self.options["uid"],
+            final=True,
+            uselog=self.output if self.options["verbose"] else None
+        )
+        syncer.accounting("Pod Migration Step 7\n")
+        yield syncer.removeRemoteHome()
+
+
+
+def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
+    """
+    Do the export.
+    """
+    if reactor is None:
+        from twisted.internet import reactor
+    options = PodMigrationOptions()
+    try:
+        options.parseOptions(argv[1:])
+    except UsageError as e:
+        stderr.write("Invalid options specified\n")
+        options.opt_help()
+
+    try:
+        output = options.openOutput()
+    except IOError, e:
+        stderr.write("Unable to open output file for writing: %s\n" % (e))
+        sys.exit(1)
+
+
+    def makeService(store):
+        from twistedcaldav.config import config
+        config.TransactionTimeoutSeconds = 0
+        return PodMigrationService(store, options, output, reactor, config)
+
+    utilityMain(options['config'], makeService, reactor, verbose=options["debug"])
+
+if __name__ == '__main__':
+    main()
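
The new pod_migration tool above is driven one step at a time: each run must name exactly one of --step1 through --step7 plus the --uid of the account being migrated (a directory record UID, or a user short name which the tool resolves to a UID), for example calendarserver_pod_migration --step1 --uid <record-uid>. Steps 1 and 2 perform the initial and incremental syncs, step 3 disables the remote (old) home to prepare for the final pass, steps 4 and 5 run the final incremental and reconcile syncs, and steps 6 and 7 enable the new local home and remove the old one. The -f/--config, -o/--output and -v/--verbose options select the caldavd.plist, the output destination and verbose sync accounting respectively.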

Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/principals.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/principals.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/principals.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -775,15 +775,11 @@
                 groupUIDs.append(record.uid)
 
     for groupUID in groupUIDs:
-        (
-            groupID, name, _ignore_membershipHash, modified, _ignore_extant
-        ) = yield txn.groupByUID(
-            groupUID
-        )
-        print("Group: \"{name}\" ({uid})".format(name=name, uid=groupUID))
+        group = yield txn.groupByUID(groupUID)
+        print("Group: \"{name}\" ({uid})".format(name=group.name, uid=group.groupUID))
 
         for txt, readWrite in (("read-only", False), ("read-write", True)):
-            delegatorUIDs = yield txn.delegatorsToGroup(groupID, readWrite)
+            delegatorUIDs = yield txn.delegatorsToGroup(group.groupID, readWrite)
             for delegatorUID in delegatorUIDs:
                 delegator = yield directory.recordWithUID(delegatorUID)
                 print(
@@ -793,12 +789,12 @@
                 )
 
         print("Group members:")
-        memberUIDs = yield txn.groupMemberUIDs(groupID)
+        memberUIDs = yield txn.groupMemberUIDs(group.groupID)
         for memberUID in memberUIDs:
             record = yield directory.recordWithUID(memberUID)
             print(prettyRecord(record))
 
-        print("Last cached: {} GMT".format(modified))
+        print("Last cached: {} GMT".format(group.modified))
         print()
 
     yield txn.commit()

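As with the APN changes, txn.groupByUID() now hands back a single group record rather than a tuple to unpack: group.groupUID is the directory UID that was looked up, group.groupID is the store's internal row id used for the membership and delegation queries, and group.name and group.modified replace the previously positional values.
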
Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/purge.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/purge.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/purge.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -992,7 +992,7 @@
 
         if not self.dryrun:
             yield storeCalHome.removeUnacceptedShares()
-            notificationHome = yield txn.notificationsWithUID(storeCalHome.uid(), create=False)
+            notificationHome = yield txn.notificationsWithUID(storeCalHome.uid())
             if notificationHome is not None:
                 yield notificationHome.remove()
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/push.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/push.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/push.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -68,9 +68,9 @@
             (yield txn.commit())
             if subscriptions:
                 byKey = {}
-                for token, key, timestamp, userAgent, ipAddr in subscriptions:
-                    byKey.setdefault(key, []).append((token, timestamp, userAgent, ipAddr))
-                for key, tokens in byKey.iteritems():
+                for apnrecord in subscriptions:
+                    byKey.setdefault(apnrecord.resourceKey, []).append(apnrecord)
+                for key, apnsrecords in byKey.iteritems():
                     print
                     protocol, _ignore_host, path = key.strip("/").split("/", 2)
                     resource = {
@@ -89,13 +89,13 @@
                     else:
                         print("...is subscribed to %s's %s home" % (user, resource),)
                         # print("   (key: %s)\n" % (key,))
-                    print("with %d device(s):" % (len(tokens),))
-                    for token, timestamp, userAgent, ipAddr in tokens:
+                    print("with %d device(s):" % (len(apnsrecords),))
+                    for apnrecords in apnsrecords:
                         print(" %s\n   '%s' from %s\n   %s" % (
-                            token, userAgent, ipAddr,
+                            apnrecords.token, apnrecords.userAgent, apnrecords.ipAddr,
                             time.strftime(
                                 "on %a, %d %b %Y at %H:%M:%S %z(%Z)",
-                                time.localtime(timestamp)
+                                time.localtime(apnrecords.modified)
                             )
                         ))
             else:

Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/test/test_principals.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/test/test_principals.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/test/test_principals.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -155,6 +155,7 @@
         self.assertTrue("group2" in results)
         self.assertTrue("group3" in results)
 
+
     @inlineCallbacks
     def test_addRemove(self):
         results = yield self.runCommand(

Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/test/test_purge_old_events.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/test/test_purge_old_events.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/tools/test/test_purge_old_events.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -25,6 +25,7 @@
 )
 from pycalendar.datetime import DateTime
 from twext.enterprise.dal.syntax import Update, Delete
+from twext.enterprise.util import parseSQLTimestamp
 from twisted.internet import reactor
 from twisted.internet.defer import inlineCallbacks, returnValue, Deferred
 from twistedcaldav.config import config
@@ -458,19 +459,19 @@
         self.assertEquals(
             sorted(results),
             sorted([
-                ['home1', 'calendar1', 'old.ics', '1901-01-01 01:00:00'],
-                ['home1', 'calendar1', 'oldattachment1.ics', '1901-01-01 01:00:00'],
-                ['home1', 'calendar1', 'oldattachment2.ics', '1901-01-01 01:00:00'],
-                ['home1', 'calendar1', 'oldmattachment1.ics', '1901-01-01 01:00:00'],
-                ['home1', 'calendar1', 'oldmattachment2.ics', '1901-01-01 01:00:00'],
-                ['home2', 'calendar3', 'repeating_awhile.ics', '1901-01-01 01:00:00'],
-                ['home2', 'calendar2', 'recent.ics', '%s-03-04 22:15:00' % (now,)],
-                ['home2', 'calendar2', 'oldattachment1.ics', '1901-01-01 01:00:00'],
-                ['home2', 'calendar2', 'oldattachment3.ics', '1901-01-01 01:00:00'],
-                ['home2', 'calendar2', 'oldattachment4.ics', '1901-01-01 01:00:00'],
-                ['home2', 'calendar2', 'oldmattachment1.ics', '1901-01-01 01:00:00'],
-                ['home2', 'calendar2', 'oldmattachment3.ics', '1901-01-01 01:00:00'],
-                ['home2', 'calendar2', 'oldmattachment4.ics', '1901-01-01 01:00:00'],
+                ['home1', 'calendar1', 'old.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
+                ['home1', 'calendar1', 'oldattachment1.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
+                ['home1', 'calendar1', 'oldattachment2.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
+                ['home1', 'calendar1', 'oldmattachment1.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
+                ['home1', 'calendar1', 'oldmattachment2.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
+                ['home2', 'calendar3', 'repeating_awhile.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
+                ['home2', 'calendar2', 'recent.ics', parseSQLTimestamp('%s-03-04 22:15:00' % (now,))],
+                ['home2', 'calendar2', 'oldattachment1.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
+                ['home2', 'calendar2', 'oldattachment3.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
+                ['home2', 'calendar2', 'oldattachment4.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
+                ['home2', 'calendar2', 'oldmattachment1.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
+                ['home2', 'calendar2', 'oldmattachment3.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
+                ['home2', 'calendar2', 'oldmattachment4.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
             ])
         )
 
@@ -497,7 +498,7 @@
         count = (yield txn.removeOldEvents(cutoff))
         self.assertEquals(count, 12)
         results = (yield txn.eventsOlderThan(cutoff))
-        self.assertEquals(results, [])
+        self.assertEquals(list(results), [])
 
         # Remove oldest events (none left)
         count = (yield txn.removeOldEvents(cutoff))
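
The expected values above change from timestamp strings to the objects returned by
parseSQLTimestamp, matching what eventsOlderThan now yields.  Purely for illustration,
a stand-alone stdlib stand-in with the same observable behaviour (the real helper
lives in twext.enterprise.util and may handle more formats):

    from datetime import datetime

    def parse_sql_timestamp(ts):
        # Assumes the canonical "%Y-%m-%d %H:%M:%S" form used in the
        # expected values above.
        return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

    assert parse_sql_timestamp("1901-01-01 01:00:00") == datetime(1901, 1, 1, 1, 0, 0)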

Modified: CalendarServer/branches/users/sagen/trashcan-5/calendarserver/webadmin/work.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/calendarserver/webadmin/work.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/calendarserver/webadmin/work.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -157,13 +157,13 @@
                     if workType == PushNotificationWork:
                         attrs += ("pushID", "priority")
                     elif workType == ScheduleOrganizerWork:
-                        attrs += ("icalendarUid", "attendeeCount")
+                        attrs += ("icalendarUID", "attendeeCount")
                     elif workType == ScheduleRefreshWork:
-                        attrs += ("icalendarUid", "attendeeCount")
+                        attrs += ("icalendarUID", "attendeeCount")
                     elif workType == ScheduleReplyWork:
-                        attrs += ("icalendarUid",)
+                        attrs += ("icalendarUID",)
                     elif workType == ScheduleAutoReplyWork:
-                        attrs += ("icalendarUid",)
+                        attrs += ("icalendarUID",)
                     elif workType == GroupCacherPollingWork:
                         attrs += ()
                     elif workType == IMIPPollingWork:
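
The attribute names listed above are presumably read off the work items by name, so
the casing has to match the actual attribute exactly, hence the change from
"icalendarUid" to "icalendarUID".  A stand-alone sketch of that kind of getattr-based
collection (the class, values and "jobID" attribute are illustrative assumptions, not
taken from work.py):

    class ScheduleRefreshWork(object):
        icalendarUID = "example-uid"
        attendeeCount = 3

    attrs = ("jobID",) + ("icalendarUID", "attendeeCount")
    work = ScheduleRefreshWork()
    # A misspelled name such as "icalendarUid" would silently yield None here.
    details = dict((name, getattr(work, name, None)) for name in attrs)
    assert details["icalendarUID"] == "example-uid"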

Modified: CalendarServer/branches/users/sagen/trashcan-5/conf/caldavd-apple.plist
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/conf/caldavd-apple.plist	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/conf/caldavd-apple.plist	2015-03-10 20:42:34 UTC (rev 14555)
@@ -99,8 +99,6 @@
     <string></string>
     <key>DSN</key>
     <string></string>
-    <key>DBImportFile</key>
-    <string>/Library/Server/Calendar and Contacts/DataDump.sql</string>
     <key>Postgres</key>
     <dict>
         <key>Ctl</key>
@@ -331,7 +329,7 @@
 
     <!-- Log levels -->
     <key>DefaultLogLevel</key>
-    <string>warn</string> <!-- debug, info, warn, error -->
+    <string>info</string> <!-- debug, info, warn, error -->
 
     <!-- Server process ID file -->
     <key>PIDFile</key>

Modified: CalendarServer/branches/users/sagen/trashcan-5/contrib/od/setup_directory.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/contrib/od/setup_directory.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/contrib/od/setup_directory.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -283,6 +283,8 @@
 
     return records[0]
 
+
+
 def createRecord(node, recordType, recordName, attrs):
     record, error = node.createRecordWithRecordType_name_attributes_error_(
         recordType,
@@ -294,6 +296,8 @@
         raise ODError(error)
     return record
 
+
+
 def main():
 
     try:
@@ -301,7 +305,7 @@
     except GetoptError, e:
         usage(e)
 
-    for opt, arg in optargs:
+    for opt, _ignore_arg in optargs:
         if opt in ("-h", "--help"):
             usage()
 
@@ -416,6 +420,8 @@
 
             print("")
 
+
+
 class ODError(Exception):
     def __init__(self, error):
         self.message = (str(error), error.code())

Modified: CalendarServer/branches/users/sagen/trashcan-5/requirements-dev.txt
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/requirements-dev.txt	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/requirements-dev.txt	2015-03-10 20:42:34 UTC (rev 14555)
@@ -8,4 +8,4 @@
 q
 tl.eggdeps
 --editable svn+http://svn.calendarserver.org/repository/calendarserver/CalDAVClientLibrary/trunk@13420#egg=CalDAVClientLibrary
---editable svn+http://svn.calendarserver.org/repository/calendarserver/CalDAVTester/trunk@14461#egg=CalDAVTester
+--editable svn+http://svn.calendarserver.org/repository/calendarserver/CalDAVTester/trunk@14535#egg=CalDAVTester

Modified: CalendarServer/branches/users/sagen/trashcan-5/requirements-stable.txt
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/requirements-stable.txt	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/requirements-stable.txt	2015-03-10 20:42:34 UTC (rev 14555)
@@ -36,7 +36,7 @@
             #pyOpenSSL
         pycrypto==2.6.1
 
-    --editable svn+http://svn.calendarserver.org/repository/calendarserver/twext/trunk@14404#egg=twextpy
+    --editable svn+http://svn.calendarserver.org/repository/calendarserver/twext/trunk@14531#egg=twextpy
         cffi==0.8.6
             pycparser==2.10
         #twisted

Modified: CalendarServer/branches/users/sagen/trashcan-5/setup.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/setup.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/setup.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -53,7 +53,7 @@
     Compute the version number.
     """
 
-    base_version = "6.1"
+    base_version = "7.0"
 
     branches = tuple(
         branch.format(
@@ -225,6 +225,9 @@
 
     "verify_data":
     ("calendarserver.tools.calverify", "main"),
+
+    "pod_migration":
+    ("calendarserver.tools.pod_migration", "main"),
 }
 
 for tool, (module, function) in script_entry_points.iteritems():
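
The new "pod_migration" entry is picked up by the loop above.  A minimal sketch of how
such a (module, function) mapping can be turned into setuptools console_scripts
entries, assuming a "calendarserver_" prefix for the generated script names (the
prefix is an assumption, not shown in this hunk):

    script_entry_points = {
        "pod_migration": ("calendarserver.tools.pod_migration", "main"),
    }

    entry_points = {"console_scripts": []}
    for tool, (module, function) in script_entry_points.iteritems():
        # e.g. "calendarserver_pod_migration = calendarserver.tools.pod_migration:main"
        entry_points["console_scripts"].append(
            "calendarserver_%s = %s:%s" % (tool, module, function)
        )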

Modified: CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/config.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/config.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/config.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -303,7 +303,6 @@
         self._cachedSyncToken = None
 
 
-
     def getKeyPath(self, keyPath):
         """
         Allows the getting of arbitrary nested dictionary keys via a single
@@ -382,6 +381,7 @@
             return dataToken
 
 
+
 def mergeData(oldData, newData):
     """
     Merge two ConfigDict objects; oldData will be updated with all the keys
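
The real mergeData is defined in twistedcaldav/config.py; purely as an illustration of
the recursive update its docstring describes, a stand-alone sketch (the merge
semantics shown here are an assumption, not a copy of the implementation):

    def merge_data(oldData, newData):
        # Update oldData in place, descending into nested dicts.
        for key, value in newData.items():
            if isinstance(value, dict) and isinstance(oldData.get(key), dict):
                merge_data(oldData[key], value)
            else:
                oldData[key] = value

    base = {"DefaultLogLevel": "warn", "Postgres": {"Ctl": "pg_ctl"}}
    merge_data(base, {"DefaultLogLevel": "info", "Postgres": {"Options": []}})
    assert base == {"DefaultLogLevel": "info",
                    "Postgres": {"Ctl": "pg_ctl", "Options": []}}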

Modified: CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/database.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/database.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/database.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -14,11 +14,19 @@
 # limitations under the License.
 ##
 
+"""
+Generic ADAPI database access object.
+"""
+
+__all__ = [
+    "AbstractADBAPIDatabase",
+]
+
 import thread
 
 try:
-    import pgdb as postgres
-except:
+    from txdav.base.datastore.subpostgres import postgres
+except ImportError:
     postgres = None
 
 from twisted.enterprise.adbapi import ConnectionPool
@@ -29,15 +37,9 @@
 
 from twistedcaldav.config import ConfigurationError
 
-"""
-Generic ADAPI database access object.
-"""
+log = Logger()
 
-__all__ = [
-    "AbstractADBAPIDatabase",
-]
 
-log = Logger()
 
 class ConnectionClosingThreadPool(ThreadPool):
     """

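The Postgres DB-API module is now obtained via txdav.base.datastore.subpostgres and
the bare "except:" is narrowed to ImportError, so a missing driver degrades cleanly
instead of masking unrelated errors.  A stand-alone sketch of that optional-import
pattern, with sqlite3 standing in for the real driver:

    try:
        import sqlite3 as postgres  # any DB-API module stands in for pgdb/pg8000 here
    except ImportError:
        postgres = None

    if postgres is None:
        print("no database driver available; database-backed features disabled")
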
Modified: CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/dateops.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/dateops.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/dateops.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -32,7 +32,7 @@
 from pycalendar.timezone import Timezone
 from pycalendar.period import Period
 
-import datetime
+from datetime import date, datetime
 import dateutil.tz
 
 import calendar
@@ -269,19 +269,19 @@
 
 
 
-def pyCalendarTodatetime(pydt):
+def pyCalendarToSQLTimestamp(pydt):
 
     if pydt.isDateOnly():
-        return datetime.date(year=pydt.getYear(), month=pydt.getMonth(), day=pydt.getDay())
+        return date(year=pydt.getYear(), month=pydt.getMonth(), day=pydt.getDay())
     else:
-        return datetime.datetime(
+        return datetime(
             year=pydt.getYear(),
             month=pydt.getMonth(),
             day=pydt.getDay(),
             hour=pydt.getHours(),
             minute=pydt.getMinutes(),
             second=pydt.getSeconds(),
-            tzinfo=dateutil.tz.tzutc()
+            tzinfo=None
         )
 
 
@@ -295,15 +295,25 @@
     @return: L{DateTime} result
     """
 
-    # Format is "%Y-%m-%d %H:%M:%S"
-    return DateTime(
-        year=int(ts[0:4]),
-        month=int(ts[5:7]),
-        day=int(ts[8:10]),
-        hours=int(ts[11:13]),
-        minutes=int(ts[14:16]),
-        seconds=int(ts[17:19])
-    )
+    if isinstance(ts, datetime):
+        return DateTime(
+            year=ts.year,
+            month=ts.month,
+            day=ts.day,
+            hours=ts.hour,
+            minutes=ts.minute,
+            seconds=ts.second
+        )
+    else:
+        # Format is "%Y-%m-%d %H:%M:%S"
+        return DateTime(
+            year=int(ts[0:4]),
+            month=int(ts[5:7]),
+            day=int(ts[8:10]),
+            hours=int(ts[11:13]),
+            minutes=int(ts[14:16]),
+            seconds=int(ts[17:19])
+        )
 
 
 
@@ -316,18 +326,25 @@
     @return: L{DateTime} result
     """
 
-    # Format is "%Y-%m-%d", though Oracle may add zero time which we ignore
-    return DateTime(
-        year=int(ts[0:4]),
-        month=int(ts[5:7]),
-        day=int(ts[8:10])
-    )
+    if isinstance(ts, date):
+        return DateTime(
+            year=ts.year,
+            month=ts.month,
+            day=ts.day,
+        )
+    else:
+        # Format is "%Y-%m-%d", though Oracle may add zero time which we ignore
+        return DateTime(
+            year=int(ts[0:4]),
+            month=int(ts[5:7]),
+            day=int(ts[8:10])
+        )
 
 
 
 def datetimeMktime(dt):
 
-    assert isinstance(dt, datetime.date)
+    assert isinstance(dt, date)
 
     if dt.tzinfo is None:
         dt.replace(tzinfo=dateutil.tz.tzutc())
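
With these changes pyCalendarToSQLTimestamp produces naive datetime/date values, and
parseSQLTimestampToPyCalendar accepts either a datetime or the legacy
"%Y-%m-%d %H:%M:%S" string.  A stand-alone sketch (stdlib only, illustrative names) of
that string-or-datetime normalisation:

    from datetime import datetime

    def timestamp_components(ts):
        # Accept either a datetime (what the SQL layer now returns) or the
        # legacy string form, as the updated parser above does.
        if isinstance(ts, datetime):
            return (ts.year, ts.month, ts.day, ts.hour, ts.minute, ts.second)
        return (
            int(ts[0:4]), int(ts[5:7]), int(ts[8:10]),
            int(ts[11:13]), int(ts[14:16]), int(ts[17:19]),
        )

    assert (timestamp_components("2012-04-04 12:34:56") ==
            timestamp_components(datetime(2012, 4, 4, 12, 34, 56)))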

Modified: CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/directory/calendaruserproxy.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/directory/calendaruserproxy.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/directory/calendaruserproxy.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -873,9 +873,13 @@
     """
 
     def __init__(self, host, database, user=None, password=None, dbtype=None):
+        from txdav.base.datastore.subpostgres import postgres
 
         ADBAPIPostgreSQLMixin.__init__(self,)
-        ProxyDB.__init__(self, "Proxies", "pgdb", (), host=host, database=database, user=user, password=password,)
+        ProxyDB.__init__(
+            self, "Proxies", postgres.__name__, (),
+            host=host, database=database, user=user, password=password,
+        )
         if dbtype:
             ProxyDB.schema_type = dbtype
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/directory/test/test_proxyprincipaldb.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/directory/test/test_proxyprincipaldb.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/directory/test/test_proxyprincipaldb.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -598,7 +598,7 @@
 
 
 try:
-    import pgdb as postgres
+    from txdav.base.datastore.subpostgres import postgres
 except ImportError:
     ProxyPrincipalDBPostgreSQL.skip = True
 else:

Modified: CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/resource.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/resource.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/resource.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -2139,7 +2139,7 @@
     @inlineCallbacks
     def createNotificationsCollection(self):
         txn = self._associatedTransaction
-        notifications = yield txn.notificationsWithUID(self._newStoreHome.uid())
+        notifications = yield txn.notificationsWithUID(self._newStoreHome.uid(), create=True)
 
         from twistedcaldav.storebridge import StoreNotificationCollectionResource
         similar = StoreNotificationCollectionResource(

Modified: CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/stdconfig.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/stdconfig.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/stdconfig.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -202,8 +202,6 @@
 
     "SpawnedDBUser": "caldav", # The username to use when DBType is empty
 
-    "DBImportFile": "", # File path to SQL file to import at startup (includes schema)
-
     "DSN": "", # Data Source Name.  Used to connect to an external
                # database if DBType is non-empty.  Format varies
                # depending on database type.
@@ -407,6 +405,7 @@
         "Implicit Errors": False,
         "AutoScheduling": False,
         "iSchedule": False,
+        "migration": False,
     },
     "AccountingPrincipals": [],
     "AccountingLogRoot"   : "accounting",
@@ -581,9 +580,9 @@
     "EnableTrashCollection": False,  # Enable Trash Collection
 
     "ParallelUpgrades": False, # Perform upgrades - currently only the
-                                # database -> filesystem migration - but in
-                                # the future, hopefully all relevant
-                                # upgrades - in parallel in subprocesses.
+                               # database -> filesystem migration - but in
+                               # the future, hopefully all relevant
+                               # upgrades - in parallel in subprocesses.
 
     "MergeUpgrades": False, # During the upgrade phase of startup, rather than
                             # skipping homes found both on the filesystem and in

Modified: CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/storebridge.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/storebridge.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/storebridge.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -73,7 +73,8 @@
     InvalidObjectResourceError, ObjectResourceNameNotAllowedError,
     ObjectResourceNameAlreadyExistsError, UIDExistsError,
     UIDExistsElsewhereError, InvalidUIDError, InvalidResourceMove,
-    InvalidComponentForStoreError, AlreadyInTrashError
+    InvalidComponentForStoreError, AlreadyInTrashError,
+    HomeChildNameAlreadyExistsError
 )
 from txdav.idav import PropertyChangeNotAllowedError
 from txdav.who.wiki import RecordType as WikiRecordType
@@ -435,7 +436,12 @@
         """
         Override C{createCollection} to actually do the work.
         """
-        self._newStoreObject = (yield self._newStoreParentHome.createChildWithName(self._name))
+        try:
+            self._newStoreObject = (yield self._newStoreParentHome.createChildWithName(self._name))
+        except HomeChildNameAlreadyExistsError:
+            # We already check for an existing child prior to this call, so the only
+            # time this fails is if there is an unaccepted share with the same name.
+            raise HTTPError(StatusResponse(responsecode.FORBIDDEN, "Unaccepted share exists"))
 
         # Re-initialize to get stuff setup again now we have a "real" object
         self._initializeWithHomeChild(self._newStoreObject, self._parentResource)
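
createCollection now translates the store-level HomeChildNameAlreadyExistsError into a
FORBIDDEN response.  A stand-alone sketch of that translate-and-raise pattern, with
placeholder exception classes rather than the real txdav/txweb2 ones:

    class HomeChildNameAlreadyExistsError(Exception):
        """Placeholder for the store-level collision error."""

    class HTTPError(Exception):
        """Placeholder for the protocol-level error."""
        def __init__(self, code, message):
            Exception.__init__(self, message)
            self.code = code

    def create_child(name, unaccepted_shares=("shared-calendar",)):
        if name in unaccepted_shares:
            raise HomeChildNameAlreadyExistsError(name)
        return name

    def create_collection(name):
        try:
            return create_child(name)
        except HomeChildNameAlreadyExistsError:
            # The caller has already ruled out a visible child with this name,
            # so the collision can only come from an unaccepted share.
            raise HTTPError(403, "Unaccepted share exists")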

Modified: CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/test/test_dateops.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/test/test_dateops.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/test/test_dateops.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -14,18 +14,18 @@
 # limitations under the License.
 ##
 
-import twistedcaldav.test.util
-from twisted.trial.unittest import SkipTest
+from datetime import datetime, date
+
 from pycalendar.datetime import DateTime
+from pycalendar.timezone import Timezone
 
+from twisted.trial.unittest import SkipTest
+
 from twistedcaldav.dateops import parseSQLTimestampToPyCalendar, \
-    parseSQLDateToPyCalendar, pyCalendarTodatetime, \
+    parseSQLDateToPyCalendar, pyCalendarToSQLTimestamp, \
     normalizeForExpand, normalizeForIndex, normalizeToUTC, timeRangesOverlap
-
-import datetime
-import dateutil
-from pycalendar.timezone import Timezone
 from twistedcaldav.timezones import TimezoneCache
+import twistedcaldav.test.util
 
 class Dateops(twistedcaldav.test.util.TestCase):
     """
@@ -249,17 +249,17 @@
         raise SkipTest("test unimplemented")
 
 
-    def test_pyCalendarTodatetime(self):
+    def test_pyCalendarToSQLTimestamp(self):
         """
-        dateops.pyCalendarTodatetime
+        dateops.pyCalendarToSQLTimestamp
         """
         tests = (
-            (DateTime(2012, 4, 4, 12, 34, 56), datetime.datetime(2012, 4, 4, 12, 34, 56, tzinfo=dateutil.tz.tzutc())),
-            (DateTime(2012, 12, 31), datetime.date(2012, 12, 31)),
+            (DateTime(2012, 4, 4, 12, 34, 56), datetime(2012, 4, 4, 12, 34, 56, tzinfo=None)),
+            (DateTime(2012, 12, 31), date(2012, 12, 31)),
         )
 
         for pycal, result in tests:
-            self.assertEqual(pyCalendarTodatetime(pycal), result)
+            self.assertEqual(pyCalendarToSQLTimestamp(pycal), result)
 
 
     def test_parseSQLTimestampToPyCalendar(self):
@@ -269,6 +269,8 @@
         tests = (
             ("2012-04-04 12:34:56", DateTime(2012, 4, 4, 12, 34, 56)),
             ("2012-12-31 01:01:01", DateTime(2012, 12, 31, 1, 1, 1)),
+            (datetime(2012, 4, 4, 12, 34, 56), DateTime(2012, 4, 4, 12, 34, 56)),
+            (datetime(2012, 12, 31, 1, 1, 1), DateTime(2012, 12, 31, 1, 1, 1)),
         )
 
         for sqlStr, result in tests:
@@ -283,6 +285,8 @@
         tests = (
             ("2012-04-04", DateTime(2012, 4, 4)),
             ("2012-12-31 00:00:00", DateTime(2012, 12, 31)),
+            (date(2012, 4, 4), DateTime(2012, 4, 4)),
+            (date(2012, 12, 31), DateTime(2012, 12, 31)),
         )
 
         for sqlStr, result in tests:

Modified: CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/test/test_resource.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/test/test_resource.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/test/test_resource.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -118,6 +118,7 @@
         self.assertEquals(resource._mergeSyncTokens("1_4", "1_3"), "1_4")
 
 
+
 class OwnershipTests(TestCase):
     """
     L{CalDAVResource.isOwner} determines if the authenticated principal of the

Modified: CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/test/test_wrapping.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/test/test_wrapping.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/twistedcaldav/test/test_wrapping.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -120,7 +120,7 @@
         record = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
         uid = record.uid
         txn = self.transactionUnderTest()
-        home = yield txn.calendarHomeWithUID(uid, True)
+        home = yield txn.calendarHomeWithUID(uid, create=True)
         cal = yield home.calendarWithName("calendar")
         yield cal.createCalendarObjectWithName(objectName, VComponent.fromString(objectText))
         yield self.commit()
@@ -139,7 +139,7 @@
         record = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
         uid = record.uid
         txn = self.transactionUnderTest()
-        home = yield txn.addressbookHomeWithUID(uid, True)
+        home = yield txn.addressbookHomeWithUID(uid, create=True)
         adbk = yield home.addressbookWithName("addressbook")
         yield adbk.createAddressBookObjectWithName(objectName, VCComponent.fromString(objectText))
         yield self.commit()

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/base/datastore/subpostgres.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/base/datastore/subpostgres.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/base/datastore/subpostgres.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -26,11 +26,15 @@
 from hashlib import md5
 from pipes import quote as shell_quote
 
-import pgdb as postgres
+if True:  # flip to False to use the pg8000 DB-API driver instead of pgdb
+    import pgdb as postgres
+else:
+    import pg8000 as postgres
 
 from twisted.python.procutils import which
 from twisted.internet.protocol import ProcessProtocol
 
+from twext.enterprise.dal.parseschema import splitSQLString
 from twext.python.log import Logger
 from twext.python.filepath import CachingFilePath
 
@@ -38,6 +42,7 @@
 from twisted.internet.defer import Deferred
 from txdav.base.datastore.dbapiclient import DBAPIConnector
 from txdav.base.datastore.dbapiclient import postgresPreflight
+from txdav.common.icommondatastore import InternalDataStoreError
 
 from twisted.application.service import MultiService
 
@@ -48,10 +53,12 @@
 _MAGIC_READY_COOKIE = "database system is ready to accept connections"
 
 
-class _PostgresMonitor(ProcessProtocol):
+
+class PostgresMonitor(ProcessProtocol):
     """
     A monitoring protocol which watches the postgres subprocess.
     """
+    log = Logger()
 
     def __init__(self, svc=None):
         self.lineReceiver = LineReceiver()
@@ -77,18 +84,22 @@
 
 
     def outReceived(self, out):
-        log.warn("received postgres stdout {out!r}", out=out)
+        for line in out.split("\n"):
+            if line:
+                self.log.info("{message}", message=line)
         # self.lineReceiver.dataReceived(out)
 
 
     def errReceived(self, err):
-        log.warn("received postgres stderr {err}", err=err)
+        for line in err.split("\n"):
+            if line:
+                self.log.error("{message}", message=line)
         self.lineReceiver.dataReceived(err)
 
 
     def processEnded(self, reason):
-        log.warn(
-            "postgres process ended with status {status}",
+        self.log.info(
+            "pg_ctl process ended with status={status}",
             status=reason.value.status
         )
         # If pg_ctl exited with zero, we were successful in starting postgres
@@ -98,7 +109,7 @@
         if reason.value.status == 0:
             self.completionDeferred.callback(None)
         else:
-            log.warn("Could not start postgres; see postgres.log")
+            self.log.error("Could not start postgres; see postgres.log")
             self.completionDeferred.errback(reason)
 
 
@@ -161,7 +172,7 @@
         """
         The process is over, fire the Deferred with the output.
         """
-        self.deferred.callback(''.join(self.output))
+        self.deferred.callback("".join(self.output))
 
 
 
@@ -179,7 +190,6 @@
         testMode=False,
         uid=None, gid=None,
         spawnedDBUser="caldav",
-        importFileName=None,
         pgCtl="pg_ctl",
         initDB="initdb",
         reactor=None,
@@ -196,9 +206,6 @@
 
         @param spawnedDBUser: the postgres role
         @type spawnedDBUser: C{str}
-        @param importFileName: path to SQL file containing previous data to
-            import
-        @type importFileName: C{str}
         """
 
         # FIXME: By default there is very little (4MB) shared memory available,
@@ -262,12 +269,20 @@
         self.uid = uid
         self.gid = gid
         self.spawnedDBUser = spawnedDBUser
-        self.importFileName = importFileName
         self.schema = schema
         self.monitor = None
         self.openConnections = []
-        self._pgCtl = pgCtl
-        self._initdb = initDB
+
+        def locateCommand(name, cmd):
+            for found in which(cmd):
+                return found
+
+            raise InternalDataStoreError(
+                "Unable to locate {} command: {}".format(name, cmd)
+            )
+
+        self._pgCtl = locateCommand("pg_ctl", pgCtl)
+        self._initdb = locateCommand("initdb", initDB)
         self._reactor = reactor
         self._postgresPid = None
 
@@ -280,33 +295,22 @@
         return self._reactor
 
 
-    def pgCtl(self):
-        """
-        Locate the path to pg_ctl.
-        """
-        return which(self._pgCtl)[0]
-
-
-    def initdb(self):
-        return which(self._initdb)[0]
-
-
     def activateDelayedShutdown(self):
         """
         Call this when starting database initialization code to
         protect against shutdown.
 
-        Sets the delayedShutdown flag to True so that if reactor
-        shutdown commences, the shutdown will be delayed until
-        deactivateDelayedShutdown is called.
+        Sets the delayedShutdown flag to True so that if reactor shutdown
+        commences, the shutdown will be delayed until deactivateDelayedShutdown
+        is called.
         """
         self.delayedShutdown = True
 
 
     def deactivateDelayedShutdown(self):
         """
-        Call this when database initialization code has completed so
-        that the reactor can shutdown.
+        Call this when database initialization code has completed so that the
+        reactor can shutdown.
         """
         self.delayedShutdown = False
         if self.shutdownDeferred:
@@ -317,38 +321,106 @@
         if databaseName is None:
             databaseName = self.databaseName
 
+        m = getattr(self, "_connectorFor_{}".format(postgres.__name__), None)
+        if m is None:
+            raise InternalDataStoreError(
+                "Unknown Postgres DBM module: {}".format(postgres)
+            )
+
+        return m(databaseName)
+
+
+    def _connectorFor_pgdb(self, databaseName):
+        dsn = "{}:dbname={}".format(self.host, databaseName)
+
         if self.spawnedDBUser:
-            dsn = "{}:dbname={}:{}".format(
-                self.host, databaseName, self.spawnedDBUser
-            )
+            dsn = "{}:{}".format(dsn, self.spawnedDBUser)
         elif self.uid is not None:
-            dsn = "{}:dbname={}:{}".format(
-                self.host, databaseName, pwd.getpwuid(self.uid).pw_name
-            )
-        else:
-            dsn = "{}:dbname={}".format(self.host, databaseName)
+            dsn = "{}:{}".format(dsn, pwd.getpwuid(self.uid).pw_name)
 
         kwargs = {}
         if self.port:
             kwargs["host"] = "{}:{}".format(self.host, self.port)
 
+        log.info(
+            "Connecting to Postgres with dsn={dsn!r} args={args}",
+            dsn=dsn, args=kwargs
+        )
+
         return DBAPIConnector(postgres, postgresPreflight, dsn, **kwargs)
 
 
+    def _connectorFor_pg8000(self, databaseName):
+        kwargs = dict(database=databaseName)
+
+        if self.host.startswith("/"):
+            # We're using a socket file
+            socketFP = CachingFilePath(self.host)
+
+            if socketFP.isdir():
+                # We have been given the directory, not the actual socket file
+                socketFP = socketFP.child(
+                    ".s.PGSQL.{}".format(self.port if self.port else 5432)
+                )
+
+            if not socketFP.isSocket():
+                raise InternalDataStoreError(
+                    "No such socket file: {}".format(socketFP.path)
+                )
+
+            kwargs["host"] = None
+            kwargs["unix_sock"] = socketFP.path
+        else:
+            kwargs["host"] = self.host
+            kwargs["unix_sock"] = None
+
+        if self.port:
+            kwargs["port"] = self.port
+
+        if self.spawnedDBUser:
+            kwargs["user"] = self.spawnedDBUser
+        elif self.uid is not None:
+            kwargs["user"] = pwd.getpwuid(self.uid).pw_name
+
+        log.info("Connecting to Postgres with args={args}", args=kwargs)
+
+        return DBAPIConnector(postgres, postgresPreflight, **kwargs)
+
+
     def produceConnection(self, label="<unlabeled>", databaseName=None):
         """
         Produce a DB-API 2.0 connection pointed at this database.
         """
-        return self._connectorFor(databaseName).connect(label)
+        connection = self._connectorFor(databaseName).connect(label)
 
+        if postgres.__name__ == "pg8000":
+            # Patch pg8000 behavior to match what we need wrt text processing
 
+            def my_text_out(v):
+                return v.encode("utf-8") if isinstance(v, unicode) else str(v)
+            connection.realConnection.py_types[str] = (705, postgres.core.FC_TEXT, my_text_out)
+            connection.realConnection.py_types[postgres.six.text_type] = (705, postgres.core.FC_TEXT, my_text_out)
+
+            def my_text_recv(data, offset, length):
+                return str(data[offset: offset + length])
+            connection.realConnection.default_factory = lambda: (postgres.core.FC_TEXT, my_text_recv)
+            connection.realConnection.pg_types[19] = (postgres.core.FC_BINARY, my_text_recv)
+            connection.realConnection.pg_types[25] = (postgres.core.FC_BINARY, my_text_recv)
+            connection.realConnection.pg_types[705] = (postgres.core.FC_BINARY, my_text_recv)
+            connection.realConnection.pg_types[829] = (postgres.core.FC_TEXT, my_text_recv)
+            connection.realConnection.pg_types[1042] = (postgres.core.FC_BINARY, my_text_recv)
+            connection.realConnection.pg_types[1043] = (postgres.core.FC_BINARY, my_text_recv)
+            connection.realConnection.pg_types[2275] = (postgres.core.FC_BINARY, my_text_recv)
+
+        return connection
+
+
     def ready(self, createDatabaseConn, createDatabaseCursor):
         """
         Subprocess is ready.  Time to initialize the subservice.
         If the database has not been created and there is a dump file,
         then the dump file is imported.
         """
-
         if self.resetSchema:
             try:
                 createDatabaseCursor.execute(
@@ -364,24 +436,20 @@
             )
         except:
             # database already exists
-            executeSQL = False
+            sqlToExecute = None
         else:
             # database does not yet exist; if dump file exists, execute it,
             # otherwise execute schema
-            executeSQL = True
             sqlToExecute = self.schema
-            if self.importFileName:
-                importFilePath = CachingFilePath(self.importFileName)
-                if importFilePath.exists():
-                    sqlToExecute = importFilePath.getContent()
 
         createDatabaseCursor.close()
         createDatabaseConn.close()
 
-        if executeSQL:
+        if sqlToExecute is not None:
             connection = self.produceConnection()
             cursor = connection.cursor()
-            cursor.execute(sqlToExecute)
+            for statement in splitSQLString(sqlToExecute):
+                cursor.execute(statement)
             connection.commit()
             connection.close()
 
@@ -419,7 +487,6 @@
         """
         Start the database and initialize the subservice.
         """
-
         def createConnection():
             try:
                 createDatabaseConn = self.produceConnection(
@@ -427,16 +494,26 @@
                 )
             except postgres.DatabaseError as e:
                 log.error(
-                    "Unable to connect to database for schema creation: {error}",
+                    "Unable to connect to database for schema creation:"
+                    " {error}",
                     error=e
                 )
                 raise
+
             createDatabaseCursor = createDatabaseConn.cursor()
-            createDatabaseCursor.execute("commit")
+
+            if postgres.__name__ == "pg8000":
+                createDatabaseConn.realConnection.autocommit = True
+            elif postgres.__name__ == "pgdb":
+                createDatabaseCursor.execute("commit")
+            else:
+                raise InternalDataStoreError(
+                    "Unknown Postgres DBM module: {}".format(postgres)
+                )
+
             return createDatabaseConn, createDatabaseCursor
 
-        monitor = _PostgresMonitor(self)
-        pgCtl = self.pgCtl()
+        monitor = PostgresMonitor(self)
         # check consistency of initdb and postgres?
 
         options = []
@@ -446,7 +523,7 @@
         )
         if self.socketDir:
             options.append(
-                "-k {}"
+                "-c unix_socket_directories={}"
                 .format(shell_quote(self.socketDir.path))
             )
         if self.port:
@@ -477,23 +554,17 @@
         if self.testMode:
             options.append("-c log_statement=all")
 
-        log.warn(
-            "Requesting postgres start via {cmd} {opts}",
-            cmd=pgCtl, opts=options
-        )
+        args = [
+            self._pgCtl, "start",
+            "--log={}".format(self.logFile),
+            "--timeout=86400",  # Plenty of time for a long cluster upgrade
+            "-w",  # Wait for startup to complete
+            "-o", " ".join(options),  # Options passed to postgres
+        ]
+
+        log.info("Requesting postgres start via: {args}", args=args)
         self.reactor.spawnProcess(
-            monitor, pgCtl,
-            [
-                pgCtl,
-                "start",
-                "-l", self.logFile,
-                "-t 86400",  # Give plenty of time for a long cluster upgrade
-                "-w",
-                # XXX what are the quoting rules for '-o'?  do I need to repr()
-                # the path here?
-                "-o",
-                " ".join(options),
-            ],
+            monitor, self._pgCtl, args,
             env=self.env, path=self.workingDir.path,
             uid=self.uid, gid=self.gid,
         )
@@ -517,12 +588,12 @@
             We started postgres; we're responsible for stopping it later.
             Call pgCtl status to get the pid.
             """
-            log.warn("{cmd} exited", cmd=pgCtl)
+            log.info("{cmd} exited", cmd=self._pgCtl)
             self.shouldStopDatabase = True
             d = Deferred()
             statusMonitor = CapturingProcessProtocol(d, None)
             self.reactor.spawnProcess(
-                statusMonitor, pgCtl, [pgCtl, "status"],
+                statusMonitor, self._pgCtl, [self._pgCtl, "status"],
                 env=self.env, path=self.workingDir.path,
                 uid=self.uid, gid=self.gid,
             )
@@ -537,7 +608,7 @@
             d = Deferred()
             statusMonitor = CapturingProcessProtocol(d, None)
             self.reactor.spawnProcess(
-                statusMonitor, pgCtl, [pgCtl, "status"],
+                statusMonitor, self._pgCtl, [self._pgCtl, "status"],
                 env=self.env, path=self.workingDir.path,
                 uid=self.uid, gid=self.gid,
             )
@@ -548,7 +619,10 @@
             We can't start postgres or connect to a running instance.  Shut
             down.
             """
-            log.failure("Can't start or connect to postgres", f)
+            log.critical(
+                "Can't start or connect to postgres: {failure.value}",
+                failure=f
+            )
             self.deactivateDelayedShutdown()
             self.reactor.stop()
 
@@ -565,11 +639,10 @@
         env.update(PGDATA=clusterDir.path,
                    PGHOST=self.host,
                    PGUSER=self.spawnedDBUser)
-        initdb = self.initdb()
 
         if self.socketDir:
             if not self.socketDir.isdir():
-                log.warn("Creating {dir}", dir=self.socketDir.path)
+                log.info("Creating {dir}", dir=self.socketDir.path)
                 self.socketDir.createDirectory()
 
             if self.uid and self.gid:
@@ -578,11 +651,11 @@
             os.chmod(self.socketDir.path, 0770)
 
         if not self.dataStoreDirectory.isdir():
-            log.warn("Creating {dir}", dir=self.dataStoreDirectory.path)
+            log.info("Creating {dir}", dir=self.dataStoreDirectory.path)
             self.dataStoreDirectory.createDirectory()
 
         if not self.workingDir.isdir():
-            log.warn("Creating {dir}", dir=self.workingDir.path)
+            log.info("Creating {dir}", dir=self.workingDir.path)
             self.workingDir.createDirectory()
 
         if self.uid and self.gid:
@@ -591,11 +664,12 @@
 
         if not clusterDir.isdir():
             # No cluster directory, run initdb
-            log.warn("Running initdb for {dir}", dir=clusterDir.path)
+            log.info("Running initdb for {dir}", dir=clusterDir.path)
             dbInited = Deferred()
             self.reactor.spawnProcess(
                 CapturingProcessProtocol(dbInited, None),
-                initdb, [initdb, "-E", "UTF8", "-U", self.spawnedDBUser],
+                self._initdb,
+                [self._initdb, "-E", "UTF8", "-U", self.spawnedDBUser],
                 env=env, path=self.workingDir.path,
                 uid=self.uid, gid=self.gid,
             )
@@ -603,7 +677,7 @@
             def doCreate(result):
                 if result.find("FATAL:") != -1:
                     log.error(result)
-                    raise RuntimeError(
+                    raise InternalDataStoreError(
                         "Unable to initialize postgres database: {}"
                         .format(result)
                     )
@@ -612,7 +686,7 @@
             dbInited.addCallback(doCreate)
 
         else:
-            log.warn("Cluster already exists at {dir}", dir=clusterDir.path)
+            log.info("Cluster already exists at {dir}", dir=clusterDir.path)
             self.startDatabase()
 
 
@@ -633,12 +707,11 @@
             # If pg_ctl's startup wasn't successful, don't bother to stop the
             # database.  (This also happens in command-line tools.)
             if self.shouldStopDatabase:
-                monitor = _PostgresMonitor()
-                pgCtl = self.pgCtl()
+                monitor = PostgresMonitor()
                 # FIXME: why is this 'logfile' and not self.logfile?
                 self.reactor.spawnProcess(
-                    monitor, pgCtl,
-                    [pgCtl, "-l", "logfile", "stop"],
+                    monitor, self._pgCtl,
+                    [self._pgCtl, "-l", "logfile", "stop"],
                     env=self.env, path=self.workingDir.path,
                     uid=self.uid, gid=self.gid,
                 )
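
Schema SQL is now split with splitSQLString and executed one statement at a time,
which suits drivers such as pg8000 that expect a single statement per execute call.  A
stand-alone sketch of that pattern, with sqlite3 standing in for Postgres and a naive
splitter standing in for the real twext helper:

    import sqlite3

    SCHEMA = """
    create table calendar_home (owner_uid text primary key);
    create table calendar (home text, name text);
    """

    def split_sql_string(sql):
        # Naive splitter for illustration only; twext's splitSQLString also
        # copes with quoted semicolons, function bodies and so on.
        return [s.strip() for s in sql.split(";") if s.strip()]

    connection = sqlite3.connect(":memory:")
    cursor = connection.cursor()
    for statement in split_sql_string(SCHEMA):
        cursor.execute(statement)
    connection.commit()
    connection.close()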

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/base/datastore/test/test_subpostgres.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/base/datastore/test/test_subpostgres.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/base/datastore/test/test_subpostgres.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -139,61 +139,3 @@
         cursor.execute("select * from test_dummy_table")
         values = cursor.fetchall()
         self.assertEquals(map(list, values), [["dummy"]])
-
-
-    @inlineCallbacks
-    def test_startService_withDumpFile(self):
-        """
-        Assuming a properly configured environment ($PATH points at an 'initdb'
-        and 'postgres', $PYTHONPATH includes pgdb), starting a
-        L{PostgresService} will start the service passed to it, after importing
-        an existing dump file.
-        """
-
-        test = self
-
-        class SimpleService1(Service):
-
-            instances = []
-            ready = Deferred()
-
-            def __init__(self, connectionFactory, storageService):
-                self.connection = connectionFactory()
-                test.addCleanup(self.connection.close)
-                self.instances.append(self)
-
-
-            def startService(self):
-                cursor = self.connection.cursor()
-                try:
-                    cursor.execute(
-                        "insert into import_test_table values ('value2')"
-                    )
-                except:
-                    self.ready.errback()
-                else:
-                    self.ready.callback(None)
-                finally:
-                    cursor.close()
-
-        # The SQL in importFile.sql will get executed, including the insertion
-        # of "value1"
-        importFileName = (
-            CachingFilePath(__file__).parent().child("importFile.sql").path
-        )
-        svc = PostgresService(
-            CachingFilePath("postgres_3.pgdb"),
-            SimpleService1,
-            "",
-            databaseName="dummy_db",
-            testMode=True,
-            importFileName=importFileName
-        )
-        svc.startService()
-        self.addCleanup(svc.stopService)
-        yield SimpleService1.ready
-        connection = SimpleService1.instances[0].connection
-        cursor = connection.cursor()
-        cursor.execute("select * from import_test_table")
-        values = cursor.fetchall()
-        self.assertEquals(map(list, values), [["value1"], ["value2"]])

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/base/datastore/util.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/base/datastore/util.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/base/datastore/util.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -86,6 +86,18 @@
         return self.delete(key)
 
 
+    # Home objects by UID
+
+    def keyForHomeWithUID(self, homeType, ownerUID, status):
+        return "homeWithUID:%s:%s:%s" % (homeType, status, ownerUID)
+
+
+    # Home objects by id
+
+    def keyForHomeWithID(self, homeType, homeResourceID, status):
+        return "homeWithID:%s:%s:%s" % (homeType, status, homeResourceID)
+
+
     # Home child objects by name
 
     def keyForObjectWithName(self, homeResourceID, name):
@@ -100,8 +112,8 @@
 
     # Home child objects by external id
 
-    def keyForObjectWithExternalID(self, homeResourceID, externalID):
-        return "objectWithExternalID:%s:%s" % (homeResourceID, externalID)
+    def keyForObjectWithBindUID(self, homeResourceID, bindUID):
+        return "objectWithBindUID:%s:%s" % (homeResourceID, bindUID)
 
 
     # Home metadata (Created/Modified)
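
The two new key builders give home lookups by owner UID and by resource id their own
status-qualified cache namespaces.  A minimal sketch of how such keys keep the
differently-scoped entries apart (a plain dict stands in for the real cacher and the
cached values are invented):

    def keyForHomeWithUID(homeType, ownerUID, status):
        return "homeWithUID:%s:%s:%s" % (homeType, status, ownerUID)

    def keyForHomeWithID(homeType, homeResourceID, status):
        return "homeWithID:%s:%s:%s" % (homeType, status, homeResourceID)

    cache = {}
    cache[keyForHomeWithUID("calendar", "user01", 0)] = 101     # UID -> resource id
    cache[keyForHomeWithID("calendar", 101, 0)] = "user01"      # resource id -> UID
    assert len(cache) == 2  # the two namespaces never collide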

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/index_file.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/index_file.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/index_file.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -50,7 +50,7 @@
 from txdav.common.icommondatastore import SyncTokenValidException, \
     ReservationError, IndexedSearchException
 
-from twistedcaldav.dateops import pyCalendarTodatetime
+from twistedcaldav.dateops import pyCalendarToSQLTimestamp
 from twistedcaldav.ical import Component
 from twistedcaldav.sql import AbstractSQLDatabase
 from twistedcaldav.sql import db_prefix
@@ -658,7 +658,7 @@
         Gives all resources which have not been expanded beyond a given date
         in the index
         """
-        return self._db_values_for_sql("select NAME from RESOURCE where RECURRANCE_MAX < :1", pyCalendarTodatetime(minDate))
+        return self._db_values_for_sql("select NAME from RESOURCE where RECURRANCE_MAX < :1", pyCalendarToSQLTimestamp(minDate))
 
 
     def reExpandResource(self, name, expand_until):
@@ -747,7 +747,7 @@
             """
             insert into RESOURCE (NAME, UID, TYPE, RECURRANCE_MAX, ORGANIZER)
             values (:1, :2, :3, :4, :5)
-            """, name, uid, calendar.resourceType(), pyCalendarTodatetime(recurrenceLimit) if recurrenceLimit else None, organizer
+            """, name, uid, calendar.resourceType(), pyCalendarToSQLTimestamp(recurrenceLimit) if recurrenceLimit else None, organizer
         )
         resourceid = self.lastrowid
 
@@ -785,8 +785,8 @@
                     """,
                     resourceid,
                     float,
-                    pyCalendarTodatetime(start),
-                    pyCalendarTodatetime(end),
+                    pyCalendarToSQLTimestamp(start),
+                    pyCalendarToSQLTimestamp(end),
                     icalfbtype_to_indexfbtype.get(instance.component.getFBType(), 'F'),
                     transp
                 )
@@ -811,7 +811,7 @@
                     """
                     insert into TIMESPAN (RESOURCEID, FLOAT, START, END, FBTYPE, TRANSPARENT)
                     values (:1, :2, :3, :4, :5, :6)
-                    """, resourceid, float, pyCalendarTodatetime(start), pyCalendarTodatetime(end), '?', '?'
+                    """, resourceid, float, pyCalendarToSQLTimestamp(start), pyCalendarToSQLTimestamp(end), '?', '?'
                 )
                 instanceid = self.lastrowid
                 peruserdata = calendar.perUserData(None)

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/query/builder.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/query/builder.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/query/builder.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -14,7 +14,7 @@
 # limitations under the License.
 ##
 
-from twistedcaldav.dateops import floatoffset, pyCalendarTodatetime
+from twistedcaldav.dateops import floatoffset, pyCalendarToSQLTimestamp
 
 from txdav.caldav.datastore.query.filter import ComponentFilter, PropertyFilter, TextMatch, TimeRange
 from txdav.common.datastore.query import expression
@@ -220,8 +220,8 @@
     endfloat = floatoffset(end, tzinfo) if end else None
 
     return (
-        pyCalendarTodatetime(start) if start else None,
-        pyCalendarTodatetime(end) if end else None,
-        pyCalendarTodatetime(startfloat) if startfloat else None,
-        pyCalendarTodatetime(endfloat) if endfloat else None,
+        pyCalendarToSQLTimestamp(start) if start else None,
+        pyCalendarToSQLTimestamp(end) if end else None,
+        pyCalendarToSQLTimestamp(startfloat) if startfloat else None,
+        pyCalendarToSQLTimestamp(endfloat) if endfloat else None,
     )

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/query/test/test_filter.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/query/test/test_filter.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/query/test/test_filter.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -29,7 +29,6 @@
 from txdav.caldav.datastore.query.generator import CalDAVSQLQueryGenerator
 from txdav.common.datastore.sql_tables import schema
 
-from dateutil.tz import tzutc
 import datetime
 from twistedcaldav.ical import Component
 
@@ -97,7 +96,7 @@
 
         self.assertEqual(select.toSQL(), SQLFragment(
             "select distinct RESOURCE_NAME, ICALENDAR_UID, ICALENDAR_TYPE from CALENDAR_OBJECT, TIME_RANGE where ICALENDAR_TYPE in (?, ?, ?) and (FLOATING = ? and START_DATE < ? and END_DATE > ? or FLOATING = ? and START_DATE < ? and END_DATE > ?) and CALENDAR_OBJECT_RESOURCE_ID = RESOURCE_ID and TIME_RANGE.CALENDAR_RESOURCE_ID = ?",
-            [Parameter('arg1', 3), False, datetime.datetime(2006, 6, 5, 17, 0, tzinfo=tzutc()), datetime.datetime(2006, 6, 5, 16, 0, tzinfo=tzutc()), True, datetime.datetime(2006, 6, 5, 13, 0, tzinfo=tzutc()), datetime.datetime(2006, 6, 5, 12, 0, tzinfo=tzutc()), 1234]
+            [Parameter('arg1', 3), False, datetime.datetime(2006, 6, 5, 17, 0), datetime.datetime(2006, 6, 5, 16, 0), True, datetime.datetime(2006, 6, 5, 13, 0), datetime.datetime(2006, 6, 5, 12, 0), 1234]
         ))
         self.assertEqual(args, {"arg1": ("VEVENT", "VFREEBUSY", "VAVAILABILITY")})
         self.assertEqual(usedtimerange, True)
@@ -126,7 +125,7 @@
 
         self.assertEqual(select.toSQL(), SQLFragment(
             "select distinct RESOURCE_NAME, ICALENDAR_UID, ICALENDAR_TYPE, ORGANIZER, FLOATING, coalesce(ADJUSTED_START_DATE, START_DATE), coalesce(ADJUSTED_END_DATE, END_DATE), FBTYPE, TIME_RANGE.TRANSPARENT, PERUSER.TRANSPARENT from CALENDAR_OBJECT, TIME_RANGE left outer join PERUSER on INSTANCE_ID = TIME_RANGE_INSTANCE_ID and USER_ID = ? where ICALENDAR_TYPE in (?, ?, ?) and (FLOATING = ? and coalesce(ADJUSTED_START_DATE, START_DATE) < ? and coalesce(ADJUSTED_END_DATE, END_DATE) > ? or FLOATING = ? and coalesce(ADJUSTED_START_DATE, START_DATE) < ? and coalesce(ADJUSTED_END_DATE, END_DATE) > ?) and CALENDAR_OBJECT_RESOURCE_ID = RESOURCE_ID and TIME_RANGE.CALENDAR_RESOURCE_ID = ?",
-            ['user01', Parameter('arg1', 3), False, datetime.datetime(2006, 6, 5, 17, 0, tzinfo=tzutc()), datetime.datetime(2006, 6, 5, 16, 0, tzinfo=tzutc()), True, datetime.datetime(2006, 6, 5, 13, 0, tzinfo=tzutc()), datetime.datetime(2006, 6, 5, 12, 0, tzinfo=tzutc()), 1234]
+            ['user01', Parameter('arg1', 3), False, datetime.datetime(2006, 6, 5, 17, 0), datetime.datetime(2006, 6, 5, 16, 0), True, datetime.datetime(2006, 6, 5, 13, 0), datetime.datetime(2006, 6, 5, 12, 0), 1234]
         ))
         self.assertEqual(args, {"arg1": ("VEVENT", "VFREEBUSY", "VAVAILABILITY")})
         self.assertEqual(usedtimerange, True)
@@ -193,7 +192,7 @@
 
         self.assertEqual(select.toSQL(), SQLFragment(
             "select distinct RESOURCE_NAME, ICALENDAR_UID, ICALENDAR_TYPE from CALENDAR_OBJECT, TIME_RANGE where (ICALENDAR_TYPE = ? and (FLOATING = ? and END_DATE > ? or FLOATING = ? and END_DATE > ?) or ICALENDAR_TYPE = ?) and CALENDAR_OBJECT_RESOURCE_ID = RESOURCE_ID and TIME_RANGE.CALENDAR_RESOURCE_ID = ?",
-            ['VEVENT', False, datetime.datetime(2006, 6, 5, 16, 0, tzinfo=tzutc()), True, datetime.datetime(2006, 6, 5, 12, 0, tzinfo=tzutc()), 'VTODO', 1234]
+            ['VEVENT', False, datetime.datetime(2006, 6, 5, 16, 0), True, datetime.datetime(2006, 6, 5, 12, 0), 'VTODO', 1234]
         ))
         self.assertEqual(args, {})
         self.assertEqual(usedtimerange, True)

Deleted: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/schedule.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/schedule.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/schedule.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -1,214 +0,0 @@
-# -*- test-case-name: txdav.caldav.datastore.test.test_scheduling -*-
-##
-# Copyright (c) 2010-2015 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-from zope.interface.declarations import implements
-from txdav.caldav.icalendarstore import ICalendarHome, ICalendar, ICalendarObject, \
-    ICalendarTransaction, ICalendarStore
-
-from twisted.python.util import FancyEqMixin
-from twisted.python.components import proxyForInterface
-from twisted.internet.defer import inlineCallbacks, returnValue
-
-
-
-class ImplicitTransaction(
-        proxyForInterface(ICalendarTransaction,
-                          originalAttribute="_transaction")):
-    """
-    Wrapper around an L{ICalendarStoreTransaction}.
-    """
-
-    def __init__(self, transaction):
-        """
-        Initialize an L{ImplicitTransaction}.
-
-        @type transaction: L{ICalendarStoreTransaction}
-        """
-        self._transaction = transaction
-
-
-    @inlineCallbacks
-    def calendarHomeWithUID(self, uid, create=False):
-        # FIXME: 'create' flag
-        newHome = yield super(ImplicitTransaction, self).calendarHomeWithUID(uid, create)
-#        return ImplicitCalendarHome(newHome, self)
-        if newHome is None:
-            returnValue(None)
-        else:
-            # FIXME: relay transaction
-            returnValue(ImplicitCalendarHome(newHome, None))
-
-
-
-class ImplicitCalendarHome(proxyForInterface(ICalendarHome, "_calendarHome")):
-
-    implements(ICalendarHome)
-
-    def __init__(self, calendarHome, transaction):
-        """
-        Initialize L{ImplicitCalendarHome} with an underlying
-        calendar home and L{ImplicitTransaction}.
-        """
-        self._calendarHome = calendarHome
-        self._transaction = transaction
-
-
-#    def properties(self):
-#        # FIXME: wrap?
-#        return self._calendarHome.properties()
-
-    @inlineCallbacks
-    def calendars(self):
-        superCalendars = (yield super(ImplicitCalendarHome, self).calendars())
-        wrapped = []
-        for calendar in superCalendars:
-            wrapped.append(ImplicitCalendar(self, calendar))
-        returnValue(wrapped)
-
-
-    @inlineCallbacks
-    def loadCalendars(self):
-        superCalendars = (yield super(ImplicitCalendarHome, self).loadCalendars())
-        wrapped = []
-        for calendar in superCalendars:
-            wrapped.append(ImplicitCalendar(self, calendar))
-        returnValue(wrapped)
-
-
-    def createCalendarWithName(self, name):
-        self._calendarHome.createCalendarWithName(name)
-
-
-    def removeCalendarWithName(self, name):
-        self._calendarHome.removeCalendarWithName(name)
-
-
-    @inlineCallbacks
-    def calendarWithName(self, name):
-        calendar = yield self._calendarHome.calendarWithName(name)
-        if calendar is not None:
-            returnValue(ImplicitCalendar(self, calendar))
-        else:
-            returnValue(None)
-
-
-    def hasCalendarResourceUIDSomewhereElse(self, uid, ok_object, type):
-        return self._calendarHome.hasCalendarResourceUIDSomewhereElse(uid, ok_object, type)
-
-
-    def getCalendarResourcesForUID(self, uid):
-        return self._calendarHome.getCalendarResourcesForUID(uid)
-
-
-
-class ImplicitCalendarObject(object):
-    implements(ICalendarObject)
-
-    def setComponent(self, component):
-        pass
-
-
-    def component(self):
-        pass
-
-
-    def uid(self):
-        pass
-
-
-    def componentType(self):
-        pass
-
-
-    def organizer(self):
-        pass
-
-
-    def properties(self):
-        pass
-
-
-
-class ImplicitCalendar(FancyEqMixin,
-                       proxyForInterface(ICalendar, "_subCalendar")):
-
-    compareAttributes = (
-        "_subCalendar",
-        "_parentHome",
-    )
-
-    def __init__(self, parentHome, subCalendar):
-        self._parentHome = parentHome
-        self._subCalendar = subCalendar
-        self._supportedComponents = None
-
-#    def ownerCalendarHome(self):
-#        return self._parentHome
-#    def calendarObjects(self):
-#        # FIXME: wrap
-#        return self._subCalendar.calendarObjects()
-#    def calendarObjectWithUID(self, uid): ""
-#    def createCalendarObjectWithName(self, name, component):
-#        # FIXME: implement most of StoreCalendarObjectResource here!
-#        self._subCalendar.createCalendarObjectWithName(name, component)
-#    def syncToken(self): ""
-#    def calendarObjectsInTimeRange(self, start, end, timeZone): ""
-#    def calendarObjectsSinceToken(self, token): ""
-#    def properties(self):
-#        # FIXME: probably need to wrap this as well
-#        return self._subCalendar.properties()
-#
-#    def calendarObjectWithName(self, name):
-#        #FIXME: wrap
-#        return self._subCalendar.calendarObjectWithName(name)
-
-
-    def _createCalendarObjectWithNameInternal(self, name, component, internal_state, options=None):
-        return self.createCalendarObjectWithName(name, component, options)
-
-
-    def setSupportedComponents(self, supported_components):
-        """
-        Update the database column with the supported components. Technically this should only happen once
-        on collection creation, but for migration we may need to change after the fact - hence a separate api.
-        """
-        self._supportedComponents = supported_components
-
-
-    def getSupportedComponents(self):
-        return self._supportedComponents
-
-
-
-class ImplicitStore(proxyForInterface(ICalendarStore, "_calendarStore")):
-    """
-    This is a wrapper around an L{ICalendarStore} that implements implicit
-    scheduling.
-    """
-
-    def __init__(self, calendarStore):
-        """
-        Create an L{ImplicitStore} wrapped around another
-        L{ICalendarStore} provider.
-        """
-        self._calendarStore = calendarStore
-
-
-    def newTransaction(self, label="unlabeled"):
-        """
-        Wrap an underlying L{ITransaction}.
-        """
-        return ImplicitTransaction(self._calendarStore.newTransaction(label))

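The schedule.py module removed above appears to have been an incomplete prototype of implicit scheduling (most of its methods were stubbed out or commented away). Its wrapper classes were built on Twisted's proxyForInterface: each one delegated an entire store interface to an underlying object and overrode only the methods it wanted to intercept. A minimal sketch of that delegation pattern, using a hypothetical IThing interface in place of the real ICalendarTransaction/ICalendarStore interfaces:

    # Sketch of the proxyForInterface pattern used by the deleted wrappers.
    # IThing/Thing are hypothetical stand-ins for the real store interfaces.
    from zope.interface import Interface, implementer
    from twisted.python.components import proxyForInterface

    class IThing(Interface):
        def name():
            """Return this thing's name."""
        def describe():
            """Return a human-readable description."""

    @implementer(IThing)
    class Thing(object):
        def name(self):
            return "thing"
        def describe(self):
            return "a plain thing"

    class WrappedThing(proxyForInterface(IThing, originalAttribute="_original")):
        """
        Every IThing method not defined here is delegated to self._original;
        only describe() is intercepted, mirroring how ImplicitTransaction
        wrapped calendarHomeWithUID().
        """
        def __init__(self, original):
            self._original = original

        def describe(self):
            return "wrapped " + self._original.describe()

    assert WrappedThing(Thing()).name() == "thing"                      # delegated
    assert WrappedThing(Thing()).describe() == "wrapped a plain thing"  # overridden
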
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/inbound.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/inbound.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/inbound.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -260,11 +260,11 @@
             return
 
         txn = self.store.newTransaction(label="MailReceiver.processDSN")
-        result = (yield txn.imipLookupByToken(token))
+        records = (yield txn.imipLookupByToken(token))
         yield txn.commit()
         try:
             # Note the results are returned as utf-8 encoded strings
-            organizer, attendee, _ignore_icaluid = result[0]
+            record = records[0]
         except:
             # This isn't a token we recognize
             log.error(
@@ -272,7 +272,7 @@
                 % (token, msgId))
             returnValue(self.UNKNOWN_TOKEN)
 
-        calendar.removeAllButOneAttendee(attendee)
+        calendar.removeAllButOneAttendee(record.attendee)
         calendar.getOrganizerProperty().setValue(organizer)
         for comp in calendar.subcomponents():
             if comp.name() == "VEVENT":
@@ -288,8 +288,11 @@
         log.warn("Mail gateway processing DSN %s" % (msgId,))
         txn = self.store.newTransaction(label="MailReceiver.processDSN")
         yield txn.enqueue(
-            IMIPReplyWork, organizer=organizer, attendee=attendee,
-            icalendarText=str(calendar))
+            IMIPReplyWork,
+            organizer=record.organizer,
+            attendee=record.attendee,
+            icalendarText=str(calendar)
+        )
         yield txn.commit()
         returnValue(self.INJECTION_SUBMITTED)
 
@@ -313,11 +316,11 @@
             returnValue(self.MALFORMED_TO_ADDRESS)
 
         txn = self.store.newTransaction(label="MailReceiver.processReply")
-        result = (yield txn.imipLookupByToken(token))
+        records = (yield txn.imipLookupByToken(token))
         yield txn.commit()
         try:
             # Note the results are returned as utf-8 encoded strings
-            organizer, attendee, _ignore_icaluid = result[0]
+            record = records[0]
         except:
             # This isn't a token we recognize
             log.error(
@@ -337,11 +340,11 @@
                 "in message %s" % (msg['Message-ID'],))
 
             toAddr = None
-            fromAddr = attendee[7:]
-            if organizer.startswith("mailto:"):
-                toAddr = organizer[7:]
-            elif organizer.startswith("urn:x-uid:"):
-                uid = organizer[10:]
+            fromAddr = record.attendee[7:]
+            if record.organizer.startswith("mailto:"):
+                toAddr = record.organizer[7:]
+            elif record.organizer.startswith("urn:x-uid:"):
+                uid = record.organizer[10:]
                 record = yield self.directory.recordWithUID(uid)
                 try:
                     if record and record.emailAddresses:
@@ -376,23 +379,23 @@
         calendar = Component.fromString(calBody)
         event = calendar.mainComponent()
 
-        calendar.removeAllButOneAttendee(attendee)
+        calendar.removeAllButOneAttendee(record.attendee)
         organizerProperty = calendar.getOrganizerProperty()
         if organizerProperty is None:
             # ORGANIZER is required per rfc2446 section 3.2.3
             log.warn(
                 "Mail gateway didn't find an ORGANIZER in REPLY %s"
                 % (msg['Message-ID'],))
-            event.addProperty(Property("ORGANIZER", organizer))
+            event.addProperty(Property("ORGANIZER", record.organizer))
         else:
-            organizerProperty.setValue(organizer)
+            organizerProperty.setValue(record.organizer)
 
         if not calendar.getAttendees():
             # The attendee we're expecting isn't there, so add it back
             # with a SCHEDULE-STATUS of SERVICE_UNAVAILABLE.
             # The organizer will then see that the reply was not successful.
             attendeeProp = Property(
-                "ATTENDEE", attendee,
+                "ATTENDEE", record.attendee,
                 params={
                     "SCHEDULE-STATUS": iTIPRequestStatus.SERVICE_UNAVAILABLE,
                 }
@@ -406,8 +409,11 @@
 
         txn = self.store.newTransaction(label="MailReceiver.processReply")
         yield txn.enqueue(
-            IMIPReplyWork, organizer=organizer, attendee=attendee,
-            icalendarText=str(calendar))
+            IMIPReplyWork,
+            organizer=record.organizer,
+            attendee=record.attendee,
+            icalendarText=str(calendar)
+        )
         yield txn.commit()
         returnValue(self.INJECTION_SUBMITTED)
 

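The inbound.py change above switches imipLookupByToken from returning (organizer, attendee, icaluid) tuples to returning record objects, so processDSN and processReply now read record.organizer and record.attendee instead of unpacking by position; named attributes also let the token record grow new columns without breaking callers. A minimal sketch of the shape this code now expects, with a namedtuple standing in for the real store record class:

    # Sketch only: IMIPTokenRecord is a hypothetical stand-in; the real records
    # come from the store's iMIP token table and expose at least these names.
    from collections import namedtuple

    IMIPTokenRecord = namedtuple(
        "IMIPTokenRecord", ["token", "organizer", "attendee", "icaluid"]
    )

    def firstMatch(records):
        """Return the first record for a token, or None if the token is unknown."""
        return records[0] if records else None

    records = [
        IMIPTokenRecord(
            token="xyzzy",
            organizer="urn:x-uid:5A985493-EE2C-4665-94CF-4DFEA3A89500",
            attendee="mailto:user02@example.com",
            icaluid="1E71F9C8-AEDA-48EB-98D0-76E898F6BB5C",
        )
    ]
    record = firstMatch(records)
    if record is not None:
        fromAddr = record.attendee[len("mailto:"):]  # attendee CUA -> plain address
        print(fromAddr)                              # user02@example.com
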
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/outbound.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/outbound.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/outbound.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -364,28 +364,29 @@
 
             # Reuse or generate a token based on originator, toAddr, and
             # event uid
-            token = (yield txn.imipGetToken(originator, toAddr.lower(), icaluid))
-            if token is None:
+            record = (yield txn.imipGetToken(originator, toAddr.lower(), icaluid))
+            if record is None:
 
                 # Because in the past the originator was sometimes in mailto:
                 # form, lookup an existing token by mailto: as well
                 organizerProperty = calendar.getOrganizerProperty()
                 organizerEmailAddress = organizerProperty.parameterValue("EMAIL", None)
                 if organizerEmailAddress is not None:
-                    token = (yield txn.imipGetToken("mailto:%s" % (organizerEmailAddress.lower(),), toAddr.lower(), icaluid))
+                    record = (yield txn.imipGetToken("mailto:%s" % (organizerEmailAddress.lower(),), toAddr.lower(), icaluid))
 
-            if token is None:
-                token = (yield txn.imipCreateToken(originator, toAddr.lower(), icaluid))
+            if record is None:
+                record = (yield txn.imipCreateToken(originator, toAddr.lower(), icaluid))
                 self.log.debug("Mail gateway created token %s for %s "
                                "(originator), %s (recipient) and %s (icaluid)"
-                               % (token, originator, toAddr, icaluid))
+                               % (record.token, originator, toAddr, icaluid))
                 inviteState = "new"
 
             else:
                 self.log.debug("Mail gateway reusing token %s for %s "
                                "(originator), %s (recipient) and %s (icaluid)"
-                               % (token, originator, toAddr, icaluid))
+                               % (record.token, originator, toAddr, icaluid))
                 inviteState = "update"
+            token = record.token
 
             fullServerAddress = self.address
             _ignore_name, serverAddress = email.utils.parseaddr(fullServerAddress)

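outbound.py gets the same treatment: imipGetToken and imipCreateToken now hand back a record rather than a bare token string, and the generated Reply-To address uses record.token. The lookup order is unchanged; only the return type differs. A condensed sketch of the reuse-or-create flow, where the txn methods are the store calls shown above and the surrounding helper is purely illustrative:

    # Illustrative helper; txn.imipGetToken / txn.imipCreateToken are assumed to
    # return a record (or None) with a .token attribute, as in the diff above.
    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def tokenForRecipient(txn, originator, toAddr, icaluid, organizerEmail=None):
        record = yield txn.imipGetToken(originator, toAddr.lower(), icaluid)
        if record is None and organizerEmail is not None:
            # Older tokens were sometimes stored with a mailto: originator, so
            # fall back to that form before creating a new token.
            record = yield txn.imipGetToken(
                "mailto:%s" % (organizerEmail.lower(),), toAddr.lower(), icaluid
            )
        if record is None:
            record = yield txn.imipCreateToken(originator, toAddr.lower(), icaluid)
            inviteState = "new"
        else:
            inviteState = "update"
        returnValue((record.token, inviteState))
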
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/test/test_inbound.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/test/test_inbound.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/test/test_inbound.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -175,13 +175,13 @@
 
         # Make sure a known token *is* processed
         txn = self.store.newTransaction()
-        token = (yield txn.imipCreateToken(
+        record = (yield txn.imipCreateToken(
             "urn:x-uid:5A985493-EE2C-4665-94CF-4DFEA3A89500",
             "mailto:user02 at example.com",
             "1E71F9C8-AEDA-48EB-98D0-76E898F6BB5C"
         ))
         yield txn.commit()
-        calBody = template % token
+        calBody = template % record.token
         result = (yield self.receiver.processDSN(calBody, "xyzzy"))
         self.assertEquals(result, MailReceiver.INJECTION_SUBMITTED)
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/test/test_mailgateway.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/test/test_mailgateway.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/test/test_mailgateway.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -44,9 +44,8 @@
             "icaluid1", token="token1")
         yield migrateTokensToStore(self.path, self.store)
         txn = self.store.newTransaction()
-        results = yield (txn.imipLookupByToken("token1"))
-        organizer, attendee, icaluid = results[0]
+        records = yield (txn.imipLookupByToken("token1"))
         yield txn.commit()
-        self.assertEquals(organizer, "urn:uuid:user01")
-        self.assertEquals(attendee, "mailto:attendee@example.com")
-        self.assertEquals(icaluid, "icaluid1")
+        self.assertEquals(records[0].organizer, "urn:uuid:user01")
+        self.assertEquals(records[0].attendee, "mailto:attendee@example.com")
+        self.assertEquals(records[0].icaluid, "icaluid1")

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/test/test_outbound.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/test/test_outbound.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/imip/test/test_outbound.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -316,17 +316,17 @@
         yield JobItem.waitEmpty(self.store.newTransaction, reactor, 60)
 
         txn = self.store.newTransaction()
-        token = (yield txn.imipGetToken(
+        record = (yield txn.imipGetToken(
             ORGANIZER,
             ATTENDEE,
             ICALUID
         ))
-        self.assertTrue(token)
-        organizer, attendee, icaluid = (yield txn.imipLookupByToken(token))[0]
+        self.assertTrue(record is not None)
+        record = (yield txn.imipLookupByToken(record.token))[0]
         yield txn.commit()
-        self.assertEquals(organizer, ORGANIZER)
-        self.assertEquals(attendee, ATTENDEE)
-        self.assertEquals(icaluid, ICALUID)
+        self.assertEquals(record.organizer, ORGANIZER)
+        self.assertEquals(record.attendee, ATTENDEE)
+        self.assertEquals(record.icaluid, ICALUID)
 
 
     @inlineCallbacks
@@ -492,12 +492,12 @@
             if UID: # The organizer is local, and server is sending to remote
                     # attendee
                 txn = self.store.newTransaction()
-                token = (yield txn.imipGetToken(inputOriginator, inputRecipient, UID))
+                record = (yield txn.imipGetToken(inputOriginator, inputRecipient, UID))
                 yield txn.commit()
-                self.assertNotEquals(token, None)
+                self.assertNotEquals(record, None)
                 self.assertEquals(
                     msg["Reply-To"],
-                    "server+%s at example.com" % (token,))
+                    "server+%s at example.com" % (record.token,))
 
                 # Make sure attendee property for organizer exists and matches
                 # the CUA of the organizer property
@@ -529,31 +529,31 @@
     @inlineCallbacks
     def test_tokens(self):
         txn = self.store.newTransaction()
-        token = (yield txn.imipLookupByToken("xyzzy"))
+        self.assertEquals((yield txn.imipLookupByToken("xyzzy")), [])
         yield txn.commit()
-        self.assertEquals(token, [])
 
         txn = self.store.newTransaction()
-        token1 = (yield txn.imipCreateToken("organizer", "attendee", "icaluid"))
+        record1 = (yield txn.imipCreateToken("organizer", "attendee", "icaluid"))
         yield txn.commit()
 
         txn = self.store.newTransaction()
-        token2 = (yield txn.imipGetToken("organizer", "attendee", "icaluid"))
+        record2 = (yield txn.imipGetToken("organizer", "attendee", "icaluid"))
         yield txn.commit()
-        self.assertEquals(token1, token2)
+        self.assertEquals(record1.token, record2.token)
 
         txn = self.store.newTransaction()
+        record = (yield txn.imipLookupByToken(record1.token))[0]
         self.assertEquals(
-            map(list, (yield txn.imipLookupByToken(token1))),
-            [["organizer", "attendee", "icaluid"]])
+            [record.organizer, record.attendee, record.icaluid],
+            ["organizer", "attendee", "icaluid"])
         yield txn.commit()
 
         txn = self.store.newTransaction()
-        yield txn.imipRemoveToken(token1)
+        yield txn.imipRemoveToken(record1.token)
         yield txn.commit()
 
         txn = self.store.newTransaction()
-        self.assertEquals((yield txn.imipLookupByToken(token1)), [])
+        self.assertEquals((yield txn.imipLookupByToken(record1.token)), [])
         yield txn.commit()
 
 
@@ -568,7 +568,7 @@
         # Explictly store a token with mailto: CUA for organizer
         # (something that doesn't happen any more, but did in the past)
         txn = self.store.newTransaction()
-        origToken = (yield txn.imipCreateToken(
+        origRecord = (yield txn.imipCreateToken(
             organizerEmail,
             "mailto:attendee at example.com",
             "CFDD5E46-4F74-478A-9311-B3FF905449C3"
@@ -588,15 +588,15 @@
 
         # Verify we didn't create a new token...
         txn = self.store.newTransaction()
-        token = (yield txn.imipGetToken(inputOriginator, inputRecipient, UID))
+        record = (yield txn.imipGetToken(inputOriginator, inputRecipient, UID))
         yield txn.commit()
-        self.assertEquals(token, None)
+        self.assertEquals(record, None)
 
         # But instead kept the old one...
         txn = self.store.newTransaction()
-        token = (yield txn.imipGetToken(organizerEmail, inputRecipient, UID))
+        record = (yield txn.imipGetToken(organizerEmail, inputRecipient, UID))
         yield txn.commit()
-        self.assertEquals(token, origToken)
+        self.assertEquals(record.token, origRecord.token)
 
 
     def generateSampleEmail(self, caltext=initialInviteText):

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/ischedule/delivery.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/ischedule/delivery.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/ischedule/delivery.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -448,15 +448,6 @@
 
     @inlineCallbacks
     def _processRequest(self, ssl, host, port, path):
-        from twisted.internet import reactor
-        f = Factory()
-        f.protocol = HTTPClientProtocol
-        if ssl:
-            ep = GAIEndpoint(reactor, host, port, _configuredClientContextFactory())
-        else:
-            ep = GAIEndpoint(reactor, host, port)
-        proto = (yield ep.connect(f))
-
         if not self.server.podding() and config.Scheduling.iSchedule.DKIM.Enabled:
             domain, selector, key_file, algorithm, useDNSKey, useHTTPKey, usePrivateExchangeKey, expire = DKIMUtils.getConfiguration(config)
             request = DKIMRequest(
@@ -481,6 +472,21 @@
         if accountingEnabledForCategory("iSchedule"):
             self.loggedRequest = yield self.logRequest(request)
 
+        response = yield self._submitRequest(ssl, host, port, request)
+        returnValue(response)
+
+
+    @inlineCallbacks
+    def _submitRequest(self, ssl, host, port, request):
+        from twisted.internet import reactor
+        f = Factory()
+        f.protocol = HTTPClientProtocol
+        if ssl:
+            ep = GAIEndpoint(reactor, host, port, _configuredClientContextFactory())
+        else:
+            ep = GAIEndpoint(reactor, host, port)
+        proto = (yield ep.connect(f))
+
         response = (yield proto.submitRequest(request))
 
         returnValue(response)

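The delivery.py change factors the connection handling out of _processRequest into a new _submitRequest method, so the (possibly DKIM-signed) request is fully built and logged before any socket work happens, and the network step can be overridden or stubbed on its own. A small self-contained sketch of the same split, with a fake protocol standing in for txweb2's HTTPClientProtocol and the GAIEndpoint setup:

    # Illustrative only: the real code connects via GAIEndpoint/HTTPClientProtocol;
    # a fake protocol keeps this sketch runnable on its own.
    from twisted.internet.defer import inlineCallbacks, returnValue, succeed

    class FakeProtocol(object):
        """Stand-in for the HTTP client protocol; just echoes the request back."""
        def submitRequest(self, request):
            return succeed("response to %r" % (request,))

    class RequestSender(object):

        @inlineCallbacks
        def _processRequest(self, ssl, host, port, path):
            # Build (and, in the real code, sign and log) the request first...
            scheme = "https" if ssl else "http"
            request = "%s://%s:%d%s" % (scheme, host, port, path)
            # ...then hand it to the submission step, which owns the connection.
            response = yield self._submitRequest(ssl, host, port, request)
            returnValue(response)

        @inlineCallbacks
        def _submitRequest(self, ssl, host, port, request):
            # Keeping the network step separate lets tests stub it out and lets
            # other delivery paths reuse the request-building logic unchanged.
            proto = FakeProtocol()
            response = yield proto.submitRequest(request)
            returnValue(response)
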
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/test/test_work.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/test/test_work.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/test/test_work.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -250,7 +250,7 @@
 
         work = yield jobs[0].workItem()
         self.assertTrue(isinstance(work, ScheduleOrganizerWork))
-        self.assertEqual(work.icalendarUid, "12345-67890")
+        self.assertEqual(work.icalendarUID, "12345-67890")
         self.assertEqual(scheduleActionFromSQL[work.scheduleAction], "create")
 
         yield work.delete()

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/work.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/work.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/scheduling/work.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -77,7 +77,7 @@
 
         baseargs = {
             "jobID": kwargs.pop("jobID"),
-            "icalendarUid": kwargs.pop("icalendarUid"),
+            "icalendarUID": kwargs.pop("icalendarUID"),
             "workType": cls.workType()
         }
 
@@ -121,7 +121,7 @@
         # cause deadlocks if done in the wrong order
 
         # Row level lock on this item
-        locked = yield self.baseWork.trylock(ScheduleWork.icalendarUid == self.icalendarUid)
+        locked = yield self.baseWork.trylock(ScheduleWork.icalendarUID == self.icalendarUID)
         if locked:
             yield self.trylock()
         returnValue(locked)
@@ -136,7 +136,7 @@
         """
         self.__dict__["baseWork"] = baseWork
         self.__dict__["jobID"] = baseWork.jobID
-        self.__dict__["icalendarUid"] = baseWork.icalendarUid
+        self.__dict__["icalendarUID"] = baseWork.icalendarUID
 
 
     def delete(self):
@@ -174,7 +174,7 @@
         if self.workType() == ScheduleOrganizerSendWork.workType():
             all = yield self.baseWork.query(
                 self.transaction,
-                (ScheduleWork.icalendarUid == self.icalendarUid).And(ScheduleWork.workID != self.workID),
+                (ScheduleWork.icalendarUID == self.icalendarUID).And(ScheduleWork.workID != self.workID),
                 order=ScheduleWork.workID,
                 limit=1,
             )
@@ -183,7 +183,7 @@
                 if work.workType == self.workType():
                     job = yield JobItem.load(self.transaction, work.jobID)
                     yield job.update(notBefore=datetime.datetime.utcnow())
-                    log.debug("ScheduleOrganizerSendWork - promoted job: {id}, UID: '{uid}'", id=work.workID, uid=self.icalendarUid)
+                    log.debug("ScheduleOrganizerSendWork - promoted job: {id}, UID: '{uid}'", id=work.workID, uid=self.icalendarUID)
 
 
     @classmethod
@@ -323,7 +323,7 @@
         proposal = (yield txn.enqueue(
             cls,
             notBefore=notBefore,
-            icalendarUid=uid,
+            icalendarUID=uid,
             scheduleAction=scheduleActionToSQL[action],
             homeResourceID=home.id(),
             resourceID=resource.id() if resource else None,
@@ -347,10 +347,10 @@
             calendar_old = Component.fromString(self.icalendarTextOld) if self.icalendarTextOld else None
             calendar_new = Component.fromString(self.icalendarTextNew) if self.icalendarTextNew else None
 
-            log.debug("ScheduleOrganizerWork - running for ID: {id}, UID: {uid}, organizer: {org}", id=self.workID, uid=self.icalendarUid, org=organizer)
+            log.debug("ScheduleOrganizerWork - running for ID: {id}, UID: {uid}, organizer: {org}", id=self.workID, uid=self.icalendarUID, org=organizer)
 
             # We need to get the UID lock for implicit processing.
-            yield NamedLock.acquire(self.transaction, "ImplicitUIDLock:%s" % (hashlib.md5(self.icalendarUid).hexdigest(),))
+            yield NamedLock.acquire(self.transaction, "ImplicitUIDLock:%s" % (hashlib.md5(self.icalendarUID).hexdigest(),))
 
             from txdav.caldav.datastore.scheduling.implicit import ImplicitScheduler
             scheduler = ImplicitScheduler()
@@ -359,7 +359,7 @@
                 scheduleActionFromSQL[self.scheduleAction],
                 home,
                 resource,
-                self.icalendarUid,
+                self.icalendarUID,
                 calendar_old,
                 calendar_new,
                 self.smartMerge
@@ -368,15 +368,15 @@
             self._dequeued()
 
         except Exception, e:
-            log.debug("ScheduleOrganizerWork - exception ID: {id}, UID: '{uid}', {err}", id=self.workID, uid=self.icalendarUid, err=str(e))
+            log.debug("ScheduleOrganizerWork - exception ID: {id}, UID: '{uid}', {err}", id=self.workID, uid=self.icalendarUID, err=str(e))
             log.debug(traceback.format_exc())
             raise
         except:
-            log.debug("ScheduleOrganizerWork - bare exception ID: {id}, UID: '{uid}'", id=self.workID, uid=self.icalendarUid)
+            log.debug("ScheduleOrganizerWork - bare exception ID: {id}, UID: '{uid}'", id=self.workID, uid=self.icalendarUID)
             log.debug(traceback.format_exc())
             raise
 
-        log.debug("ScheduleOrganizerWork - done for ID: {id}, UID: {uid}, organizer: {org}", id=self.workID, uid=self.icalendarUid, org=organizer)
+        log.debug("ScheduleOrganizerWork - done for ID: {id}, UID: {uid}, organizer: {org}", id=self.workID, uid=self.icalendarUID, org=organizer)
 
 
 
@@ -418,7 +418,7 @@
         proposal = (yield txn.enqueue(
             cls,
             notBefore=notBefore,
-            icalendarUid=uid,
+            icalendarUID=uid,
             scheduleAction=scheduleActionToSQL[action],
             homeResourceID=home.id(),
             resourceID=resource.id() if resource else None,
@@ -449,13 +449,13 @@
             log.debug(
                 "ScheduleOrganizerSendWork - running for ID: {id}, UID: {uid}, organizer: {org}, attendee: {att}",
                 id=self.workID,
-                uid=self.icalendarUid,
+                uid=self.icalendarUID,
                 org=organizer,
                 att=self.attendee
             )
 
             # We need to get the UID lock for implicit processing.
-            yield NamedLock.acquire(self.transaction, "ImplicitUIDLock:%s" % (hashlib.md5(self.icalendarUid).hexdigest(),))
+            yield NamedLock.acquire(self.transaction, "ImplicitUIDLock:%s" % (hashlib.md5(self.icalendarUID).hexdigest(),))
 
             from txdav.caldav.datastore.scheduling.implicit import ImplicitScheduler
             scheduler = ImplicitScheduler()
@@ -464,7 +464,7 @@
                 scheduleActionFromSQL[self.scheduleAction],
                 home,
                 resource,
-                self.icalendarUid,
+                self.icalendarUID,
                 organizer,
                 self.attendee,
                 itipmsg,
@@ -486,18 +486,18 @@
             self._dequeued()
 
         except Exception, e:
-            log.debug("ScheduleOrganizerSendWork - exception ID: {id}, UID: '{uid}', {err}", id=self.workID, uid=self.icalendarUid, err=str(e))
+            log.debug("ScheduleOrganizerSendWork - exception ID: {id}, UID: '{uid}', {err}", id=self.workID, uid=self.icalendarUID, err=str(e))
             log.debug(traceback.format_exc())
             raise
         except:
-            log.debug("ScheduleOrganizerSendWork - bare exception ID: {id}, UID: '{uid}'", id=self.workID, uid=self.icalendarUid)
+            log.debug("ScheduleOrganizerSendWork - bare exception ID: {id}, UID: '{uid}'", id=self.workID, uid=self.icalendarUID)
             log.debug(traceback.format_exc())
             raise
 
         log.debug(
             "ScheduleOrganizerSendWork - for ID: {id}, UID: {uid}, organizer: {org}, attendee: {att}",
             id=self.workID,
-            uid=self.icalendarUid,
+            uid=self.icalendarUID,
             org=organizer,
             att=self.attendee
         )
@@ -521,7 +521,7 @@
         proposal = (yield txn.enqueue(
             cls,
             notBefore=notBefore,
-            icalendarUid=uid,
+            icalendarUID=uid,
             homeResourceID=home.id(),
             resourceID=resource.id() if resource else None,
             itipMsg=itipmsg.getTextWithTimezones(includeTimezones=not config.EnableTimezonesByReference),
@@ -649,7 +649,7 @@
         notBefore = datetime.datetime.utcnow() + datetime.timedelta(seconds=config.Scheduling.Options.WorkQueues.AttendeeRefreshBatchDelaySeconds)
         proposal = (yield txn.enqueue(
             cls,
-            icalendarUid=organizer_resource.uid(),
+            icalendarUID=organizer_resource.uid(),
             homeResourceID=organizer_resource._home.id(),
             resourceID=organizer_resource.id(),
             attendeeCount=len(attendees),
@@ -676,7 +676,7 @@
             log.debug("Schedule refresh for resource-id: {rid} - ignored", rid=self.resourceID)
             returnValue(None)
 
-        log.debug("ScheduleRefreshWork - running for ID: {id}, UID: {uid}", id=self.workID, uid=self.icalendarUid)
+        log.debug("ScheduleRefreshWork - running for ID: {id}, UID: {uid}", id=self.workID, uid=self.icalendarUID)
 
         # Get the unique list of pending attendees and split into batch to process
         # TODO: do a DELETE ... and rownum <= N returning attendee - but have to fix Oracle to
@@ -707,7 +707,7 @@
             notBefore = datetime.datetime.utcnow() + datetime.timedelta(seconds=config.Scheduling.Options.WorkQueues.AttendeeRefreshBatchIntervalSeconds)
             yield self.transaction.enqueue(
                 self.__class__,
-                icalendarUid=self.icalendarUid,
+                icalendarUID=self.icalendarUID,
                 homeResourceID=self.homeResourceID,
                 resourceID=self.resourceID,
                 attendeeCount=len(pendingAttendees),
@@ -721,7 +721,7 @@
 
         self._dequeued()
 
-        log.debug("ScheduleRefreshWork - done for ID: {id}, UID: {uid}", id=self.workID, uid=self.icalendarUid)
+        log.debug("ScheduleRefreshWork - done for ID: {id}, UID: {uid}", id=self.workID, uid=self.icalendarUID)
 
 
     @inlineCallbacks
@@ -790,7 +790,7 @@
         notBefore = datetime.datetime.utcnow() + datetime.timedelta(seconds=config.Scheduling.Options.WorkQueues.AutoReplyDelaySeconds)
         proposal = (yield txn.enqueue(
             cls,
-            icalendarUid=resource.uid(),
+            icalendarUID=resource.uid(),
             homeResourceID=resource._home.id(),
             resourceID=resource.id(),
             partstat=partstat,
@@ -803,7 +803,7 @@
     @inlineCallbacks
     def doWork(self):
 
-        log.debug("ScheduleAutoReplyWork - running for ID: {id}, UID: {uid}", id=self.workID, uid=self.icalendarUid)
+        log.debug("ScheduleAutoReplyWork - running for ID: {id}, UID: {uid}", id=self.workID, uid=self.icalendarUID)
 
         # Delete all other work items with the same pushID
         yield Delete(
@@ -816,7 +816,7 @@
 
         self._dequeued()
 
-        log.debug("ScheduleAutoReplyWork - done for ID: {id}, UID: {uid}", id=self.workID, uid=self.icalendarUid)
+        log.debug("ScheduleAutoReplyWork - done for ID: {id}, UID: {uid}", id=self.workID, uid=self.icalendarUID)
 
 
     @inlineCallbacks

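The work.py change is largely mechanical: the icalendarUid attribute and enqueue keyword are renamed to icalendarUID throughout, matching the test update in test_work.py above. The locking idiom is unchanged; each work item still serializes implicit-scheduling work on a named lock derived from the event UID. A small sketch of that key construction, where implicitUIDLockName is a hypothetical helper and NamedLock.acquire is the call shown in the diff:

    # The hashing scheme matches the "ImplicitUIDLock:<md5(uid)>" keys above;
    # the helper itself is illustrative.
    import hashlib

    def implicitUIDLockName(icalendarUID):
        """
        Build the named-lock key that serializes implicit-scheduling work for a
        single iCalendar UID; hashing keeps the key short and avoids awkward
        characters in lock names.  (Python 2 str; a Python 3 port would need
        icalendarUID.encode("utf-8") before hashing.)
        """
        return "ImplicitUIDLock:%s" % (hashlib.md5(icalendarUID).hexdigest(),)

    # e.g. inside doWork():
    #     yield NamedLock.acquire(self.transaction, implicitUIDLockName(self.icalendarUID))
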
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -26,7 +26,7 @@
     "CalendarObject",
 ]
 
-from twext.enterprise.dal.record import fromTable
+from twext.enterprise.dal.record import fromTable, SerializableRecord
 from twext.enterprise.dal.syntax import Count, ColumnSyntax, Delete, \
     Insert, Len, Max, Parameter, Select, Update, utcNowSQL
 from twext.enterprise.locking import NamedLock
@@ -34,11 +34,9 @@
     WORK_PRIORITY_LOW, WORK_WEIGHT_5, WORK_WEIGHT_3
 from twext.enterprise.util import parseSQLTimestamp
 from twext.python.clsprop import classproperty
-from twext.python.filepath import CachingFilePath
 from twext.python.log import Logger
 from twext.who.idirectory import RecordType
-from twistedcaldav.ical import Component as VComponent
-from txweb2.http_headers import MimeType, generateContentType
+from txweb2.http_headers import MimeType
 from txweb2.stream import readStream
 
 from twisted.internet.defer import inlineCallbacks, returnValue, succeed
@@ -48,11 +46,10 @@
 from twistedcaldav import customxml, ical
 from twistedcaldav.stdconfig import config
 from twistedcaldav.datafilters.peruserdata import PerUserDataFilter
-from twistedcaldav.dateops import normalizeForIndex, datetimeMktime, \
-    pyCalendarTodatetime, parseSQLDateToPyCalendar
+from twistedcaldav.dateops import normalizeForIndex, \
+    pyCalendarToSQLTimestamp, parseSQLDateToPyCalendar
 from twistedcaldav.ical import Component, InvalidICalendarDataError, Property
 from twistedcaldav.instance import InvalidOverriddenInstanceError
-from twistedcaldav.memcacher import Memcacher
 from twistedcaldav.timezones import TimezoneException
 
 from txdav.base.propertystore.base import PropertyName
@@ -64,14 +61,15 @@
 from txdav.caldav.datastore.scheduling.icalsplitter import iCalSplitter
 from txdav.caldav.datastore.scheduling.implicit import ImplicitScheduler
 from txdav.caldav.datastore.scheduling.utils import uidFromCalendarUserAddress
-from txdav.caldav.datastore.util import AttachmentRetrievalTransport, \
-    normalizationLookup
+from txdav.caldav.datastore.sql_attachment import Attachment, DropBoxAttachment, \
+    AttachmentLink, ManagedAttachment
+from txdav.caldav.datastore.sql_directory import GroupAttendeeRecord, \
+    GroupShareeRecord
+from txdav.caldav.datastore.util import normalizationLookup
 from txdav.caldav.datastore.util import CalendarObjectBase
-from txdav.caldav.datastore.util import StorageTransportBase
 from txdav.caldav.datastore.util import dropboxIDFromCalendarObject
 from txdav.caldav.icalendarstore import ICalendarHome, ICalendar, ICalendarObject, \
-    IAttachment, AttachmentStoreFailed, AttachmentStoreValidManagedID, \
-    AttachmentMigrationFailed, AttachmentDropboxNotAllowed, \
+    AttachmentStoreFailed, AttachmentStoreValidManagedID, \
     TooManyAttendeesError, InvalidComponentTypeError, InvalidCalendarAccessError, \
     ResourceDeletedError, \
     AttendeeAllowedError, InvalidPerUserDataMerge, ComponentUpdateState, \
@@ -79,15 +77,16 @@
     InvalidDefaultCalendar, \
     InvalidAttachmentOperation, DuplicatePrivateCommentsError, \
     TimeRangeUpperLimit, TimeRangeLowerLimit, InvalidSplit, \
-    AttachmentSizeTooLarge, UnknownTimezone, SetComponentOptions
-from txdav.caldav.icalendarstore import QuotaExceeded
+    UnknownTimezone, SetComponentOptions
 from txdav.common.datastore.sql import CommonHome, CommonHomeChild, \
-    CommonObjectResource, ECALENDARTYPE, SharingInvitation
+    CommonObjectResource, ECALENDARTYPE
+from txdav.common.datastore.sql_directory import GroupsRecord
 from txdav.common.datastore.sql_tables import _ATTACHMENTS_MODE_NONE, \
     _ATTACHMENTS_MODE_READ, _ATTACHMENTS_MODE_WRITE, _BIND_MODE_DIRECT, \
     _BIND_MODE_GROUP, _BIND_MODE_GROUP_READ, _BIND_MODE_GROUP_WRITE, \
     _BIND_MODE_OWN, _BIND_MODE_READ, _BIND_MODE_WRITE, _BIND_STATUS_ACCEPTED, \
-    _TRANSP_OPAQUE, _TRANSP_TRANSPARENT, schema
+    _TRANSP_OPAQUE, _TRANSP_TRANSPARENT, schema, _CHILD_TYPE_TRASH
+from txdav.common.datastore.sql_sharing import SharingInvitation
 from txdav.common.icommondatastore import IndexedSearchException, \
     InternalDataStoreError, HomeChildNameAlreadyExistsError, \
     HomeChildNameNotAllowedError, ObjectResourceTooBigError, \
@@ -111,8 +110,7 @@
 from urlparse import urlparse, urlunparse
 import collections
 import datetime
-import os
-import tempfile
+import itertools
 import urllib
 import uuid
 
@@ -142,7 +140,7 @@
         @type txn: L{txdav.common.datastore.sql.CommonStoreTransaction}
         """
 
-        at = schema.ATTACHMENT
+        at = Attachment._attachmentSchema
         rows = (yield Select(
             (at.DROPBOX_ID,),
             From=at,
@@ -174,8 +172,8 @@
         txn = self._store.newTransaction("CalendarStoreFeatures.upgradeToManagedAttachments - preliminary work")
         try:
             # Clear out unused CALENDAR_OBJECT.DROPBOX_IDs
-            co = schema.CALENDAR_OBJECT
-            at = schema.ATTACHMENT
+            co = CalendarObject._objectSchema
+            at = Attachment._attachmentSchema
             yield Update(
                 {co.DROPBOX_ID: None},
                 Where=co.RESOURCE_ID.In(Select(
@@ -248,7 +246,7 @@
         log.debug("  {0} affected calendar objects".format(len(cobjs),))
 
         # Get names of each matching attachment
-        at = schema.ATTACHMENT
+        at = Attachment._attachmentSchema
         names = (yield Select(
             (at.PATH,),
             From=at,
@@ -317,8 +315,8 @@
         @type dropbox_id: C{str}
         """
 
-        co = schema.CALENDAR_OBJECT
-        cb = schema.CALENDAR_BIND
+        co = CalendarObject._objectSchema
+        cb = Calendar._bindSchema
         rows = (yield Select(
             (cb.CALENDAR_HOME_RESOURCE_ID, co.CALENDAR_RESOURCE_ID, co.RESOURCE_ID,),
             From=co.join(cb, co.CALENDAR_RESOURCE_ID == cb.CALENDAR_RESOURCE_ID),
@@ -404,6 +402,33 @@
 
 
 
+class CalendarHomeRecord(SerializableRecord, fromTable(schema.CALENDAR_HOME)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.CALENDAR_HOME}.
+    """
+    pass
+
+
+
+class CalendarMetaDataRecord(SerializableRecord, fromTable(schema.CALENDAR_METADATA)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.CALENDAR_METADATA}.
+    """
+    pass
+
+
+
+class CalendarBindRecord(SerializableRecord, fromTable(schema.CALENDAR_BIND)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.CALENDAR_BIND}.
+    """
+    pass
+
+
+
 class CalendarHome(CommonHome):
 
     implements(ICalendarHome)
@@ -412,16 +437,15 @@
 
     # structured tables.  (new, preferred)
     _homeSchema = schema.CALENDAR_HOME
-    _bindSchema = schema.CALENDAR_BIND
     _homeMetaDataSchema = schema.CALENDAR_HOME_METADATA
+
+    _bindSchema = schema.CALENDAR_BIND
     _revisionsSchema = schema.CALENDAR_OBJECT_REVISIONS
     _objectSchema = schema.CALENDAR_OBJECT
 
     _notifierPrefix = "CalDAV"
     _dataVersionKey = "CALENDAR-DATAVERSION"
 
-    _cacher = Memcacher("SQL.calhome", pickle=True, key_normalization=False)
-
     _componentCalendarName = {
         "VEVENT": "calendar",
         "VTODO": "tasks",
@@ -499,37 +523,36 @@
 
     @inlineCallbacks
     def remove(self):
-        ch = schema.CALENDAR_HOME
-        cb = schema.CALENDAR_BIND
-        cor = schema.CALENDAR_OBJECT_REVISIONS
-        rp = schema.RESOURCE_PROPERTY
-
         # delete attachments corresponding to this home, also removing from disk
         yield Attachment.removedHome(self._txn, self._resourceID)
 
-        yield Delete(
-            From=cb,
-            Where=cb.CALENDAR_HOME_RESOURCE_ID == self._resourceID
-        ).on(self._txn)
+        yield super(CalendarHome, self).remove()
 
-        yield Delete(
-            From=cor,
-            Where=cor.CALENDAR_HOME_RESOURCE_ID == self._resourceID
-        ).on(self._txn)
 
-        yield Delete(
-            From=ch,
-            Where=ch.RESOURCE_ID == self._resourceID
-        ).on(self._txn)
+    @inlineCallbacks
+    def copyMetadata(self, other, calendarIDMap):
+        """
+        Copy metadata from one L{CalendarObjectResource} to another. This is only
+        used during a migration step.
+        """
 
-        yield Delete(
-            From=rp,
-            Where=rp.RESOURCE_ID == self._resourceID
+        # Simple attributes that can be copied over as-is, but the calendar id's need to be mapped
+        chm = self._homeMetaDataSchema
+        values = {}
+        for attr, col in zip(self.metadataAttributes(), self.metadataColumns()):
+            value = getattr(other, attr)
+            if attr in self._componentDefaultAttribute.values():
+                value = calendarIDMap.get(value)
+            setattr(self, attr, value)
+            values[col] = value
+
+        # Update the local data
+        yield Update(
+            values,
+            Where=chm.RESOURCE_ID == self._resourceID
         ).on(self._txn)
 
-        yield self._cacher.delete(str(self._ownerUID))
 
-
     @inlineCallbacks
     def hasCalendarResourceUIDSomewhereElse(self, uid, ok_object, mode):
         """
@@ -603,8 +626,8 @@
         """
         Implement lookup via queries.
         """
-        co = schema.CALENDAR_OBJECT
-        cb = schema.CALENDAR_BIND
+        co = self._objectSchema
+        cb = self._bindSchema
         rows = (yield Select(
             [co.PARENT_RESOURCE_ID,
              co.RESOURCE_ID],
@@ -623,10 +646,34 @@
         returnValue(None)
 
 
+    def getAllAttachments(self):
+        """
+        Return all the L{Attachment} objects associated with this calendar home.
+        Needed during migration.
+        """
+        return Attachment.loadAllAttachments(self)
+
+
+    def getAttachmentLinks(self):
+        """
+        Read the attachment<->calendar object mapping data associated with this calendar home.
+        Needed during migration only.
+        """
+        return AttachmentLink.linksForHome(self)
+
+
+    def getAttachmentByID(self, id):
+        """
+        Return a specific attachment associated with this calendar home.
+        Needed during migration only.
+        """
+        return Attachment.loadAttachmentByID(self, id)
+
+
     @inlineCallbacks
     def getAllDropboxIDs(self):
-        co = schema.CALENDAR_OBJECT
-        cb = schema.CALENDAR_BIND
+        co = self._objectSchema
+        cb = self._bindSchema
         rows = (yield Select(
             [co.DROPBOX_ID],
             From=co.join(cb, co.PARENT_RESOURCE_ID == cb.RESOURCE_ID),
@@ -639,7 +686,7 @@
 
     @inlineCallbacks
     def getAllAttachmentNames(self):
-        att = schema.ATTACHMENT
+        att = Attachment._attachmentSchema
         rows = (yield Select(
             [att.DROPBOX_ID],
             From=att,
@@ -651,8 +698,8 @@
 
     @inlineCallbacks
     def getAllManagedIDs(self):
-        at = schema.ATTACHMENT
-        attco = schema.ATTACHMENT_CALENDAR_OBJECT
+        at = Attachment._attachmentSchema
+        attco = Attachment._attachmentLinkSchema
         rows = (yield Select(
             [attco.MANAGED_ID, ],
             From=attco.join(at, attco.ATTACHMENT_ID == at.ATTACHMENT_ID),
@@ -663,6 +710,27 @@
 
 
     @inlineCallbacks
+    def getAllGroupAttendees(self):
+        """
+        Return a list of L{GroupAttendeeRecord},L{GroupRecord} for each group attendee referenced in calendar data
+        owned by this home.
+        """
+
+        results = []
+        calendars = yield self.loadChildren()
+        for calendar in calendars:
+            if not calendar.owned():
+                continue
+            children = yield calendar.objectResources()
+            cobjs = [child.id() for child in children]
+            if cobjs:
+                result = yield GroupAttendeeRecord.groupAttendeesForObjects(self._txn, cobjs)
+                results.extend(result)
+
+        returnValue(results)
+
+
+    @inlineCallbacks
     def createdHome(self):
 
         # Check whether components type must be separate
@@ -978,6 +1046,12 @@
     _objectSchema = schema.CALENDAR_OBJECT
     _timeRangeSchema = schema.TIME_RANGE
 
+    _homeRecordClass = CalendarHomeRecord
+    _metadataRecordClass = CalendarMetaDataRecord
+    _bindRecordClass = CalendarBindRecord
+    _bindHomeIDAttributeName = "calendarHomeResourceID"
+    _bindResourceIDAttributeName = "calendarResourceID"
+
     # Mapping of iCalendar property name to DB column name
     _queryFields = {
         "UID": _objectSchema.UID,
@@ -1022,8 +1096,8 @@
         """
 
         if metadataData:
-            childType = metadataData[3]
-            if childType == "trash":  # FIXME: make this an enumeration
+            childType = metadataData[cls.metadataColumns().index(cls._homeChildMetaDataSchema.CHILD_TYPE)]
+            if childType == _CHILD_TYPE_TRASH:
                 actualClass = TrashCollection
             else:
                 actualClass = cls
@@ -1034,7 +1108,6 @@
             )
 
 
-
     @classmethod
     def metadataColumns(cls):
         """
@@ -1047,11 +1120,11 @@
 
         return (
             cls._homeChildMetaDataSchema.SUPPORTED_COMPONENTS,
-            cls._homeChildMetaDataSchema.CREATED,
-            cls._homeChildMetaDataSchema.MODIFIED,
             cls._homeChildMetaDataSchema.CHILD_TYPE,
             cls._homeChildMetaDataSchema.TRASHED,
             cls._homeChildMetaDataSchema.IS_IN_TRASH,
+            cls._homeChildMetaDataSchema.CREATED,
+            cls._homeChildMetaDataSchema.MODIFIED,
         )
 
 
@@ -1067,11 +1140,11 @@
 
         return (
             "_supportedComponents",
-            "_created",
-            "_modified",
             "_childType",
             "_trashed",
             "_isInTrash",
+            "_created",
+            "_modified",
         )
 
 
@@ -1115,6 +1188,46 @@
     def _calendarHome(self):
         return self._home
 
+
+    @inlineCallbacks
+    def copyMetadata(self, other):
+        """
+        Copy metadata from one L{Calendar} to another. This is only
+        used during a migration step.
+        """
+
+        # Copy over list of attributes and the name
+        self._name = other._name
+        for attr in itertools.chain(self.metadataAttributes(), self.additionalBindAttributes()):
+            if attr in ("_created", "_modified"):
+                continue
+            if hasattr(other, attr):
+                setattr(self, attr, getattr(other, attr))
+
+        # Update the metadata table
+        cm = self._homeChildMetaDataSchema
+        values = {}
+        for attr, column in itertools.izip(self.metadataAttributes(), self.metadataColumns()):
+            if attr in ("_created", "_modified"):
+                continue
+            values[column] = getattr(self, attr)
+        yield Update(
+            values,
+            Where=(cm.RESOURCE_ID == self._resourceID)
+        ).on(self._txn)
+
+        # Update the bind table
+        cb = self._bindSchema
+        values = {
+            cb.RESOURCE_NAME: self._name
+        }
+        for attr, column in itertools.izip(self.additionalBindAttributes(), self.additionalBindColumns()):
+            values[column] = getattr(self, attr)
+        yield Update(
+            values,
+            Where=(cb.CALENDAR_HOME_RESOURCE_ID == self.viewerHome()._resourceID).And(cb.CALENDAR_RESOURCE_ID == self._resourceID)
+        ).on(self._txn)
+
     ownerCalendarHome = CommonHomeChild.ownerHome
     viewerCalendarHome = CommonHomeChild.viewerHome
     calendarObjects = CommonHomeChild.objectResources
@@ -1548,7 +1661,7 @@
         """
         Query to find resources that need to be re-expanded
         """
-        co = schema.CALENDAR_OBJECT
+        co = cls._objectSchema
         return Select(
             [co.RESOURCE_NAME],
             From=co,
@@ -1568,8 +1681,8 @@
         returnValue([row[0] for row in (
             yield self._notExpandedWithinQuery.on(
                 self._txn,
-                minDate=pyCalendarTodatetime(normalizeForIndex(minDate)) if minDate is not None else None,
-                maxDate=pyCalendarTodatetime(normalizeForIndex(maxDate)),
+                minDate=pyCalendarToSQLTimestamp(normalizeForIndex(minDate)) if minDate is not None else None,
+                maxDate=pyCalendarToSQLTimestamp(normalizeForIndex(maxDate)),
                 resourceID=self._resourceID))]
         )
 
@@ -1855,8 +1968,8 @@
 
         # First check that the actual group membership has changed
         if (yield self.updateShareeGroupLink(groupUID)):
-            groupID = (yield self._txn.groupByUID(groupUID))[0]
-            memberUIDs = yield self._txn.groupMemberUIDs(groupID)
+            group = yield self._txn.groupByUID(groupUID)
+            memberUIDs = yield self._txn.groupMemberUIDs(group.groupID)
             boundUIDs = set()
 
             home = self._homeSchema
@@ -2006,39 +2119,36 @@
         update schema.GROUP_SHAREE
         """
         changed = False
-        (
-            groupID, _ignore_name, membershipHash, _ignore_modDate,
-            _ignore_extant
-        ) = yield self._txn.groupByUID(groupUID)
+        group = yield self._txn.groupByUID(groupUID)
 
         gs = schema.GROUP_SHAREE
         rows = yield Select(
             [gs.MEMBERSHIP_HASH, gs.GROUP_BIND_MODE],
             From=gs,
             Where=(gs.CALENDAR_ID == self._resourceID).And(
-                gs.GROUP_ID == groupID)
+                gs.GROUP_ID == group.groupID)
         ).on(self._txn)
         if rows:
             [[gsMembershipHash, gsMode]] = rows
             updateMap = {}
-            if gsMembershipHash != membershipHash:
-                updateMap[gs.MEMBERSHIP_HASH] = membershipHash
+            if gsMembershipHash != group.membershipHash:
+                updateMap[gs.MEMBERSHIP_HASH] = group.membershipHash
             if mode is not None and gsMode != mode:
                 updateMap[gs.GROUP_BIND_MODE] = mode
             if updateMap:
                 yield Update(
                     updateMap,
                     Where=(gs.CALENDAR_ID == self._resourceID).And(
-                        gs.GROUP_ID == groupID
+                        gs.GROUP_ID == group.groupID
                     )
                 ).on(self._txn)
                 changed = True
         else:
             yield Insert({
-                gs.MEMBERSHIP_HASH: membershipHash,
+                gs.MEMBERSHIP_HASH: group.membershipHash,
                 gs.GROUP_BIND_MODE: mode,
                 gs.CALENDAR_ID: self._resourceID,
-                gs.GROUP_ID: groupID,
+                gs.GROUP_ID: group.groupID,
             }).on(self._txn)
             changed = True
 
@@ -2125,8 +2235,8 @@
 
         # invite every member of group
         shareeViews = []
-        groupID = (yield self._txn.groupByUID(shareeUID))[0]
-        memberUIDs = yield self._txn.groupMemberUIDs(groupID)
+        group = yield self._txn.groupByUID(shareeUID)
+        memberUIDs = yield self._txn.groupMemberUIDs(group.groupID)
         for memberUID in memberUIDs:
             if memberUID != self._home.uid():
                 shareeView = yield self.shareeView(memberUID)
@@ -2266,6 +2376,14 @@
         returnValue(invitations)
 
 
+    @inlineCallbacks
+    def groupSharees(self):
+        sharees = yield GroupShareeRecord.querysimple(self._txn, calendarID=self.id())
+        groups = set([sharee.groupID for sharee in sharees])
+        groups = (yield GroupsRecord.query(self._txn, GroupsRecord.groupID.In(groups))) if groups else []
+        returnValue({"groups": groups, "sharees": sharees})
+
+
 icalfbtype_to_indexfbtype = {
     "UNKNOWN"         : 0,
     "FREE"            : 1,
@@ -2300,7 +2418,7 @@
     implements(ICalendarObject)
 
     _objectSchema = schema.CALENDAR_OBJECT
-    _componentClass = VComponent
+    _componentClass = Component
 
     _currentDataVersion = 1
 
@@ -2366,11 +2484,11 @@
             obj.SCHEDULE_TAG,
             obj.SCHEDULE_ETAGS,
             obj.PRIVATE_COMMENTS,
+            obj.TRASHED,
+            obj.ORIGINAL_COLLECTION,
             obj.CREATED,
             obj.MODIFIED,
             obj.DATAVERSION,
-            obj.TRASHED,
-            obj.ORIGINAL_COLLECTION,
         ]
 
 
@@ -2389,11 +2507,11 @@
             "_schedule_tag",
             "_schedule_etags",
             "_private_comments",
+            "_trashed",
+            "_original_collection",
             "_created",
             "_modified",
             "_dataversion",
-            "_trashed",
-            "_original_collection",
         )
 
 
@@ -2477,9 +2595,9 @@
             groupRecord = yield self.directoryService().recordWithCalendarUserAddress(groupCUA)
             if groupRecord:
                 # get members
-                groupID = (yield self._txn.groupByUID(groupRecord.uid))[0]
-                if groupID is not None:
-                    members = yield self._txn.groupMembers(groupID)
+                group = yield self._txn.groupByUID(groupRecord.uid)
+                if group is not None:
+                    members = yield self._txn.groupMembers(group.groupID)
                     groupCUAToAttendeeMemberPropMap[groupRecord.canonicalCalendarUserAddress()] = tuple(
                         [member.attendeeProperty(params={"MEMBER": groupCUA}) for member in sorted(members, key=lambda x: x.uid)]
                     )
@@ -2503,19 +2621,14 @@
         @return: a L{dict} with group ids as the key and membership hash as the value
         @rtype: L{dict}
         """
-        ga = schema.GROUP_ATTENDEE
-        rows = yield Select(
-            [ga.GROUP_ID, ga.MEMBERSHIP_HASH],
-            From=ga,
-            Where=ga.RESOURCE_ID == self._resourceID,
-        ).on(self._txn)
-        returnValue(dict(rows))
+        records = yield GroupAttendeeRecord.querysimple(self._txn, resourceID=self._resourceID)
+        returnValue(dict([(record.groupID, record,) for record in records]))
 
 
     @inlineCallbacks
     def updateEventGroupLink(self, groupCUAToAttendeeMemberPropMap=None):
         """
-        update schema.GROUP_ATTENDEE
+        update group event links
         """
         if groupCUAToAttendeeMemberPropMap is None:
             if hasattr(self, "_groupCUAToAttendeeMemberPropMap"):
@@ -2532,42 +2645,27 @@
                 groupUID = groupRecord.uid
             else:
                 groupUID = uidFromCalendarUserAddress(groupCUA)
-            (
-                groupID, _ignore_name, membershipHash, _ignore_modDate,
-                _ignore_extant
-            ) = yield self._txn.groupByUID(groupUID)
+            group = yield self._txn.groupByUID(groupUID)
 
-            ga = schema.GROUP_ATTENDEE
-            if groupID in groupIDToMembershipHashMap:
-                if groupIDToMembershipHashMap[groupID] != membershipHash:
-                    yield Update(
-                        {ga.MEMBERSHIP_HASH: membershipHash, },
-                        Where=(ga.RESOURCE_ID == self._resourceID).And(
-                            ga.GROUP_ID == groupID)
-                    ).on(self._txn)
+            if group.groupID in groupIDToMembershipHashMap:
+                if groupIDToMembershipHashMap[group.groupID].membershipHash != group.membershipHash:
+                    yield groupIDToMembershipHashMap[group.groupID].update(membershipHash=group.membershipHash)
                     changed = True
-                del groupIDToMembershipHashMap[groupID]
+                del groupIDToMembershipHashMap[group.groupID]
             else:
-                yield Insert({
-                    ga.RESOURCE_ID: self._resourceID,
-                    ga.GROUP_ID: groupID,
-                    ga.MEMBERSHIP_HASH: membershipHash,
-                }).on(self._txn)
+                yield GroupAttendeeRecord.create(
+                    self._txn,
+                    resourceID=self._resourceID,
+                    groupID=group.groupID,
+                    membershipHash=group.membershipHash,
+                )
                 changed = True
 
         if groupIDToMembershipHashMap:
-            groupIDsToRemove = groupIDToMembershipHashMap.keys()
-            yield Delete(
-                From=ga,
-                Where=(ga.RESOURCE_ID == self._resourceID).And(
-                    ga.GROUP_ID.In(
-                        Parameter(
-                            "groupIDsToRemove",
-                            len(groupIDsToRemove)
-                        )
-                    )
-                )
-            ).on(self._txn, groupIDsToRemove=groupIDsToRemove)
+            yield GroupAttendeeRecord.deletesome(
+                self._txn,
+                GroupAttendeeRecord.groupID.In(groupIDToMembershipHashMap.keys()),
+            )
             changed = True
 
         returnValue(changed)
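
The hunks above replace hand-written DAL statements (Insert/Update/Delete against schema.GROUP_ATTENDEE) with the record-oriented helpers on GroupAttendeeRecord (querysimple, create, update, deletesome, deletesimple). As a minimal, non-authoritative sketch of the pattern being adopted, assuming the Record/fromTable helpers from twext.enterprise.dal.record and using a hypothetical class and function name purely for illustration:

    from twisted.internet.defer import inlineCallbacks
    from twext.enterprise.dal.record import Record, fromTable
    from txdav.common.datastore.sql_tables import schema

    # Illustrative only: a record class bound to the GROUP_ATTENDEE table,
    # mirroring the GroupAttendeeRecord calls used in the hunks above.
    class ExampleGroupAttendeeRecord(Record, fromTable(schema.GROUP_ATTENDEE)):
        pass

    @inlineCallbacks
    def reconcileGroupLinks(txn, resourceID, newHash):
        # simple equality query, then per-row update and a bulk delete
        records = yield ExampleGroupAttendeeRecord.querysimple(txn, resourceID=resourceID)
        for record in records:
            yield record.update(membershipHash=newHash)
        yield ExampleGroupAttendeeRecord.deletesimple(txn, resourceID=resourceID)
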
@@ -2628,11 +2726,7 @@
                     del self._groupCUAToAttendeeMemberPropMap
                 else:
                     # delete existing group rows
-                    ga = schema.GROUP_ATTENDEE
-                    yield Delete(
-                        From=ga,
-                        Where=ga.RESOURCE_ID == self._resourceID,
-                    ).on(txn)
+                    yield GroupAttendeeRecord.deletesimple(self._txn, resourceID=self._resourceID)
 
         returnValue(isOldEventWithGroupAttendees)
 
@@ -2678,13 +2772,11 @@
                     # remove group link to ensure update (update to unknown hash would work too)
                    # FIXME: it's possible that more than one group id gets updated during this single work item, so we
                     # need to make sure that ALL the group_id's are removed by this query.
-                    ga = schema.GROUP_ATTENDEE
-                    yield Delete(
-                        From=ga,
-                        Where=(ga.RESOURCE_ID == self._resourceID).And(
-                            ga.GROUP_ID == groupID
-                        )
-                    ).on(self._txn)
+                    yield GroupAttendeeRecord.deletesimple(
+                        self._txn,
+                        resourceID=self._resourceID,
+                        groupID=groupID,
+                    )
 
                     # update group attendee in remaining component
                     component = yield self.componentForUser()
@@ -2704,7 +2796,7 @@
         """
 
         # Valid calendar data checks
-        if not isinstance(component, VComponent):
+        if not isinstance(component, Component):
             raise InvalidObjectResourceError("Wrong type of object: {0}".format(type(component),))
 
         try:
@@ -3589,7 +3681,7 @@
                 recurrenceLowerLimit = None
                 recurrenceLimit = DateTime(1900, 1, 1, 0, 0, 0, tzid=Timezone(utc=True))
 
-        co = schema.CALENDAR_OBJECT
+        co = self._objectSchema
         tr = schema.TIME_RANGE
 
         # Do not update if reCreate (re-indexing - we don't want to re-write data
@@ -3649,8 +3741,8 @@
 
             # Only needed if indexing being changed
             if instanceIndexingRequired:
-                values[co.RECURRANCE_MIN] = pyCalendarTodatetime(normalizeForIndex(recurrenceLowerLimit)) if recurrenceLowerLimit else None
-                values[co.RECURRANCE_MAX] = pyCalendarTodatetime(normalizeForIndex(recurrenceLimit)) if recurrenceLimit else None
+                values[co.RECURRANCE_MIN] = pyCalendarToSQLTimestamp(normalizeForIndex(recurrenceLowerLimit)) if recurrenceLowerLimit else None
+                values[co.RECURRANCE_MAX] = pyCalendarToSQLTimestamp(normalizeForIndex(recurrenceLimit)) if recurrenceLimit else None
 
             if inserting:
                 self._resourceID, self._created, self._modified = (
@@ -3659,15 +3751,17 @@
                         Return=(co.RESOURCE_ID, co.CREATED, co.MODIFIED)
                     ).on(txn)
                 )[0]
+                self._created = parseSQLTimestamp(self._created)
+                self._modified = parseSQLTimestamp(self._modified)
             else:
                 values[co.MODIFIED] = utcNowSQL
-                self._modified = (
+                self._modified = parseSQLTimestamp((
                     yield Update(
                         values,
                         Where=co.RESOURCE_ID == self._resourceID,
                         Return=co.MODIFIED,
                     ).on(txn)
-                )[0][0]
+                )[0][0])
 
                 # Need to wipe the existing time-range for this and rebuild if required
                 if instanceIndexingRequired:
@@ -3678,8 +3772,8 @@
         else:
             # Keep MODIFIED the same when doing an index-only update
             values = {
-                co.RECURRANCE_MIN : pyCalendarTodatetime(normalizeForIndex(recurrenceLowerLimit)) if recurrenceLowerLimit else None,
-                co.RECURRANCE_MAX : pyCalendarTodatetime(normalizeForIndex(recurrenceLimit)) if recurrenceLimit else None,
+                co.RECURRANCE_MIN : pyCalendarToSQLTimestamp(normalizeForIndex(recurrenceLowerLimit)) if recurrenceLowerLimit else None,
+                co.RECURRANCE_MAX : pyCalendarToSQLTimestamp(normalizeForIndex(recurrenceLimit)) if recurrenceLimit else None,
                 co.MODIFIED : self._modified,
             }
 
@@ -3763,8 +3857,8 @@
             tr.CALENDAR_RESOURCE_ID        : self._calendar._resourceID,
             tr.CALENDAR_OBJECT_RESOURCE_ID : self._resourceID,
             tr.FLOATING                    : floating,
-            tr.START_DATE                  : pyCalendarTodatetime(start),
-            tr.END_DATE                    : pyCalendarTodatetime(end),
+            tr.START_DATE                  : pyCalendarToSQLTimestamp(start),
+            tr.END_DATE                    : pyCalendarToSQLTimestamp(end),
             tr.FBTYPE                      : icalfbtype_to_indexfbtype.get(fbtype, icalfbtype_to_indexfbtype["FREE"]),
             tr.TRANSPARENT                 : transp,
         }, Return=tr.INSTANCE_ID).on(txn))[0][0]
@@ -3776,9 +3870,9 @@
 
                 def _adjustDateTime(dt, adjustment, add_duration):
                     if isinstance(adjustment, Duration):
-                        return pyCalendarTodatetime((dt + adjustment) if add_duration else (dt - adjustment))
+                        return pyCalendarToSQLTimestamp((dt + adjustment) if add_duration else (dt - adjustment))
                     elif isinstance(adjustment, DateTime):
-                        return pyCalendarTodatetime(normalizeForIndex(adjustment))
+                        return pyCalendarToSQLTimestamp(normalizeForIndex(adjustment))
                     else:
                         return None
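
The hunks in this region switch the recurrence/time-range indexing code from pyCalendarTodatetime to pyCalendarToSQLTimestamp, and parse CREATED/MODIFIED values coming back from the database with parseSQLTimestamp. A minimal sketch of the conversion round trip assumed here; pyCalendarToSQLTimestamp is taken to live with the other date helpers in twistedcaldav.dateops (that module is in the modified-paths list), which is an assumption not confirmed by this hunk:

    from pycalendar.datetime import DateTime
    from pycalendar.timezone import Timezone
    from twext.enterprise.util import parseSQLTimestamp
    from twistedcaldav.dateops import pyCalendarToSQLTimestamp  # assumed location

    # Convert an iCalendar DateTime into a value suitable for a TIMESTAMP column...
    start = DateTime(2015, 3, 10, 20, 42, 34, tzid=Timezone(utc=True))
    sqlValue = pyCalendarToSQLTimestamp(start)

    # ...and turn a TIMESTAMP string read back from the database into a datetime.
    modified = parseSQLTimestamp("2015-03-10 20:42:34")
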
 
@@ -3793,6 +3887,29 @@
 
 
     @inlineCallbacks
+    def copyMetadata(self, other):
+        """
+        Copy metadata from one L{CalendarObjectResource} to another. This is only
+        used during a migration step.
+        """
+        co = self._objectSchema
+        values = {
+            co.ATTACHMENTS_MODE                : other._attachment,
+            co.DROPBOX_ID                      : other._dropboxID,
+            co.ACCESS                          : other._access,
+            co.SCHEDULE_OBJECT                 : other._schedule_object,
+            co.SCHEDULE_TAG                    : other._schedule_tag,
+            co.SCHEDULE_ETAGS                  : other._schedule_etags,
+            co.PRIVATE_COMMENTS                : other._private_comments,
+        }
+
+        yield Update(
+            values,
+            Where=co.RESOURCE_ID == self._resourceID
+        ).on(self._txn)
+
+
+    @inlineCallbacks
     def component(self, doUpdate=False):
         """
         Read calendar data and validate/fix it. Do not raise a store error here
@@ -3807,7 +3924,7 @@
             text = yield self._text()
 
             try:
-                component = VComponent.fromString(text)
+                component = Component.fromString(text)
             except InvalidICalendarDataError, e:
                 # This is a really bad situation, so do raise
                 raise InternalDataStoreError(
@@ -3958,6 +4075,15 @@
         )
 
 
+    def purge(self):
+        """
+        Do a "silent" removal of this object resource.
+        """
+        return self._removeInternal(
+            ComponentRemoveState.NORMAL_NO_IMPLICIT
+        )
+
+
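
The new purge() method above performs a "silent" removal: it goes through the same internal removal path as a normal delete, but with ComponentRemoveState.NORMAL_NO_IMPLICIT so no implicit scheduling (iTIP) processing is triggered. A hedged usage sketch, assuming the usual lookup methods on the transaction; the function name is illustrative only:

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def purgeOneObject(txn, homeUID, calendarName, objectName):
        # look up the object and remove it without generating scheduling messages
        home = yield txn.calendarHomeWithUID(homeUID)
        calendar = yield home.calendarWithName(calendarName)
        obj = yield calendar.calendarObjectWithName(objectName)
        if obj is not None:
            yield obj.purge()
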
     @inlineCallbacks
     def _removeInternal(self, internal_state=ComponentRemoveState.NORMAL):
 
@@ -4028,7 +4154,7 @@
         """
         DAL query to load RECURRANCE_MIN, RECURRANCE_MAX via an object's resource ID.
         """
-        co = schema.CALENDAR_OBJECT
+        co = cls._objectSchema
         return Select(
             [co.RECURRANCE_MIN, co.RECURRANCE_MAX, ],
             From=co,
@@ -4563,8 +4689,8 @@
         Get a list of managed attachments where the names returned are for the last path segment
         of the attachment URI.
         """
-        at = schema.ATTACHMENT
-        attco = schema.ATTACHMENT_CALENDAR_OBJECT
+        at = Attachment._attachmentSchema
+        attco = Attachment._attachmentLinkSchema
         rows = (yield Select(
             [attco.MANAGED_ID, at.PATH, ],
             From=attco.join(at, attco.ATTACHMENT_ID == at.ATTACHMENT_ID),
@@ -4580,8 +4706,8 @@
         """
 
         # Scan all the associated attachments for the one that matches
-        at = schema.ATTACHMENT
-        attco = schema.ATTACHMENT_CALENDAR_OBJECT
+        at = Attachment._attachmentSchema
+        attco = Attachment._attachmentLinkSchema
         rows = (yield Select(
             [attco.MANAGED_ID, at.PATH, ],
             From=attco.join(at, attco.ATTACHMENT_ID == at.ATTACHMENT_ID),
@@ -4634,8 +4760,10 @@
     @inlineCallbacks
     def attachments(self):
         if self._dropboxID:
-            rows = yield self._attachmentsQuery.on(self._txn,
-                                                   dropboxID=self._dropboxID)
+            rows = yield self._attachmentsQuery.on(
+                self._txn,
+                dropboxID=self._dropboxID,
+            )
             result = []
             for row in rows:
                 result.append((yield self.attachmentWithName(row[0])))
@@ -4695,7 +4823,7 @@
 
         splitter = iCalSplitter(config.Scheduling.Options.Splitting.Size, config.Scheduling.Options.Splitting.PastDays)
         ical = (yield self.component())
-        will_split, fullyInFuture = splitter.willSplit(ical)
+        will_split, _ignore_fullyInFuture = splitter.willSplit(ical)
         returnValue(will_split)
 
 
@@ -4914,7 +5042,6 @@
                 yield cobj.split()
 
 
-
     @inlineCallbacks
     def fromTrash(self):
         name = yield super(CalendarObject, self).fromTrash()
@@ -4981,904 +5108,9 @@
 
 
 
-class AttachmentStorageTransport(StorageTransportBase):
-
-    _TEMPORARY_UPLOADS_DIRECTORY = "Temporary"
-
-    def __init__(self, attachment, contentType, dispositionName, creating=False):
-        super(AttachmentStorageTransport, self).__init__(
-            attachment, contentType, dispositionName)
-
-        fileDescriptor, fileName = self._temporaryFile()
-        # Wrap the file descriptor in a file object we can write to
-        self._file = os.fdopen(fileDescriptor, "w")
-        self._path = CachingFilePath(fileName)
-        self._hash = hashlib.md5()
-        self._creating = creating
-
-        self._txn.postAbort(self.aborted)
-
-
-    def _temporaryFile(self):
-        """
-        Returns a (file descriptor, absolute path) tuple for a temporary file within
-        the Attachments/Temporary directory (creating the Temporary subdirectory
-        if it doesn't exist).  It is the caller's responsibility to remove the
-        file.
-        """
-        attachmentRoot = self._txn._store.attachmentsPath
-        tempUploadsPath = attachmentRoot.child(self._TEMPORARY_UPLOADS_DIRECTORY)
-        if not tempUploadsPath.exists():
-            tempUploadsPath.createDirectory()
-        return tempfile.mkstemp(dir=tempUploadsPath.path)
-
-
-    @property
-    def _txn(self):
-        return self._attachment._txn
-
-
-    def aborted(self):
-        """
-        Transaction aborted - clean up temp files.
-        """
-        if self._path.exists():
-            self._path.remove()
-
-
-    def write(self, data):
-        if isinstance(data, buffer):
-            data = str(data)
-        self._file.write(data)
-        self._hash.update(data)
-
-
-    @inlineCallbacks
-    def loseConnection(self):
-
-        # FIXME: this should be synchronously accessible; IAttachment should
-        # have a method for getting its parent just as CalendarObject/Calendar
-        # do.
-
-        # FIXME: If this method isn't called, the transaction should be
-        # prevented from committing successfully.  It's not valid to have an
-        # attachment that doesn't point to a real file.
-
-        home = (yield self._txn.calendarHomeWithResourceID(self._attachment._ownerHomeID))
-
-        oldSize = self._attachment.size()
-        newSize = self._file.tell()
-        self._file.close()
-
-        # Check max size for attachment
-        if newSize > config.MaximumAttachmentSize:
-            self._path.remove()
-            if self._creating:
-                yield self._attachment._internalRemove()
-            raise AttachmentSizeTooLarge()
-
-        # Check overall user quota
-        allowed = home.quotaAllowedBytes()
-        if allowed is not None and allowed < ((yield home.quotaUsedBytes())
-                                              + (newSize - oldSize)):
-            self._path.remove()
-            if self._creating:
-                yield self._attachment._internalRemove()
-            raise QuotaExceeded()
-
-        self._path.moveTo(self._attachment._path)
-
-        yield self._attachment.changed(
-            self._contentType,
-            self._dispositionName,
-            self._hash.hexdigest(),
-            newSize
-        )
-
-        if home:
-            # Adjust quota
-            yield home.adjustQuotaUsedBytes(self._attachment.size() - oldSize)
-
-            # Send change notification to home
-            yield home.notifyChanged()
-
-
-
-def sqltime(value):
-    return datetimeMktime(parseSQLTimestamp(value))
-
-
-
-class Attachment(object):
-
-    implements(IAttachment)
-
-    def __init__(self, txn, a_id, dropboxID, name, ownerHomeID=None, justCreated=False):
-        self._txn = txn
-        self._attachmentID = a_id
-        self._ownerHomeID = ownerHomeID
-        self._dropboxID = dropboxID
-        self._contentType = None
-        self._size = 0
-        self._md5 = None
-        self._created = None
-        self._modified = None
-        self._name = name
-        self._justCreated = justCreated
-
-
-    def __repr__(self):
-        return (
-            "<{self.__class__.__name__}: {self._attachmentID}>"
-            .format(self=self)
-        )
-
-
-    def _attachmentPathRoot(self):
-        return self._txn._store.attachmentsPath
-
-
-    @inlineCallbacks
-    def initFromStore(self):
-        """
-        Execute necessary SQL queries to retrieve attributes.
-
-        @return: C{True} if this attachment exists, C{False} otherwise.
-        """
-        att = schema.ATTACHMENT
-        if self._dropboxID:
-            where = (att.DROPBOX_ID == self._dropboxID).And(
-                att.PATH == self._name)
-        else:
-            where = (att.ATTACHMENT_ID == self._attachmentID)
-        rows = (yield Select(
-            [
-                att.ATTACHMENT_ID,
-                att.DROPBOX_ID,
-                att.CALENDAR_HOME_RESOURCE_ID,
-                att.CONTENT_TYPE,
-                att.SIZE,
-                att.MD5,
-                att.CREATED,
-                att.MODIFIED,
-                att.PATH,
-            ],
-            From=att,
-            Where=where
-        ).on(self._txn))
-
-        if not rows:
-            returnValue(None)
-
-        row_iter = iter(rows[0])
-        self._attachmentID = row_iter.next()
-        self._dropboxID = row_iter.next()
-        self._ownerHomeID = row_iter.next()
-        self._contentType = MimeType.fromString(row_iter.next())
-        self._size = row_iter.next()
-        self._md5 = row_iter.next()
-        self._created = sqltime(row_iter.next())
-        self._modified = sqltime(row_iter.next())
-        self._name = row_iter.next()
-
-        returnValue(self)
-
-
-    def dropboxID(self):
-        return self._dropboxID
-
-
-    def isManaged(self):
-        return self._dropboxID == "."
-
-
-    def name(self):
-        return self._name
-
-
-    def properties(self):
-        pass  # stub
-
-
-    def store(self, contentType, dispositionName=None):
-        if not self._name:
-            self._name = dispositionName
-        return AttachmentStorageTransport(self, contentType, dispositionName, self._justCreated)
-
-
-    def retrieve(self, protocol):
-        return AttachmentRetrievalTransport(self._path).start(protocol)
-
-
-    def changed(self, contentType, dispositionName, md5, size):
-        raise NotImplementedError
-
-    _removeStatement = Delete(
-        From=schema.ATTACHMENT,
-        Where=(schema.ATTACHMENT.ATTACHMENT_ID == Parameter("attachmentID"))
-    )
-
-
-    @inlineCallbacks
-    def remove(self):
-        oldSize = self._size
-        self._txn.postCommit(self.removePaths)
-        yield self._internalRemove()
-        # Adjust quota
-        home = (yield self._txn.calendarHomeWithResourceID(self._ownerHomeID))
-        if home:
-            yield home.adjustQuotaUsedBytes(-oldSize)
-
-            # Send change notification to home
-            yield home.notifyChanged()
-
-
-    def removePaths(self):
-        """
-        Remove the actual file and up to attachment parent directory if empty.
-        """
-        self._path.remove()
-        self.removeParentPaths()
-
-
-    def removeParentPaths(self):
-        """
-        Remove up to attachment parent directory if empty.
-        """
-        parent = self._path.parent()
-        toppath = self._attachmentPathRoot().path
-        while parent.path != toppath:
-            if len(parent.listdir()) == 0:
-                parent.remove()
-                parent = parent.parent()
-            else:
-                break
-
-
-    def _internalRemove(self):
-        """
-        Just delete the row; don't do any accounting / bookkeeping.  (This is
-        for attachments that have failed to be created due to errors during
-        storage.)
-        """
-        return self._removeStatement.on(self._txn, attachmentID=self._attachmentID)
-
-
-    @classmethod
-    @inlineCallbacks
-    def removedHome(cls, txn, homeID):
-        """
-        A calendar home is being removed so all of its attachments must go too. When removing,
-        we don't care about quota adjustment as there will be no quota once the home is removed.
-
-        TODO: this needs to be transactional wrt the actual file deletes.
-        """
-        att = schema.ATTACHMENT
-        attco = schema.ATTACHMENT_CALENDAR_OBJECT
-
-        rows = (yield Select(
-            [att.ATTACHMENT_ID, att.DROPBOX_ID, ],
-            From=att,
-            Where=(
-                att.CALENDAR_HOME_RESOURCE_ID == homeID
-            ),
-        ).on(txn))
-
-        for attachmentID, dropboxID in rows:
-            if dropboxID:
-                attachment = DropBoxAttachment(txn, attachmentID, None, None)
-            else:
-                attachment = ManagedAttachment(txn, attachmentID, None, None)
-            attachment = (yield attachment.initFromStore())
-            if attachment._path.exists():
-                attachment.removePaths()
-
-        yield Delete(
-            From=attco,
-            Where=(
-                attco.ATTACHMENT_ID.In(Select(
-                    [att.ATTACHMENT_ID, ],
-                    From=att,
-                    Where=(
-                        att.CALENDAR_HOME_RESOURCE_ID == homeID
-                    ),
-                ))
-            ),
-        ).on(txn)
-
-        yield Delete(
-            From=att,
-            Where=(
-                att.CALENDAR_HOME_RESOURCE_ID == homeID
-            ),
-        ).on(txn)
-
-
-    # IDataStoreObject
-    def contentType(self):
-        return self._contentType
-
-
-    def md5(self):
-        return self._md5
-
-
-    def size(self):
-        return self._size
-
-
-    def created(self):
-        return self._created
-
-
-    def modified(self):
-        return self._modified
-
-
-
-class DropBoxAttachment(Attachment):
-
-    @classmethod
-    @inlineCallbacks
-    def create(cls, txn, dropboxID, name, ownerHomeID):
-        """
-        Create a new Attachment object.
-
-        @param txn: The transaction to use
-        @type txn: L{CommonStoreTransaction}
-        @param dropboxID: the identifier for the attachment (dropbox id or managed id)
-        @type dropboxID: C{str}
-        @param name: the name of the attachment
-        @type name: C{str}
-        @param ownerHomeID: the resource-id of the home collection of the attachment owner
-        @type ownerHomeID: C{int}
-        """
-
-        # If store has already migrated to managed attachments we will prevent creation of dropbox attachments
-        dropbox = (yield txn.store().dropboxAllowed(txn))
-        if not dropbox:
-            raise AttachmentDropboxNotAllowed
-
-        # Now create the DB entry
-        att = schema.ATTACHMENT
-        rows = (yield Insert({
-            att.CALENDAR_HOME_RESOURCE_ID : ownerHomeID,
-            att.DROPBOX_ID                : dropboxID,
-            att.CONTENT_TYPE              : "",
-            att.SIZE                      : 0,
-            att.MD5                       : "",
-            att.PATH                      : name,
-        }, Return=(att.ATTACHMENT_ID, att.CREATED, att.MODIFIED)).on(txn))
-
-        row_iter = iter(rows[0])
-        a_id = row_iter.next()
-        created = sqltime(row_iter.next())
-        modified = sqltime(row_iter.next())
-
-        attachment = cls(txn, a_id, dropboxID, name, ownerHomeID, True)
-        attachment._created = created
-        attachment._modified = modified
-
-        # File system paths need to exist
-        try:
-            attachment._path.parent().makedirs()
-        except:
-            pass
-
-        returnValue(attachment)
-
-
-    @classmethod
-    @inlineCallbacks
-    def load(cls, txn, dropboxID, name):
-        attachment = cls(txn, None, dropboxID, name)
-        attachment = (yield attachment.initFromStore())
-        returnValue(attachment)
-
-
-    @property
-    def _path(self):
-        # Use directory hashing scheme based on MD5 of dropboxID
-        hasheduid = hashlib.md5(self._dropboxID).hexdigest()
-        attachmentRoot = self._attachmentPathRoot().child(hasheduid[0:2]).child(hasheduid[2:4]).child(hasheduid)
-        return attachmentRoot.child(self.name())
-
-
-    @classmethod
-    @inlineCallbacks
-    def resourceRemoved(cls, txn, resourceID, dropboxID):
-        """
-        Remove all attachments referencing the specified resource.
-        """
-
-        # See if any other resources still reference this dropbox ID
-        co = schema.CALENDAR_OBJECT
-        rows = (yield Select(
-            [co.RESOURCE_ID, ],
-            From=co,
-            Where=(co.DROPBOX_ID == dropboxID).And(
-                co.RESOURCE_ID != resourceID)
-        ).on(txn))
-
-        if not rows:
-            # Find each attachment with matching dropbox ID
-            att = schema.ATTACHMENT
-            rows = (yield Select(
-                [att.PATH],
-                From=att,
-                Where=(att.DROPBOX_ID == dropboxID)
-            ).on(txn))
-            for name in rows:
-                name = name[0]
-                attachment = yield cls.load(txn, dropboxID, name)
-                yield attachment.remove()
-
-
-    @inlineCallbacks
-    def changed(self, contentType, dispositionName, md5, size):
-        """
-        Dropbox attachments never change their path - ignore dispositionName.
-        """
-
-        self._contentType = contentType
-        self._md5 = md5
-        self._size = size
-
-        att = schema.ATTACHMENT
-        self._created, self._modified = map(
-            sqltime,
-            (yield Update(
-                {
-                    att.CONTENT_TYPE    : generateContentType(self._contentType),
-                    att.SIZE            : self._size,
-                    att.MD5             : self._md5,
-                    att.MODIFIED        : utcNowSQL,
-                },
-                Where=(att.ATTACHMENT_ID == self._attachmentID),
-                Return=(att.CREATED, att.MODIFIED)).on(self._txn))[0]
-        )
-
-
-    @inlineCallbacks
-    def convertToManaged(self):
-        """
-        Convert this dropbox attachment into a managed attachment by updating the
-        database and returning a new ManagedAttachment object that does not reference
-        any calendar object. Referencing will be added later.
-
-        @return: the managed attachment object
-        @rtype: L{ManagedAttachment}
-        """
-
-        # Change the DROPBOX_ID to a single "." to indicate a managed attachment.
-        att = schema.ATTACHMENT
-        (yield Update(
-            {att.DROPBOX_ID    : ".", },
-            Where=(att.ATTACHMENT_ID == self._attachmentID),
-        ).on(self._txn))
-
-        # Create an "orphaned" ManagedAttachment that points to the updated data but without
-        # an actual managed-id (which only exists when there is a reference to a calendar object).
-        mattach = (yield ManagedAttachment.load(self._txn, None, None, attachmentID=self._attachmentID))
-        mattach._managedID = str(uuid.uuid4())
-        if mattach is None:
-            raise AttachmentMigrationFailed
-
-        # Then move the file on disk from the old path to the new one
-        try:
-            mattach._path.parent().makedirs()
-        except Exception:
-            # OK to fail if it already exists, otherwise must raise
-            if not mattach._path.parent().exists():
-                raise
-        oldpath = self._path
-        newpath = mattach._path
-        oldpath.moveTo(newpath)
-        self.removeParentPaths()
-
-        returnValue(mattach)
-
-
-
-class ManagedAttachment(Attachment):
-    """
-    Managed attachments are ones that the server is in total control of. Clients do POSTs on calendar objects
-    to store the attachment data and have ATTACH properties added, updated or remove from the calendar objects.
-    Each ATTACH property in a calendar object has a MANAGED-ID iCalendar parameter that is used in the POST requests
-    to target a specific attachment. The MANAGED-ID values are unique to each calendar object resource, though
-    multiple calendar object resources can point to the same underlying attachment as there is a separate database
-    table that maps calendar objects/managed-ids to actual attachments.
-    """
-
-    @classmethod
-    @inlineCallbacks
-    def _create(cls, txn, managedID, ownerHomeID):
-        """
-        Create a new managed Attachment object.
-
-        @param txn: The transaction to use
-        @type txn: L{CommonStoreTransaction}
-        @param managedID: the identifier for the attachment
-        @type managedID: C{str}
-        @param ownerHomeID: the resource-id of the home collection of the attachment owner
-        @type ownerHomeID: C{int}
-        """
-
-        # Now create the DB entry
-        att = schema.ATTACHMENT
-        rows = (yield Insert({
-            att.CALENDAR_HOME_RESOURCE_ID : ownerHomeID,
-            att.DROPBOX_ID                : ".",
-            att.CONTENT_TYPE              : "",
-            att.SIZE                      : 0,
-            att.MD5                       : "",
-            att.PATH                      : "",
-        }, Return=(att.ATTACHMENT_ID, att.CREATED, att.MODIFIED)).on(txn))
-
-        row_iter = iter(rows[0])
-        a_id = row_iter.next()
-        created = sqltime(row_iter.next())
-        modified = sqltime(row_iter.next())
-
-        attachment = cls(txn, a_id, ".", None, ownerHomeID, True)
-        attachment._managedID = managedID
-        attachment._created = created
-        attachment._modified = modified
-
-        # File system paths need to exist
-        try:
-            attachment._path.parent().makedirs()
-        except:
-            pass
-
-        returnValue(attachment)
-
-
-    @classmethod
-    @inlineCallbacks
-    def create(cls, txn, managedID, ownerHomeID, referencedBy):
-        """
-        Create a new Attachment object.
-
-        @param txn: The transaction to use
-        @type txn: L{CommonStoreTransaction}
-        @param managedID: the identifier for the attachment
-        @type managedID: C{str}
-        @param ownerHomeID: the resource-id of the home collection of the attachment owner
-        @type ownerHomeID: C{int}
-        @param referencedBy: the resource-id of the calendar object referencing the attachment
-        @type referencedBy: C{int}
-        """
-
-        # Now create the DB entry
-        attachment = (yield cls._create(txn, managedID, ownerHomeID))
-        attachment._objectResourceID = referencedBy
-
-        # Create the attachment<->calendar object relationship for managed attachments
-        attco = schema.ATTACHMENT_CALENDAR_OBJECT
-        yield Insert({
-            attco.ATTACHMENT_ID               : attachment._attachmentID,
-            attco.MANAGED_ID                  : attachment._managedID,
-            attco.CALENDAR_OBJECT_RESOURCE_ID : attachment._objectResourceID,
-        }).on(txn)
-
-        returnValue(attachment)
-
-
-    @classmethod
-    @inlineCallbacks
-    def update(cls, txn, oldManagedID, ownerHomeID, referencedBy, oldAttachmentID):
-        """
-        Create a new Attachment object.
-
-        @param txn: The transaction to use
-        @type txn: L{CommonStoreTransaction}
-        @param oldManagedID: the identifier for the original attachment
-        @type oldManagedID: C{str}
-        @param ownerHomeID: the resource-id of the home collection of the attachment owner
-        @type ownerHomeID: C{int}
-        @param referencedBy: the resource-id of the calendar object referencing the attachment
-        @type referencedBy: C{int}
-        @param oldAttachmentID: the attachment-id of the existing attachment being updated
-        @type oldAttachmentID: C{int}
-        """
-
-        # Now create the DB entry with a new managed-ID
-        managed_id = str(uuid.uuid4())
-        attachment = (yield cls._create(txn, managed_id, ownerHomeID))
-        attachment._objectResourceID = referencedBy
-
-        # Update the attachment<->calendar object relationship for managed attachments
-        attco = schema.ATTACHMENT_CALENDAR_OBJECT
-        yield Update(
-            {
-                attco.ATTACHMENT_ID    : attachment._attachmentID,
-                attco.MANAGED_ID       : attachment._managedID,
-            },
-            Where=(attco.MANAGED_ID == oldManagedID).And(
-                attco.CALENDAR_OBJECT_RESOURCE_ID == attachment._objectResourceID
-            ),
-        ).on(txn)
-
-        # Now check whether old attachmentID is still referenced - if not delete it
-        rows = (yield Select(
-            [attco.ATTACHMENT_ID, ],
-            From=attco,
-            Where=(attco.ATTACHMENT_ID == oldAttachmentID),
-        ).on(txn))
-        aids = [row[0] for row in rows] if rows is not None else ()
-        if len(aids) == 0:
-            oldattachment = ManagedAttachment(txn, oldAttachmentID, None, None)
-            oldattachment = (yield oldattachment.initFromStore())
-            yield oldattachment.remove()
-
-        returnValue(attachment)
-
-
-    @classmethod
-    @inlineCallbacks
-    def load(cls, txn, referencedID, managedID, attachmentID=None):
-        """
-        Load a ManagedAttachment via either its managedID or attachmentID.
-        """
-
-        if managedID:
-            attco = schema.ATTACHMENT_CALENDAR_OBJECT
-            where = (attco.MANAGED_ID == managedID)
-            if referencedID is not None:
-                where = where.And(attco.CALENDAR_OBJECT_RESOURCE_ID == referencedID)
-            rows = (yield Select(
-                [attco.ATTACHMENT_ID, ],
-                From=attco,
-                Where=where,
-            ).on(txn))
-            if len(rows) == 0:
-                returnValue(None)
-            elif referencedID is not None and len(rows) != 1:
-                raise AttachmentStoreValidManagedID
-            attachmentID = rows[0][0]
-
-        attachment = cls(txn, attachmentID, None, None)
-        attachment = (yield attachment.initFromStore())
-        attachment._managedID = managedID
-        attachment._objectResourceID = referencedID
-        returnValue(attachment)
-
-
-    @classmethod
-    @inlineCallbacks
-    def referencesTo(cls, txn, managedID):
-        """
-        Find all the calendar object resourceIds referenced by this supplied managed-id.
-        """
-        attco = schema.ATTACHMENT_CALENDAR_OBJECT
-        rows = (yield Select(
-            [attco.CALENDAR_OBJECT_RESOURCE_ID, ],
-            From=attco,
-            Where=(attco.MANAGED_ID == managedID),
-        ).on(txn))
-        cobjs = set([row[0] for row in rows]) if rows is not None else set()
-        returnValue(cobjs)
-
-
-    @classmethod
-    @inlineCallbacks
-    def usedManagedID(cls, txn, managedID):
-        """
-        Return the "owner" home and referencing resource is, and UID for a managed-id.
-        """
-        att = schema.ATTACHMENT
-        attco = schema.ATTACHMENT_CALENDAR_OBJECT
-        co = schema.CALENDAR_OBJECT
-        rows = (yield Select(
-            [
-                att.CALENDAR_HOME_RESOURCE_ID,
-                attco.CALENDAR_OBJECT_RESOURCE_ID,
-                co.ICALENDAR_UID,
-            ],
-            From=att.join(
-                attco, att.ATTACHMENT_ID == attco.ATTACHMENT_ID, "left outer"
-            ).join(co, co.RESOURCE_ID == attco.CALENDAR_OBJECT_RESOURCE_ID),
-            Where=(attco.MANAGED_ID == managedID),
-        ).on(txn))
-        returnValue(rows)
-
-
-    @classmethod
-    @inlineCallbacks
-    def resourceRemoved(cls, txn, resourceID):
-        """
-        Remove all attachments referencing the specified resource.
-        """
-
-        # Find all reference attachment-ids and dereference
-        attco = schema.ATTACHMENT_CALENDAR_OBJECT
-        rows = (yield Select(
-            [attco.MANAGED_ID, ],
-            From=attco,
-            Where=(attco.CALENDAR_OBJECT_RESOURCE_ID == resourceID),
-        ).on(txn))
-        mids = set([row[0] for row in rows]) if rows is not None else set()
-        for managedID in mids:
-            attachment = (yield ManagedAttachment.load(txn, resourceID, managedID))
-            (yield attachment.removeFromResource(resourceID))
-
-
-    @classmethod
-    @inlineCallbacks
-    def copyManagedID(cls, txn, managedID, referencedBy):
-        """
-        Associate an existing attachment with the new resource.
-        """
-
-        # Find the associated attachment-id and insert new reference
-        attco = schema.ATTACHMENT_CALENDAR_OBJECT
-        aid = (yield Select(
-            [attco.ATTACHMENT_ID, ],
-            From=attco,
-            Where=(attco.MANAGED_ID == managedID),
-        ).on(txn))[0][0]
-
-        yield Insert({
-            attco.ATTACHMENT_ID               : aid,
-            attco.MANAGED_ID                  : managedID,
-            attco.CALENDAR_OBJECT_RESOURCE_ID : referencedBy,
-        }).on(txn)
-
-
-    def managedID(self):
-        return self._managedID
-
-
-    @inlineCallbacks
-    def objectResource(self):
-        """
-        Return the calendar object resource associated with this attachment.
-        """
-
-        home = (yield self._txn.calendarHomeWithResourceID(self._ownerHomeID))
-        obj = (yield home.objectResourceWithID(self._objectResourceID))
-        returnValue(obj)
-
-
-    @property
-    def _path(self):
-        # Use directory hashing scheme based on MD5 of attachmentID
-        hasheduid = hashlib.md5(str(self._attachmentID)).hexdigest()
-        return self._attachmentPathRoot().child(hasheduid[0:2]).child(hasheduid[2:4]).child(hasheduid)
-
-
-    @inlineCallbacks
-    def location(self):
-        """
-        Return the URI location of the attachment.
-        """
-        if not hasattr(self, "_ownerName"):
-            home = (yield self._txn.calendarHomeWithResourceID(self._ownerHomeID))
-            self._ownerName = home.name()
-        if not hasattr(self, "_objectDropboxID"):
-            if not hasattr(self, "_objectResource"):
-                self._objectResource = (yield self.objectResource())
-            self._objectDropboxID = self._objectResource._dropboxID
-
-        fname = self.lastSegmentOfUriPath(self._managedID, self._name)
-        location = self._txn._store.attachmentsURIPattern % {
-            "home": self._ownerName,
-            "dropbox_id": urllib.quote(self._objectDropboxID),
-            "name": urllib.quote(fname),
-        }
-        returnValue(location)
-
-
-    @classmethod
-    def lastSegmentOfUriPath(cls, managed_id, name):
-        splits = name.rsplit(".", 1)
-        fname = splits[0]
-        suffix = splits[1] if len(splits) == 2 else "unknown"
-        return "{0}-{1}.{2}".format(fname, managed_id[:8], suffix)
-
-
-    @inlineCallbacks
-    def changed(self, contentType, dispositionName, md5, size):
-        """
-        Always update name to current disposition name.
-        """
-
-        self._contentType = contentType
-        self._name = dispositionName
-        self._md5 = md5
-        self._size = size
-        att = schema.ATTACHMENT
-        self._created, self._modified = map(
-            sqltime,
-            (yield Update(
-                {
-                    att.CONTENT_TYPE    : generateContentType(self._contentType),
-                    att.SIZE            : self._size,
-                    att.MD5             : self._md5,
-                    att.MODIFIED        : utcNowSQL,
-                    att.PATH            : self._name,
-                },
-                Where=(att.ATTACHMENT_ID == self._attachmentID),
-                Return=(att.CREATED, att.MODIFIED)).on(self._txn))[0]
-        )
-
-
-    @inlineCallbacks
-    def newReference(self, resourceID):
-        """
-        Create a new reference of this attachment to the supplied calendar object resource id, and
-        return a ManagedAttachment for the new reference.
-
-        @param resourceID: the resource id to reference
-        @type resourceID: C{int}
-
-        @return: the new managed attachment
-        @rtype: L{ManagedAttachment}
-        """
-
-        attco = schema.ATTACHMENT_CALENDAR_OBJECT
-        yield Insert({
-            attco.ATTACHMENT_ID               : self._attachmentID,
-            attco.MANAGED_ID                  : self._managedID,
-            attco.CALENDAR_OBJECT_RESOURCE_ID : resourceID,
-        }).on(self._txn)
-
-        mattach = (yield ManagedAttachment.load(self._txn, resourceID, self._managedID))
-        returnValue(mattach)
-
-
-    @inlineCallbacks
-    def removeFromResource(self, resourceID):
-
-        # Delete the reference
-        attco = schema.ATTACHMENT_CALENDAR_OBJECT
-        yield Delete(
-            From=attco,
-            Where=(attco.ATTACHMENT_ID == self._attachmentID).And(
-                attco.CALENDAR_OBJECT_RESOURCE_ID == resourceID),
-        ).on(self._txn)
-
-        # References still exist - if not remove actual attachment
-        rows = (yield Select(
-            [attco.CALENDAR_OBJECT_RESOURCE_ID, ],
-            From=attco,
-            Where=(attco.ATTACHMENT_ID == self._attachmentID),
-        ).on(self._txn))
-        if len(rows) == 0:
-            yield self.remove()
-
-
-    @inlineCallbacks
-    def attachProperty(self):
-        """
-        Return an iCalendar ATTACH property for this attachment.
-        """
-        attach = Property("ATTACH", "", valuetype=Value.VALUETYPE_URI)
-        location = (yield self.updateProperty(attach))
-        returnValue((attach, location,))
-
-
-    @inlineCallbacks
-    def updateProperty(self, attach):
-        """
-        Update an iCalendar ATTACH property for this attachment.
-        """
-
-        location = (yield self.location())
-
-        attach.setParameter("MANAGED-ID", self.managedID())
-        attach.setParameter("FMTTYPE", "{0}/{1}".format(self.contentType().mediaType, self.contentType().mediaSubtype))
-        attach.setParameter("FILENAME", self.name())
-        attach.setParameter("SIZE", str(self.size()))
-        attach.setValue(location)
-
-        returnValue(location)
-
-
 class TrashCollection(Calendar):
 
-    _childType = "trash"  # FIXME: make childType an enumeration
+    _childType = _CHILD_TYPE_TRASH
 
     def isTrash(self):
         return True
@@ -5917,8 +5149,6 @@
 
 
 
-
-
 # Hook-up class relationships at the end after they have all been defined
 from txdav.caldav.datastore.sql_external import CalendarHomeExternal, CalendarExternal, CalendarObjectExternal
 CalendarHome._externalClass = CalendarHomeExternal
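
Note that the dropbox/managed attachment classes deleted in the large hunk above are not going away; they are relocated into the new txdav/caldav/datastore/sql_attachment.py module copied in below. Call sites would then import them from the new location, roughly as follows (illustrative only; the exact import lists used by callers are not shown in this diff):

    from txdav.caldav.datastore.sql_attachment import (
        AttachmentStorageTransport, AttachmentLink, Attachment,
        DropBoxAttachment, ManagedAttachment,
    )
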

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql_attachment.py (from rev 14551, CalendarServer/trunk/txdav/caldav/datastore/sql_attachment.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql_attachment.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql_attachment.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,1204 @@
+# -*- test-case-name: twext.enterprise.dal.test.test_record -*-
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from pycalendar.value import Value
+
+from twext.enterprise.dal.syntax import Select, Insert, Delete, Parameter, \
+    Update, utcNowSQL
+from twext.enterprise.util import parseSQLTimestamp
+from twext.python.filepath import CachingFilePath
+
+from twisted.internet.defer import inlineCallbacks, returnValue
+
+from twistedcaldav.config import config
+from twistedcaldav.dateops import datetimeMktime
+from twistedcaldav.ical import Property
+
+from txdav.caldav.datastore.util import StorageTransportBase, \
+    AttachmentRetrievalTransport
+from txdav.caldav.icalendarstore import AttachmentSizeTooLarge, QuotaExceeded, \
+    IAttachment, AttachmentDropboxNotAllowed, AttachmentMigrationFailed, \
+    AttachmentStoreValidManagedID
+from txdav.common.datastore.sql_tables import schema
+
+from txweb2.http_headers import MimeType, generateContentType
+
+from zope.interface.declarations import implements
+
+import hashlib
+import itertools
+import os
+import tempfile
+import urllib
+import uuid
+
+"""
+Classes and methods that relate to CalDAV attachments in the SQL store.
+"""
+
+
+class AttachmentStorageTransport(StorageTransportBase):
+
+    _TEMPORARY_UPLOADS_DIRECTORY = "Temporary"
+
+    def __init__(self, attachment, contentType, dispositionName, creating=False, migrating=False):
+        super(AttachmentStorageTransport, self).__init__(
+            attachment, contentType, dispositionName)
+
+        fileDescriptor, fileName = self._temporaryFile()
+        # Wrap the file descriptor in a file object we can write to
+        self._file = os.fdopen(fileDescriptor, "w")
+        self._path = CachingFilePath(fileName)
+        self._hash = hashlib.md5()
+        self._creating = creating
+        self._migrating = migrating
+
+        self._txn.postAbort(self.aborted)
+
+
+    def _temporaryFile(self):
+        """
+        Returns a (file descriptor, absolute path) tuple for a temporary file within
+        the Attachments/Temporary directory (creating the Temporary subdirectory
+        if it doesn't exist).  It is the caller's responsibility to remove the
+        file.
+        """
+        attachmentRoot = self._txn._store.attachmentsPath
+        tempUploadsPath = attachmentRoot.child(self._TEMPORARY_UPLOADS_DIRECTORY)
+        if not tempUploadsPath.exists():
+            tempUploadsPath.createDirectory()
+        return tempfile.mkstemp(dir=tempUploadsPath.path)
+
+
+    @property
+    def _txn(self):
+        return self._attachment._txn
+
+
+    def aborted(self):
+        """
+        Transaction aborted - clean up temp files.
+        """
+        if self._path.exists():
+            self._path.remove()
+
+
+    def write(self, data):
+        if isinstance(data, buffer):
+            data = str(data)
+        self._file.write(data)
+        self._hash.update(data)
+
+
+    @inlineCallbacks
+    def loseConnection(self):
+        """
+        Note that when self._migrating is set we only care about the data and don't need to
+        do any quota checks/adjustments.
+        """
+
+        # FIXME: this should be synchronously accessible; IAttachment should
+        # have a method for getting its parent just as CalendarObject/Calendar
+        # do.
+
+        # FIXME: If this method isn't called, the transaction should be
+        # prevented from committing successfully.  It's not valid to have an
+        # attachment that doesn't point to a real file.
+
+        home = (yield self._txn.calendarHomeWithResourceID(self._attachment._ownerHomeID))
+
+        oldSize = self._attachment.size()
+        newSize = self._file.tell()
+        self._file.close()
+
+        # Check max size for attachment
+        if not self._migrating and newSize > config.MaximumAttachmentSize:
+            self._path.remove()
+            if self._creating:
+                yield self._attachment._internalRemove()
+            raise AttachmentSizeTooLarge()
+
+        # Check overall user quota
+        if not self._migrating:
+            allowed = home.quotaAllowedBytes()
+            if allowed is not None and allowed < ((yield home.quotaUsedBytes())
+                                                  + (newSize - oldSize)):
+                self._path.remove()
+                if self._creating:
+                    yield self._attachment._internalRemove()
+                raise QuotaExceeded()
+
+        self._path.moveTo(self._attachment._path)
+
+        yield self._attachment.changed(
+            self._contentType,
+            self._dispositionName,
+            self._hash.hexdigest(),
+            newSize
+        )
+
+        if not self._migrating and home:
+            # Adjust quota
+            yield home.adjustQuotaUsedBytes(self._attachment.size() - oldSize)
+
+            # Send change notification to home
+            yield home.notifyChanged()
+
+
+
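
Relative to the copy of this class removed from sql.py, AttachmentStorageTransport gains a migrating flag: when set, loseConnection() skips the MaximumAttachmentSize and quota checks and makes no quota adjustment or change notification, since the bytes are simply being copied across during a migration. A hedged usage sketch; the wrapper function name is illustrative only:

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def copyAttachmentData(attachment, contentType, name, data):
        # store migrated attachment bytes without enforcing size or quota limits
        transport = attachment.store(contentType, name, migrating=True)
        transport.write(data)
        yield transport.loseConnection()
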
+class AttachmentLink(object):
+    """
+    A binding between an L{Attachment} and a L{CalendarObject}.
+    """
+
+    _attachmentSchema = schema.ATTACHMENT
+    _attachmentLinkSchema = schema.ATTACHMENT_CALENDAR_OBJECT
+
+    @classmethod
+    def makeClass(cls, txn, linkData):
+        """
+        Given the various database rows, build the actual class.
+
+        @param linkData: the standard set of link columns
+        @type linkData: C{list}
+
+        @return: the constructed link object
+        @rtype: L{AttachmentLink}
+        """
+
+        child = cls(txn)
+        for attr, value in zip(child._rowAttributes(), linkData):
+            setattr(child, attr, value)
+        return child
+
+
+    @classmethod
+    def _allColumns(cls):
+        """
+        Full set of columns in the object table that need to be loaded to
+        initialize the object resource state.
+        """
+        aco = cls._attachmentLinkSchema
+        return [
+            aco.ATTACHMENT_ID,
+            aco.MANAGED_ID,
+            aco.CALENDAR_OBJECT_RESOURCE_ID,
+        ]
+
+
+    @classmethod
+    def _rowAttributes(cls):
+        """
+        Object attributes used to store the column values from L{_allColumns}. This is used to create
+        a mapping when serializing the object for cross-pod requests.
+        """
+        return (
+            "_attachmentID",
+            "_managedID",
+            "_calendarObjectID",
+        )
+
+
+    @classmethod
+    @inlineCallbacks
+    def linksForHome(cls, home):
+        """
+        Load all attachment<->calendar object mappings for the specified home collection.
+        """
+
+        # Load from the main table first
+        att = cls._attachmentSchema
+        attco = cls._attachmentLinkSchema
+        dataRows = yield Select(
+            cls._allColumns(),
+            From=attco.join(att, on=(attco.ATTACHMENT_ID == att.ATTACHMENT_ID)),
+            Where=att.CALENDAR_HOME_RESOURCE_ID == home.id(),
+        ).on(home._txn)
+
+        # Create the actual objects
+        returnValue([cls.makeClass(home._txn, row) for row in dataRows])
+
+
+    def __init__(self, txn):
+        self._txn = txn
+        for attr in self._rowAttributes():
+            setattr(self, attr, None)
+
+
+    def serialize(self):
+        """
+        Create a dictionary mapping key attributes so this object can be sent over a cross-pod call
+        and reconstituted at the other end. Note that the other end may have a different schema so
+        the attributes may not match exactly and will need to be processed accordingly.
+        """
+        return dict([(attr[1:], getattr(self, attr, None)) for attr in self._rowAttributes()])
+
+
+    @classmethod
+    def deserialize(cls, txn, mapping):
+        """
+        Given a mapping generated by L{serialize}, convert the values into an array of database
+        like items that conforms to the ordering of L{_allColumns} so it can be fed into L{makeClass}.
+        Note that there may be a schema mismatch with the external data, so treat missing items as
+        C{None} and ignore extra items.
+        """
+
+        return cls.makeClass(txn, [mapping.get(row[1:]) for row in cls._rowAttributes()])
+
+
+    def insert(self):
+        """
+        Insert the object.
+        """
+
+        row = dict([(column, getattr(self, attr)) for column, attr in itertools.izip(self._allColumns(), self._rowAttributes())])
+        return Insert(row).on(self._txn)
+
+
+
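
AttachmentLink's serialize()/deserialize() pair exists so the attachment<->calendar object mapping can be shipped over a cross-pod call as a plain dictionary and rebuilt on the other side. Based on the column and attribute lists above, the round trip looks roughly like this (the values and the txn variable are made up for illustration):

    # build a link from raw column values:
    # (ATTACHMENT_ID, MANAGED_ID, CALENDAR_OBJECT_RESOURCE_ID)
    link = AttachmentLink.makeClass(txn, [10, "6ca5d2b1-example", 42])

    mapping = link.serialize()
    # {"attachmentID": 10, "managedID": "6ca5d2b1-example", "calendarObjectID": 42}

    # reconstitute on the other pod; keys missing from the mapping come back as None
    clone = AttachmentLink.deserialize(txn, mapping)
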
+class Attachment(object):
+
+    implements(IAttachment)
+
+    _attachmentSchema = schema.ATTACHMENT
+    _attachmentLinkSchema = schema.ATTACHMENT_CALENDAR_OBJECT
+
+    @classmethod
+    def makeClass(cls, txn, attachmentData):
+        """
+        Given the various database rows, build the actual class.
+
+        @param attachmentData: the standard set of attachment columns
+        @type attachmentData: C{list}
+
+        @return: the constructed child class
+        @rtype: L{Attachment}
+        """
+
+        att = cls._attachmentSchema
+        dropbox_id = attachmentData[cls._allColumns().index(att.DROPBOX_ID)]
+        c = ManagedAttachment if dropbox_id == "." else DropBoxAttachment
+        child = c(
+            txn,
+            attachmentData[cls._allColumns().index(att.ATTACHMENT_ID)],
+            attachmentData[cls._allColumns().index(att.DROPBOX_ID)],
+            attachmentData[cls._allColumns().index(att.PATH)],
+        )
+
+        for attr, value in zip(child._rowAttributes(), attachmentData):
+            setattr(child, attr, value)
+        child._created = parseSQLTimestamp(child._created)
+        child._modified = parseSQLTimestamp(child._modified)
+        child._contentType = MimeType.fromString(child._contentType)
+
+        return child
+
+
+    @classmethod
+    def _allColumns(cls):
+        """
+        Full set of columns in the object table that need to be loaded to
+        initialize the object resource state.
+        """
+        att = cls._attachmentSchema
+        return [
+            att.ATTACHMENT_ID,
+            att.DROPBOX_ID,
+            att.CALENDAR_HOME_RESOURCE_ID,
+            att.CONTENT_TYPE,
+            att.SIZE,
+            att.MD5,
+            att.CREATED,
+            att.MODIFIED,
+            att.PATH,
+        ]
+
+
+    @classmethod
+    def _rowAttributes(cls):
+        """
+        Object attributes used to store the column values from L{_allColumns}. This is used to create
+        a mapping when serializing the object for cross-pod requests.
+        """
+        return (
+            "_attachmentID",
+            "_dropboxID",
+            "_ownerHomeID",
+            "_contentType",
+            "_size",
+            "_md5",
+            "_created",
+            "_modified",
+            "_name",
+        )
+
+
+    @classmethod
+    @inlineCallbacks
+    def loadAllAttachments(cls, home):
+        """
+        Load all attachments assigned to the specified home collection. This should only be
+        used when sync'ing an entire home's set of attachments.
+        """
+
+        results = []
+
+        # Load from the main table first
+        att = cls._attachmentSchema
+        dataRows = yield Select(
+            cls._allColumns(),
+            From=att,
+            Where=att.CALENDAR_HOME_RESOURCE_ID == home.id(),
+        ).on(home._txn)
+
+        # Create the actual objects
+        for row in dataRows:
+            child = cls.makeClass(home._txn, row)
+            results.append(child)
+
+        returnValue(results)
+
+
+    @classmethod
+    @inlineCallbacks
+    def loadAttachmentByID(cls, home, id):
+        """
+        Load one attachment, identified by its attachment id, from the specified home
+        collection. This should only be used when sync'ing a home's set of attachments.
+        """
+
+        # Load from the main table first
+        att = cls._attachmentSchema
+        rows = yield Select(
+            cls._allColumns(),
+            From=att,
+            Where=(att.CALENDAR_HOME_RESOURCE_ID == home.id()).And(
+                att.ATTACHMENT_ID == id),
+        ).on(home._txn)
+
+        # Create the actual object
+        returnValue(cls.makeClass(home._txn, rows[0]) if len(rows) == 1 else None)
+
+
+    def serialize(self):
+        """
+        Create a dictionary mapping key attributes so this object can be sent over a cross-pod call
+        and reconstituted at the other end. Note that the other end may have a different schema so
+        the attributes may not match exactly and will need to be processed accordingly.
+        """
+        result = dict([(attr[1:], getattr(self, attr, None)) for attr in self._rowAttributes()])
+        result["created"] = result["created"].isoformat(" ")
+        result["modified"] = result["modified"].isoformat(" ")
+        result["contentType"] = generateContentType(result["contentType"])
+        return result
+
+
+    @classmethod
+    def deserialize(cls, txn, mapping):
+        """
+        Given a mapping generated by L{serialize}, convert the values into an array of database
+        like items that conforms to the ordering of L{_allColumns} so it can be fed into L{makeClass}.
+        Note that there may be a schema mismatch with the external data, so treat missing items as
+        C{None} and ignore extra items.
+        """
+
+        return cls.makeClass(txn, [mapping.get(row[1:]) for row in cls._rowAttributes()])
+
+
+    def __init__(self, txn, a_id, dropboxID, name, ownerHomeID=None, justCreated=False):
+        self._txn = txn
+        self._attachmentID = a_id
+        self._ownerHomeID = ownerHomeID
+        self._dropboxID = dropboxID
+        self._contentType = None
+        self._size = 0
+        self._md5 = None
+        self._created = None
+        self._modified = None
+        self._name = name
+        self._justCreated = justCreated
+
+
+    def __repr__(self):
+        return (
+            "<{self.__class__.__name__}: {self._attachmentID}>"
+            .format(self=self)
+        )
+
+
+    def _attachmentPathRoot(self):
+        return self._txn._store.attachmentsPath
+
+
+    @inlineCallbacks
+    def initFromStore(self):
+        """
+        Execute necessary SQL queries to retrieve attributes.
+
+        @return: this L{Attachment} if it exists, C{None} otherwise.
+        """
+        att = self._attachmentSchema
+        if self._dropboxID and self._dropboxID != ".":
+            where = (att.DROPBOX_ID == self._dropboxID).And(
+                att.PATH == self._name)
+        else:
+            where = (att.ATTACHMENT_ID == self._attachmentID)
+        rows = (yield Select(
+            self._allColumns(),
+            From=att,
+            Where=where
+        ).on(self._txn))
+
+        if not rows:
+            returnValue(None)
+
+        for attr, value in zip(self._rowAttributes(), rows[0]):
+            setattr(self, attr, value)
+        self._created = parseSQLTimestamp(self._created)
+        self._modified = parseSQLTimestamp(self._modified)
+        self._contentType = MimeType.fromString(self._contentType)
+
+        returnValue(self)
+
+
+    def copyRemote(self, remote):
+        """
+        Copy properties from a remote (external) attachment that is being migrated.
+
+        @param remote: the external attachment
+        @type remote: L{Attachment}
+        """
+        return self.changed(remote.contentType(), remote.name(), remote.md5(), remote.size())
+
+
+    def id(self):
+        return self._attachmentID
+
+
+    def dropboxID(self):
+        return self._dropboxID
+
+
+    def isManaged(self):
+        return self._dropboxID == "."
+
+
+    def name(self):
+        return self._name
+
+
+    def properties(self):
+        pass  # stub
+
+
+    def store(self, contentType, dispositionName=None, migrating=False):
+        if not self._name:
+            self._name = dispositionName
+        return AttachmentStorageTransport(self, contentType, dispositionName, self._justCreated, migrating=migrating)
+
+
+    def retrieve(self, protocol):
+        return AttachmentRetrievalTransport(self._path).start(protocol)
+
+
+    def changed(self, contentType, dispositionName, md5, size):
+        raise NotImplementedError
+
+    _removeStatement = Delete(
+        From=schema.ATTACHMENT,
+        Where=(schema.ATTACHMENT.ATTACHMENT_ID == Parameter("attachmentID"))
+    )
+
+
+    @inlineCallbacks
+    def remove(self, adjustQuota=True):
+        oldSize = self._size
+        self._txn.postCommit(self.removePaths)
+        yield self._internalRemove()
+
+        # Adjust quota
+        if adjustQuota:
+            home = (yield self._txn.calendarHomeWithResourceID(self._ownerHomeID))
+            if home:
+                yield home.adjustQuotaUsedBytes(-oldSize)
+
+                # Send change notification to home
+                yield home.notifyChanged()
+
+
+    def removePaths(self):
+        """
+        Remove the actual file, then remove any empty parent directories up to the attachment root.
+        """
+        self._path.remove()
+        self.removeParentPaths()
+
+
+    def removeParentPaths(self):
+        """
+        Remove empty parent directories, walking up until the attachment root is reached.
+        """
+        parent = self._path.parent()
+        toppath = self._attachmentPathRoot().path
+        while parent.path != toppath:
+            if len(parent.listdir()) == 0:
+                parent.remove()
+                parent = parent.parent()
+            else:
+                break
+
+
+    def _internalRemove(self):
+        """
+        Just delete the row; don't do any accounting / bookkeeping.  (This is
+        for attachments that have failed to be created due to errors during
+        storage.)
+        """
+        return self._removeStatement.on(self._txn, attachmentID=self._attachmentID)
+
+
+    @classmethod
+    @inlineCallbacks
+    def removedHome(cls, txn, homeID):
+        """
+        A calendar home is being removed so all of its attachments must go too. When removing,
+        we don't care about quota adjustment as there will be no quota once the home is removed.
+
+        TODO: this needs to be transactional wrt the actual file deletes.
+        """
+        att = cls._attachmentSchema
+        attco = cls._attachmentLinkSchema
+
+        rows = (yield Select(
+            [att.ATTACHMENT_ID, att.DROPBOX_ID, ],
+            From=att,
+            Where=(
+                att.CALENDAR_HOME_RESOURCE_ID == homeID
+            ),
+        ).on(txn))
+
+        for attachmentID, dropboxID in rows:
+            if dropboxID != ".":
+                attachment = DropBoxAttachment(txn, attachmentID, None, None)
+            else:
+                attachment = ManagedAttachment(txn, attachmentID, None, None)
+            attachment = (yield attachment.initFromStore())
+            if attachment._path.exists():
+                attachment.removePaths()
+
+        yield Delete(
+            From=attco,
+            Where=(
+                attco.ATTACHMENT_ID.In(Select(
+                    [att.ATTACHMENT_ID, ],
+                    From=att,
+                    Where=(
+                        att.CALENDAR_HOME_RESOURCE_ID == homeID
+                    ),
+                ))
+            ),
+        ).on(txn)
+
+        yield Delete(
+            From=att,
+            Where=(
+                att.CALENDAR_HOME_RESOURCE_ID == homeID
+            ),
+        ).on(txn)
+
+
+    # IDataStoreObject
+    def contentType(self):
+        return self._contentType
+
+
+    def md5(self):
+        return self._md5
+
+
+    def size(self):
+        return self._size
+
+
+    def created(self):
+        return datetimeMktime(self._created)
+
+
+    def modified(self):
+        return datetimeMktime(self._modified)
+
+
+
+class DropBoxAttachment(Attachment):
+
+    @classmethod
+    @inlineCallbacks
+    def create(cls, txn, dropboxID, name, ownerHomeID):
+        """
+        Create a new Attachment object.
+
+        @param txn: The transaction to use
+        @type txn: L{CommonStoreTransaction}
+        @param dropboxID: the identifier for the attachment (dropbox id or managed id)
+        @type dropboxID: C{str}
+        @param name: the name of the attachment
+        @type name: C{str}
+        @param ownerHomeID: the resource-id of the home collection of the attachment owner
+        @type ownerHomeID: C{int}
+        """
+
+        # If the store has already migrated to managed attachments, prevent creation of new dropbox attachments
+        dropbox = (yield txn.store().dropboxAllowed(txn))
+        if not dropbox:
+            raise AttachmentDropboxNotAllowed
+
+        # Now create the DB entry
+        att = cls._attachmentSchema
+        rows = (yield Insert({
+            att.CALENDAR_HOME_RESOURCE_ID : ownerHomeID,
+            att.DROPBOX_ID                : dropboxID,
+            att.CONTENT_TYPE              : "",
+            att.SIZE                      : 0,
+            att.MD5                       : "",
+            att.PATH                      : name,
+        }, Return=(att.ATTACHMENT_ID, att.CREATED, att.MODIFIED)).on(txn))
+
+        row_iter = iter(rows[0])
+        a_id = row_iter.next()
+        created = parseSQLTimestamp(row_iter.next())
+        modified = parseSQLTimestamp(row_iter.next())
+
+        attachment = cls(txn, a_id, dropboxID, name, ownerHomeID, True)
+        attachment._created = created
+        attachment._modified = modified
+
+        # File system paths need to exist
+        try:
+            attachment._path.parent().makedirs()
+        except:
+            pass
+
+        returnValue(attachment)
+
+
+    @classmethod
+    @inlineCallbacks
+    def load(cls, txn, dropboxID, name):
+        attachment = cls(txn, None, dropboxID, name)
+        attachment = (yield attachment.initFromStore())
+        returnValue(attachment)
+
+
+    @property
+    def _path(self):
+        # Use directory hashing scheme based on MD5 of dropboxID
+        hasheduid = hashlib.md5(self._dropboxID).hexdigest()
+        attachmentRoot = self._attachmentPathRoot().child(hasheduid[0:2]).child(hasheduid[2:4]).child(hasheduid)
+        return attachmentRoot.child(self.name())
+
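To make the directory hashing scheme above concrete, a small sketch (the dropbox id and file name are hypothetical):

    import hashlib

    dropboxID = "ABCD-1234"
    name = "photo.jpg"
    hasheduid = hashlib.md5(dropboxID).hexdigest()
    # Layout under the attachments root:
    #   <attachmentsPath>/<h[0:2]>/<h[2:4]>/<h>/<name>
    relative = "/".join([hasheduid[0:2], hasheduid[2:4], hasheduid, name])
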
+
+    @classmethod
+    @inlineCallbacks
+    def resourceRemoved(cls, txn, resourceID, dropboxID):
+        """
+        Remove all attachments referencing the specified resource.
+        """
+
+        # See if any other resources still reference this dropbox ID
+        co = schema.CALENDAR_OBJECT
+        rows = (yield Select(
+            [co.RESOURCE_ID, ],
+            From=co,
+            Where=(co.DROPBOX_ID == dropboxID).And(
+                co.RESOURCE_ID != resourceID)
+        ).on(txn))
+
+        if not rows:
+            # Find each attachment with matching dropbox ID
+            att = cls._attachmentSchema
+            rows = (yield Select(
+                [att.PATH],
+                From=att,
+                Where=(att.DROPBOX_ID == dropboxID)
+            ).on(txn))
+            for name in rows:
+                name = name[0]
+                attachment = yield cls.load(txn, dropboxID, name)
+                yield attachment.remove()
+
+
+    @inlineCallbacks
+    def changed(self, contentType, dispositionName, md5, size):
+        """
+        Dropbox attachments never change their path - ignore dispositionName.
+        """
+
+        self._contentType = contentType
+        self._md5 = md5
+        self._size = size
+
+        att = self._attachmentSchema
+        self._created, self._modified = map(
+            parseSQLTimestamp,
+            (yield Update(
+                {
+                    att.CONTENT_TYPE    : generateContentType(self._contentType),
+                    att.SIZE            : self._size,
+                    att.MD5             : self._md5,
+                    att.MODIFIED        : utcNowSQL,
+                },
+                Where=(att.ATTACHMENT_ID == self._attachmentID),
+                Return=(att.CREATED, att.MODIFIED)).on(self._txn))[0]
+        )
+
+
+    @inlineCallbacks
+    def convertToManaged(self):
+        """
+        Convert this dropbox attachment into a managed attachment by updating the
+        database and returning a new ManagedAttachment object that does not reference
+        any calendar object. Referencing will be added later.
+
+        @return: the managed attachment object
+        @rtype: L{ManagedAttachment}
+        """
+
+        # Change the DROPBOX_ID to a single "." to indicate a managed attachment.
+        att = self._attachmentSchema
+        (yield Update(
+            {att.DROPBOX_ID    : ".", },
+            Where=(att.ATTACHMENT_ID == self._attachmentID),
+        ).on(self._txn))
+
+        # Create an "orphaned" ManagedAttachment that points to the updated data but without
+        # an actual managed-id (which only exists when there is a reference to a calendar object).
+        mattach = (yield ManagedAttachment.load(self._txn, None, None, attachmentID=self._attachmentID))
+        if mattach is None:
+            raise AttachmentMigrationFailed
+        mattach._managedID = str(uuid.uuid4())
+
+        # Then move the file on disk from the old path to the new one
+        try:
+            mattach._path.parent().makedirs()
+        except Exception:
+            # OK to fail if it already exists, otherwise must raise
+            if not mattach._path.parent().exists():
+                raise
+        oldpath = self._path
+        newpath = mattach._path
+        oldpath.moveTo(newpath)
+        self.removeParentPaths()
+
+        returnValue(mattach)
+
+
+
+class ManagedAttachment(Attachment):
+    """
+    Managed attachments are ones that the server is in total control of. Clients do POSTs on calendar objects
+    to store the attachment data and have ATTACH properties added, updated or removed from the calendar objects.
+    Each ATTACH property in a calendar object has a MANAGED-ID iCalendar parameter that is used in the POST requests
+    to target a specific attachment. The MANAGED-ID values are unique to each calendar object resource, though
+    multiple calendar object resources can point to the same underlying attachment as there is a separate database
+    table that maps calendar objects/managed-ids to actual attachments.
+    """
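A rough usage sketch of the relationship described above, using methods defined later in this class (the home and calendar object resource ids are illustrative; C{uuid} and C{inlineCallbacks} are already imported by this module):

    @inlineCallbacks
    def _example(txn, homeID, firstObjectID, secondObjectID):
        # One underlying attachment row, referenced by the first calendar
        # object through the attachment<->calendar object link table.
        attachment = yield ManagedAttachment.create(
            txn, str(uuid.uuid4()), homeID, firstObjectID
        )
        # A second calendar object can reference the same attachment; only
        # the link table grows, the attachment data itself is shared.
        yield attachment.newReference(secondObjectID)
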
+
+    @classmethod
+    @inlineCallbacks
+    def _create(cls, txn, managedID, ownerHomeID):
+        """
+        Create a new managed Attachment object.
+
+        @param txn: The transaction to use
+        @type txn: L{CommonStoreTransaction}
+        @param managedID: the identifier for the attachment
+        @type managedID: C{str}
+        @param ownerHomeID: the resource-id of the home collection of the attachment owner
+        @type ownerHomeID: C{int}
+        """
+
+        # Now create the DB entry
+        att = cls._attachmentSchema
+        rows = (yield Insert({
+            att.CALENDAR_HOME_RESOURCE_ID : ownerHomeID,
+            att.DROPBOX_ID                : ".",
+            att.CONTENT_TYPE              : "",
+            att.SIZE                      : 0,
+            att.MD5                       : "",
+            att.PATH                      : "",
+        }, Return=(att.ATTACHMENT_ID, att.CREATED, att.MODIFIED)).on(txn))
+
+        row_iter = iter(rows[0])
+        a_id = row_iter.next()
+        created = parseSQLTimestamp(row_iter.next())
+        modified = parseSQLTimestamp(row_iter.next())
+
+        attachment = cls(txn, a_id, ".", None, ownerHomeID, True)
+        attachment._managedID = managedID
+        attachment._created = created
+        attachment._modified = modified
+
+        # File system paths need to exist
+        try:
+            attachment._path.parent().makedirs()
+        except:
+            pass
+
+        returnValue(attachment)
+
+
+    @classmethod
+    @inlineCallbacks
+    def create(cls, txn, managedID, ownerHomeID, referencedBy):
+        """
+        Create a new Attachment object and reference it.
+
+        @param txn: The transaction to use
+        @type txn: L{CommonStoreTransaction}
+        @param managedID: the identifier for the attachment
+        @type managedID: C{str}
+        @param ownerHomeID: the resource-id of the home collection of the attachment owner
+        @type ownerHomeID: C{int}
+        @param referencedBy: the resource-id of the calendar object referencing the attachment
+        @type referencedBy: C{int}
+        """
+
+        # Now create the DB entry
+        attachment = (yield cls._create(txn, managedID, ownerHomeID))
+        attachment._objectResourceID = referencedBy
+
+        # Create the attachment<->calendar object relationship for managed attachments
+        attco = cls._attachmentLinkSchema
+        yield Insert({
+            attco.ATTACHMENT_ID               : attachment._attachmentID,
+            attco.MANAGED_ID                  : attachment._managedID,
+            attco.CALENDAR_OBJECT_RESOURCE_ID : attachment._objectResourceID,
+        }).on(txn)
+
+        returnValue(attachment)
+
+
+    @classmethod
+    @inlineCallbacks
+    def update(cls, txn, oldManagedID, ownerHomeID, referencedBy, oldAttachmentID):
+        """
+        Update an Attachment object. This creates a new one and adjusts the reference to the old
+        one to point to the new one. If the old one is no longer referenced at all, it is deleted.
+
+        @param txn: The transaction to use
+        @type txn: L{CommonStoreTransaction}
+        @param oldManagedID: the identifier for the original attachment
+        @type oldManagedID: C{str}
+        @param ownerHomeID: the resource-id of the home collection of the attachment owner
+        @type ownerHomeID: C{int}
+        @param referencedBy: the resource-id of the calendar object referencing the attachment
+        @type referencedBy: C{int}
+        @param oldAttachmentID: the attachment-id of the existing attachment being updated
+        @type oldAttachmentID: C{int}
+        """
+
+        # Now create the DB entry with a new managed-ID
+        managed_id = str(uuid.uuid4())
+        attachment = (yield cls._create(txn, managed_id, ownerHomeID))
+        attachment._objectResourceID = referencedBy
+
+        # Update the attachment<->calendar object relationship for managed attachments
+        attco = cls._attachmentLinkSchema
+        yield Update(
+            {
+                attco.ATTACHMENT_ID    : attachment._attachmentID,
+                attco.MANAGED_ID       : attachment._managedID,
+            },
+            Where=(attco.MANAGED_ID == oldManagedID).And(
+                attco.CALENDAR_OBJECT_RESOURCE_ID == attachment._objectResourceID
+            ),
+        ).on(txn)
+
+        # Now check whether old attachmentID is still referenced - if not delete it
+        rows = (yield Select(
+            [attco.ATTACHMENT_ID, ],
+            From=attco,
+            Where=(attco.ATTACHMENT_ID == oldAttachmentID),
+        ).on(txn))
+        aids = [row[0] for row in rows] if rows is not None else ()
+        if len(aids) == 0:
+            oldattachment = ManagedAttachment(txn, oldAttachmentID, None, None)
+            oldattachment = (yield oldattachment.initFromStore())
+            yield oldattachment.remove()
+
+        returnValue(attachment)
+
+
+    @classmethod
+    @inlineCallbacks
+    def load(cls, txn, referencedID, managedID, attachmentID=None):
+        """
+        Load a ManagedAttachment via either its managedID or attachmentID.
+        """
+
+        if managedID:
+            attco = cls._attachmentLinkSchema
+            where = (attco.MANAGED_ID == managedID)
+            if referencedID is not None:
+                where = where.And(attco.CALENDAR_OBJECT_RESOURCE_ID == referencedID)
+            rows = (yield Select(
+                [attco.ATTACHMENT_ID, ],
+                From=attco,
+                Where=where,
+            ).on(txn))
+            if len(rows) == 0:
+                returnValue(None)
+            elif referencedID is not None and len(rows) != 1:
+                raise AttachmentStoreValidManagedID
+            attachmentID = rows[0][0]
+
+        attachment = cls(txn, attachmentID, None, None)
+        attachment = (yield attachment.initFromStore())
+        attachment._managedID = managedID
+        attachment._objectResourceID = referencedID
+        returnValue(attachment)
+
+
+    @classmethod
+    @inlineCallbacks
+    def referencesTo(cls, txn, managedID):
+        """
+        Find all the calendar object resource ids referenced by the supplied managed-id.
+        """
+        attco = cls._attachmentLinkSchema
+        rows = (yield Select(
+            [attco.CALENDAR_OBJECT_RESOURCE_ID, ],
+            From=attco,
+            Where=(attco.MANAGED_ID == managedID),
+        ).on(txn))
+        cobjs = set([row[0] for row in rows]) if rows is not None else set()
+        returnValue(cobjs)
+
+
+    @classmethod
+    @inlineCallbacks
+    def usedManagedID(cls, txn, managedID):
+        """
+        Return the "owner" home id, the referencing calendar object resource id, and the iCalendar UID for a managed-id.
+        """
+        att = cls._attachmentSchema
+        attco = cls._attachmentLinkSchema
+        co = schema.CALENDAR_OBJECT
+        rows = (yield Select(
+            [
+                att.CALENDAR_HOME_RESOURCE_ID,
+                attco.CALENDAR_OBJECT_RESOURCE_ID,
+                co.ICALENDAR_UID,
+            ],
+            From=att.join(
+                attco, att.ATTACHMENT_ID == attco.ATTACHMENT_ID, "left outer"
+            ).join(co, co.RESOURCE_ID == attco.CALENDAR_OBJECT_RESOURCE_ID),
+            Where=(attco.MANAGED_ID == managedID),
+        ).on(txn))
+        returnValue(rows)
+
+
+    @classmethod
+    @inlineCallbacks
+    def resourceRemoved(cls, txn, resourceID):
+        """
+        Remove all attachments referencing the specified resource.
+        """
+
+        # Find all reference attachment-ids and dereference
+        attco = cls._attachmentLinkSchema
+        rows = (yield Select(
+            [attco.MANAGED_ID, ],
+            From=attco,
+            Where=(attco.CALENDAR_OBJECT_RESOURCE_ID == resourceID),
+        ).on(txn))
+        mids = set([row[0] for row in rows]) if rows is not None else set()
+        for managedID in mids:
+            attachment = (yield ManagedAttachment.load(txn, resourceID, managedID))
+            (yield attachment.removeFromResource(resourceID))
+
+
+    @classmethod
+    @inlineCallbacks
+    def copyManagedID(cls, txn, managedID, referencedBy):
+        """
+        Associate an existing attachment with the new resource.
+        """
+
+        # Find the associated attachment-id and insert new reference
+        attco = cls._attachmentLinkSchema
+        aid = (yield Select(
+            [attco.ATTACHMENT_ID, ],
+            From=attco,
+            Where=(attco.MANAGED_ID == managedID),
+        ).on(txn))[0][0]
+
+        yield Insert({
+            attco.ATTACHMENT_ID               : aid,
+            attco.MANAGED_ID                  : managedID,
+            attco.CALENDAR_OBJECT_RESOURCE_ID : referencedBy,
+        }).on(txn)
+
+
+    def managedID(self):
+        return self._managedID
+
+
+    @inlineCallbacks
+    def objectResource(self):
+        """
+        Return the calendar object resource associated with this attachment.
+        """
+
+        home = (yield self._txn.calendarHomeWithResourceID(self._ownerHomeID))
+        obj = (yield home.objectResourceWithID(self._objectResourceID))
+        returnValue(obj)
+
+
+    @property
+    def _path(self):
+        # Use directory hashing scheme based on MD5 of attachmentID
+        hasheduid = hashlib.md5(str(self._attachmentID)).hexdigest()
+        return self._attachmentPathRoot().child(hasheduid[0:2]).child(hasheduid[2:4]).child(hasheduid)
+
+
+    @inlineCallbacks
+    def location(self):
+        """
+        Return the URI location of the attachment.
+        """
+        if not hasattr(self, "_ownerName"):
+            home = (yield self._txn.calendarHomeWithResourceID(self._ownerHomeID))
+            self._ownerName = home.name()
+        if not hasattr(self, "_objectDropboxID"):
+            if not hasattr(self, "_objectResource"):
+                self._objectResource = (yield self.objectResource())
+            self._objectDropboxID = self._objectResource._dropboxID
+
+        fname = self.lastSegmentOfUriPath(self._managedID, self._name)
+        location = self._txn._store.attachmentsURIPattern % {
+            "home": self._ownerName,
+            "dropbox_id": urllib.quote(self._objectDropboxID),
+            "name": urllib.quote(fname),
+        }
+        returnValue(location)
+
+
+    @classmethod
+    def lastSegmentOfUriPath(cls, managed_id, name):
+        splits = name.rsplit(".", 1)
+        fname = splits[0]
+        suffix = splits[1] if len(splits) == 2 else "unknown"
+        return "{0}-{1}.{2}".format(fname, managed_id[:8], suffix)
+
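For example (the managed-id value is made up), the disposition name is combined with the first eight characters of the managed-id:

    ManagedAttachment.lastSegmentOfUriPath("1a2b3c4d5e6f", "minutes.txt")
    # -> "minutes-1a2b3c4d.txt"
    ManagedAttachment.lastSegmentOfUriPath("1a2b3c4d5e6f", "noextension")
    # -> "noextension-1a2b3c4d.unknown"
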
+
+    @inlineCallbacks
+    def changed(self, contentType, dispositionName, md5, size):
+        """
+        Always update name to current disposition name.
+        """
+
+        self._contentType = contentType
+        self._name = dispositionName
+        self._md5 = md5
+        self._size = size
+        att = self._attachmentSchema
+        self._created, self._modified = map(
+            parseSQLTimestamp,
+            (yield Update(
+                {
+                    att.CONTENT_TYPE    : generateContentType(self._contentType),
+                    att.SIZE            : self._size,
+                    att.MD5             : self._md5,
+                    att.MODIFIED        : utcNowSQL,
+                    att.PATH            : self._name,
+                },
+                Where=(att.ATTACHMENT_ID == self._attachmentID),
+                Return=(att.CREATED, att.MODIFIED)).on(self._txn))[0]
+        )
+
+
+    @inlineCallbacks
+    def newReference(self, resourceID):
+        """
+        Create a new reference of this attachment to the supplied calendar object resource id, and
+        return a ManagedAttachment for the new reference.
+
+        @param resourceID: the resource id to reference
+        @type resourceID: C{int}
+
+        @return: the new managed attachment
+        @rtype: L{ManagedAttachment}
+        """
+
+        attco = self._attachmentLinkSchema
+        yield Insert({
+            attco.ATTACHMENT_ID               : self._attachmentID,
+            attco.MANAGED_ID                  : self._managedID,
+            attco.CALENDAR_OBJECT_RESOURCE_ID : resourceID,
+        }).on(self._txn)
+
+        mattach = (yield ManagedAttachment.load(self._txn, resourceID, self._managedID))
+        returnValue(mattach)
+
+
+    @inlineCallbacks
+    def removeFromResource(self, resourceID):
+
+        # Delete the reference
+        attco = self._attachmentLinkSchema
+        yield Delete(
+            From=attco,
+            Where=(attco.ATTACHMENT_ID == self._attachmentID).And(
+                attco.CALENDAR_OBJECT_RESOURCE_ID == resourceID),
+        ).on(self._txn)
+
+        # Check whether any references still exist; if not, remove the actual attachment
+        rows = (yield Select(
+            [attco.CALENDAR_OBJECT_RESOURCE_ID, ],
+            From=attco,
+            Where=(attco.ATTACHMENT_ID == self._attachmentID),
+        ).on(self._txn))
+        if len(rows) == 0:
+            yield self.remove()
+
+
+    @inlineCallbacks
+    def attachProperty(self):
+        """
+        Return an iCalendar ATTACH property for this attachment.
+        """
+        attach = Property("ATTACH", "", valuetype=Value.VALUETYPE_URI)
+        location = (yield self.updateProperty(attach))
+        returnValue((attach, location,))
+
+
+    @inlineCallbacks
+    def updateProperty(self, attach):
+        """
+        Update an iCalendar ATTACH property for this attachment.
+        """
+
+        location = (yield self.location())
+
+        attach.setParameter("MANAGED-ID", self.managedID())
+        attach.setParameter("FMTTYPE", "{0}/{1}".format(self.contentType().mediaType, self.contentType().mediaSubtype))
+        attach.setParameter("FILENAME", self.name())
+        attach.setParameter("SIZE", str(self.size()))
+        attach.setValue(location)
+
+        returnValue(location)
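A short sketch of the two methods above in use; the serialized property would look roughly like the comment shows (managed-id, content type, size and location are illustrative values, not output of this changeset):

    @inlineCallbacks
    def _example(attachment):
        attach, location = yield attachment.attachProperty()
        # ``attach`` now carries MANAGED-ID, FMTTYPE, FILENAME and SIZE
        # parameters and has ``location`` as its value, e.g.:
        #   ATTACH;MANAGED-ID=1a2b3c4d5e6f;FMTTYPE=image/jpeg;
        #    FILENAME=photo.jpg;SIZE=102400:<location>
        returnValue(attach)
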

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql_directory.py (from rev 14551, CalendarServer/trunk/txdav/caldav/datastore/sql_directory.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql_directory.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql_directory.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,73 @@
+# -*- test-case-name: twext.enterprise.dal.test.test_record -*-
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from twext.enterprise.dal.record import SerializableRecord, fromTable
+from twext.enterprise.dal.syntax import Select, Parameter
+from twisted.internet.defer import inlineCallbacks, returnValue
+from txdav.common.datastore.sql_tables import schema
+from txdav.common.datastore.sql_directory import GroupsRecord
+
+"""
+Classes and methods that relate to directory objects in the SQL store, e.g.
+delegates, groups, etc.
+"""
+
+class GroupAttendeeRecord(SerializableRecord, fromTable(schema.GROUP_ATTENDEE)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.GROUP_ATTENDEE}.
+    """
+
+    @classmethod
+    @inlineCallbacks
+    def groupAttendeesForObjects(cls, txn, cobjs):
+        """
+        Get (group attendee record, group record) pairs for each of the specified calendar objects.
+        """
+
+        # Do a join to get what we need
+        rows = yield Select(
+            list(GroupAttendeeRecord.table) + list(GroupsRecord.table),
+            From=GroupAttendeeRecord.table.join(GroupsRecord.table, GroupAttendeeRecord.groupID == GroupsRecord.groupID),
+            Where=(GroupAttendeeRecord.resourceID.In(Parameter("cobjs", len(cobjs))))
+        ).on(txn, cobjs=cobjs)
+
+        results = []
+        groupAttendeeNames = [GroupAttendeeRecord.__colmap__[column] for column in list(GroupAttendeeRecord.table)]
+        groupsNames = [GroupsRecord.__colmap__[column] for column in list(GroupsRecord.table)]
+        split_point = len(groupAttendeeNames)
+        for row in rows:
+            groupAttendeeRow = row[:split_point]
+            groupAttendeeRecord = GroupAttendeeRecord()
+            groupAttendeeRecord._attributesFromRow(zip(groupAttendeeNames, groupAttendeeRow))
+            groupAttendeeRecord.transaction = txn
+            groupsRow = row[split_point:]
+            groupsRecord = GroupsRecord()
+            groupsRecord._attributesFromRow(zip(groupsNames, groupsRow))
+            groupsRecord.transaction = txn
+            results.append((groupAttendeeRecord, groupsRecord,))
+
+        returnValue(results)
+
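A minimal usage sketch for the join above (the calendar object resource ids are illustrative):

    @inlineCallbacks
    def _example(txn):
        pairs = yield GroupAttendeeRecord.groupAttendeesForObjects(txn, [101, 102])
        for groupAttendee, group in pairs:
            # One group-attendee row paired with its joined group record.
            print groupAttendee.groupID, group.groupID
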
+
+
+class GroupShareeRecord(SerializableRecord, fromTable(schema.GROUP_SHAREE)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.GROUP_SHAREE}.
+    """
+    pass

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql_external.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql_external.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/sql_external.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -18,12 +18,15 @@
 SQL backend for CalDAV storage when resources are external.
 """
 
-from twisted.internet.defer import succeed, inlineCallbacks, returnValue
+from twisted.internet.defer import inlineCallbacks, returnValue
 
 from twext.python.log import Logger
 
 from txdav.caldav.datastore.sql import CalendarHome, Calendar, CalendarObject
+from txdav.caldav.datastore.sql_attachment import Attachment, AttachmentLink
+from txdav.caldav.datastore.sql_directory import GroupAttendeeRecord, GroupShareeRecord
 from txdav.caldav.icalendarstore import ComponentUpdateState, ComponentRemoveState
+from txdav.common.datastore.sql_directory import GroupsRecord
 from txdav.common.datastore.sql_external import CommonHomeExternal, CommonHomeChildExternal, \
     CommonObjectResourceExternal
 
@@ -34,10 +37,10 @@
     Wrapper for a CalendarHome that is external and only supports a limited set of operations.
     """
 
-    def __init__(self, transaction, ownerUID, resourceID):
+    def __init__(self, transaction, homeData):
 
-        CalendarHome.__init__(self, transaction, ownerUID)
-        CommonHomeExternal.__init__(self, transaction, ownerUID, resourceID)
+        CalendarHome.__init__(self, transaction, homeData)
+        CommonHomeExternal.__init__(self, transaction, homeData)
 
 
     def hasCalendarResourceUIDSomewhereElse(self, uid, ok_object, mode):
@@ -61,6 +64,36 @@
         raise AssertionError("CommonHomeExternal: not supported")
 
 
+    @inlineCallbacks
+    def getAllAttachments(self):
+        """
+        Return all the L{Attachment} objects associated with this calendar home.
+        Needed during migration.
+        """
+        raw_results = yield self._txn.store().conduit.send_home_get_all_attachments(self)
+        returnValue([Attachment.deserialize(self._txn, attachment) for attachment in raw_results])
+
+
+    @inlineCallbacks
+    def readAttachmentData(self, remote_id, attachment):
+        """
+        Read the data associated with an attachment associated with this calendar home.
+        Needed during migration only.
+        """
+        stream = attachment.store(attachment.contentType(), attachment.name(), migrating=True)
+        yield self._txn.store().conduit.send_get_attachment_data(self, remote_id, stream)
+
+
+    @inlineCallbacks
+    def getAttachmentLinks(self):
+        """
+        Read the attachment<->calendar object mapping data associated with this calendar home.
+        Needed during migration only.
+        """
+        raw_results = yield self._txn.store().conduit.send_home_get_attachment_links(self)
+        returnValue([AttachmentLink.deserialize(self._txn, attachment) for attachment in raw_results])
+
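A rough sketch of how the calls above could fit together during migration; ``remote`` is an Attachment deserialized from getAllAttachments() and ``local`` is assumed to be an already-created attachment in the destination home (pairing the two is handled by the real migration code, not shown here):

    @inlineCallbacks
    def _migrateOne(externalHome, remote, local):
        # Copy metadata (content type, name, md5, size) from the remote copy.
        yield local.copyRemote(remote)
        # Stream the remote data into the local attachment's storage.
        yield externalHome.readAttachmentData(remote.id(), local)
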
+
     def getAllDropboxIDs(self):
         """
         No children.
@@ -82,13 +115,17 @@
         raise AssertionError("CommonHomeExternal: not supported")
 
 
-    def createdHome(self):
+    @inlineCallbacks
+    def getAllGroupAttendees(self):
         """
-        No children - make this a no-op.
+        Return a list of (L{GroupAttendeeRecord}, L{GroupsRecord}) tuples, one for each group attendee referenced
+        in calendar data owned by this home.
         """
-        return succeed(None)
 
+        raw_results = yield self._txn.store().conduit.send_home_get_all_group_attendees(self)
+        returnValue([(GroupAttendeeRecord.deserialize(item[0]), GroupsRecord.deserialize(item[1]),) for item in raw_results])
 
+
     def splitCalendars(self):
         """
         No children.
@@ -157,10 +194,16 @@
     """
     SQL-based implementation of L{ICalendar}.
     """
-    pass
 
+    @inlineCallbacks
+    def groupSharees(self):
+        results = yield self._txn.store().conduit.send_homechild_group_sharees(self)
+        results["groups"] = [GroupsRecord.deserialize(items) for items in results["groups"]]
+        results["sharees"] = [GroupShareeRecord.deserialize(items) for items in results["sharees"]]
+        returnValue(results)
 
 
+
 class CalendarObjectExternal(CommonObjectResourceExternal, CalendarObject):
     """
     SQL-based implementation of L{ICalendarObject}.

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/common.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/common.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/common.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -375,7 +375,7 @@
     @inlineCallbacks
     def notificationUnderTest(self):
         txn = self.transactionUnderTest()
-        notifications = yield txn.notificationsWithUID("home1")
+        notifications = yield txn.notificationsWithUID("home1", create=True)
         yield notifications.writeNotificationObject(
             "abc",
             json.loads("{\"notification-type\":\"invite-notification\"}"),
@@ -402,7 +402,7 @@
         objects changed or deleted since
         """
         txn = self.transactionUnderTest()
-        coll = yield txn.notificationsWithUID("home1")
+        coll = yield txn.notificationsWithUID("home1", create=True)
         yield coll.writeNotificationObject(
             "1",
             json.loads("{\"notification-type\":\"invite-notification\"}"),
@@ -435,7 +435,7 @@
         overwrite the notification object.
         """
         notifications = yield self.transactionUnderTest().notificationsWithUID(
-            "home1"
+            "home1", create=True
         )
         yield notifications.writeNotificationObject(
             "abc",
@@ -462,7 +462,7 @@
         """
         # Prime the home collection first
         yield self.transactionUnderTest().notificationsWithUID(
-            "home1"
+            "home1", create=True
         )
         yield self.commit()
 
@@ -512,7 +512,7 @@
         overwrite the notification object.
         """
         notifications = yield self.transactionUnderTest().notificationsWithUID(
-            "home1"
+            "home1", create=True
         )
         yield notifications.writeNotificationObject(
             "abc",
@@ -555,7 +555,7 @@
         L{INotificationCollection} that the object was retrieved from.
         """
         txn = self.transactionUnderTest()
-        collection = yield txn.notificationsWithUID("home1")
+        collection = yield txn.notificationsWithUID("home1", create=True)
         notification = yield self.notificationUnderTest()
         self.assertIdentical(collection, notification.notificationCollection())
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_attachments.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_attachments.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_attachments.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -29,7 +29,8 @@
 from twistedcaldav.config import config
 from twistedcaldav.ical import Property, Component
 
-from txdav.caldav.datastore.sql import CalendarStoreFeatures, DropBoxAttachment, \
+from txdav.caldav.datastore.sql import CalendarStoreFeatures
+from txdav.caldav.datastore.sql_attachment import DropBoxAttachment, \
     ManagedAttachment
 from txdav.caldav.datastore.test.common import CaptureProtocol
 from txdav.caldav.icalendarstore import IAttachmentStorageTransport, IAttachment, \

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_index_file.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_index_file.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_index_file.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -512,7 +512,7 @@
 """,
                 "20080601T000000Z", "20080602T000000Z",
                 "mailto:user1 at example.com",
-                (('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'F'),),
+                (('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'F'),),
             ),
             (
                 "#1.2 Simple component - transparent",
@@ -534,7 +534,7 @@
 """,
                 "20080602T000000Z", "20080603T000000Z",
                 "mailto:user1 at example.com",
-                (('N', "2008-06-02 12:00:00+00:00", "2008-06-02 13:00:00+00:00", 'B', 'T'),),
+                (('N', "2008-06-02 12:00:00", "2008-06-02 13:00:00", 'B', 'T'),),
             ),
             (
                 "#1.3 Simple component - canceled",
@@ -556,7 +556,7 @@
 """,
                 "20080603T000000Z", "20080604T000000Z",
                 "mailto:user1 at example.com",
-                (('N', "2008-06-03 12:00:00+00:00", "2008-06-03 13:00:00+00:00", 'F', 'F'),),
+                (('N', "2008-06-03 12:00:00", "2008-06-03 13:00:00", 'F', 'F'),),
             ),
             (
                 "#1.4 Simple component - tentative",
@@ -578,7 +578,7 @@
 """,
                 "20080604T000000Z", "20080605T000000Z",
                 "mailto:user1 at example.com",
-                (('N', "2008-06-04 12:00:00+00:00", "2008-06-04 13:00:00+00:00", 'T', 'F'),),
+                (('N', "2008-06-04 12:00:00", "2008-06-04 13:00:00", 'T', 'F'),),
             ),
             (
                 "#2.1 Recurring component - busy",
@@ -601,8 +601,8 @@
                 "20080605T000000Z", "20080607T000000Z",
                 "mailto:user1 at example.com",
                 (
-                    ('N', "2008-06-05 12:00:00+00:00", "2008-06-05 13:00:00+00:00", 'B', 'F'),
-                    ('N', "2008-06-06 12:00:00+00:00", "2008-06-06 13:00:00+00:00", 'B', 'F'),
+                    ('N', "2008-06-05 12:00:00", "2008-06-05 13:00:00", 'B', 'F'),
+                    ('N', "2008-06-06 12:00:00", "2008-06-06 13:00:00", 'B', 'F'),
                 ),
             ),
             (
@@ -637,8 +637,8 @@
                 "20080607T000000Z", "20080609T000000Z",
                 "mailto:user1 at example.com",
                 (
-                    ('N', "2008-06-07 12:00:00+00:00", "2008-06-07 13:00:00+00:00", 'B', 'F'),
-                    ('N', "2008-06-08 14:00:00+00:00", "2008-06-08 15:00:00+00:00", 'B', 'T'),
+                    ('N', "2008-06-07 12:00:00", "2008-06-07 13:00:00", 'B', 'F'),
+                    ('N', "2008-06-08 14:00:00", "2008-06-08 15:00:00", 'B', 'T'),
                 ),
             ),
         )
@@ -714,11 +714,11 @@
                 (
                     (
                         "user01",
-                        (('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'T'),),
+                        (('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'T'),),
                     ),
                     (
                         "user02",
-                        (('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'F'),),
+                        (('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'F'),),
                     ),
                 ),
             ),
@@ -767,15 +767,15 @@
                 (
                     (
                         "user01",
-                        (('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'T'),),
+                        (('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'T'),),
                     ),
                     (
                         "user02",
-                        (('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'F'),),
+                        (('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'F'),),
                     ),
                     (
                         "user03",
-                        (('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'F'),),
+                        (('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'F'),),
                     ),
                 ),
             ),
@@ -815,15 +815,15 @@
                     (
                         "user01",
                         (
-                            ('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'T'),
-                            ('N', "2008-06-02 12:00:00+00:00", "2008-06-02 13:00:00+00:00", 'B', 'T'),
+                            ('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'T'),
+                            ('N', "2008-06-02 12:00:00", "2008-06-02 13:00:00", 'B', 'T'),
                         ),
                     ),
                     (
                         "user02",
                         (
-                            ('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'F'),
-                            ('N', "2008-06-02 12:00:00+00:00", "2008-06-02 13:00:00+00:00", 'B', 'F'),
+                            ('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'F'),
+                            ('N', "2008-06-02 12:00:00", "2008-06-02 13:00:00", 'B', 'F'),
                         ),
                     ),
                 ),
@@ -875,22 +875,22 @@
                     (
                         "user01",
                         (
-                            ('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'T'),
-                            ('N', "2008-06-02 12:00:00+00:00", "2008-06-02 13:00:00+00:00", 'B', 'T'),
+                            ('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'T'),
+                            ('N', "2008-06-02 12:00:00", "2008-06-02 13:00:00", 'B', 'T'),
                         ),
                     ),
                     (
                         "user02",
                         (
-                            ('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'F'),
-                            ('N', "2008-06-02 12:00:00+00:00", "2008-06-02 13:00:00+00:00", 'B', 'F'),
+                            ('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'F'),
+                            ('N', "2008-06-02 12:00:00", "2008-06-02 13:00:00", 'B', 'F'),
                         ),
                     ),
                     (
                         "user03",
                         (
-                            ('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'F'),
-                            ('N', "2008-06-02 12:00:00+00:00", "2008-06-02 13:00:00+00:00", 'B', 'F'),
+                            ('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'F'),
+                            ('N', "2008-06-02 12:00:00", "2008-06-02 13:00:00", 'B', 'F'),
                         ),
                     ),
                 ),
@@ -945,17 +945,17 @@
                     (
                         "user01",
                         (
-                            ('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'T'),
-                            ('N', "2008-06-02 13:00:00+00:00", "2008-06-02 14:00:00+00:00", 'B', 'F'),
-                            ('N', "2008-06-03 12:00:00+00:00", "2008-06-03 13:00:00+00:00", 'B', 'T'),
+                            ('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'T'),
+                            ('N', "2008-06-02 13:00:00", "2008-06-02 14:00:00", 'B', 'F'),
+                            ('N', "2008-06-03 12:00:00", "2008-06-03 13:00:00", 'B', 'T'),
                         ),
                     ),
                     (
                         "user02",
                         (
-                            ('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'F'),
-                            ('N', "2008-06-02 13:00:00+00:00", "2008-06-02 14:00:00+00:00", 'B', 'F'),
-                            ('N', "2008-06-03 12:00:00+00:00", "2008-06-03 13:00:00+00:00", 'B', 'F'),
+                            ('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'F'),
+                            ('N', "2008-06-02 13:00:00", "2008-06-02 14:00:00", 'B', 'F'),
+                            ('N', "2008-06-03 12:00:00", "2008-06-03 13:00:00", 'B', 'F'),
                         ),
                     ),
                 ),
@@ -1025,25 +1025,25 @@
                     (
                         "user01",
                         (
-                            ('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'T'),
-                            ('N', "2008-06-02 13:00:00+00:00", "2008-06-02 14:00:00+00:00", 'B', 'F'),
-                            ('N', "2008-06-03 12:00:00+00:00", "2008-06-03 13:00:00+00:00", 'B', 'T'),
+                            ('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'T'),
+                            ('N', "2008-06-02 13:00:00", "2008-06-02 14:00:00", 'B', 'F'),
+                            ('N', "2008-06-03 12:00:00", "2008-06-03 13:00:00", 'B', 'T'),
                         ),
                     ),
                     (
                         "user02",
                         (
-                            ('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'F'),
-                            ('N', "2008-06-02 13:00:00+00:00", "2008-06-02 14:00:00+00:00", 'B', 'F'),
-                            ('N', "2008-06-03 12:00:00+00:00", "2008-06-03 13:00:00+00:00", 'B', 'T'),
+                            ('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'F'),
+                            ('N', "2008-06-02 13:00:00", "2008-06-02 14:00:00", 'B', 'F'),
+                            ('N', "2008-06-03 12:00:00", "2008-06-03 13:00:00", 'B', 'T'),
                         ),
                     ),
                     (
                         "user03",
                         (
-                            ('N', "2008-06-01 12:00:00+00:00", "2008-06-01 13:00:00+00:00", 'B', 'F'),
-                            ('N', "2008-06-02 13:00:00+00:00", "2008-06-02 14:00:00+00:00", 'B', 'F'),
-                            ('N', "2008-06-03 12:00:00+00:00", "2008-06-03 13:00:00+00:00", 'B', 'F'),
+                            ('N', "2008-06-01 12:00:00", "2008-06-01 13:00:00", 'B', 'F'),
+                            ('N', "2008-06-02 13:00:00", "2008-06-02 14:00:00", 'B', 'F'),
+                            ('N', "2008-06-03 12:00:00", "2008-06-03 13:00:00", 'B', 'F'),
                         ),
                     ),
                 ),

Deleted: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_schedule.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_schedule.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_schedule.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -1,70 +0,0 @@
-##
-# Copyright (c) 2010-2015 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-"""
-Tests for L{txdav.caldav.datastore.scheduling}.
-
-The aforementioned module is intended to eventually support implicit
-scheduling; however, it does not currently.  The interim purpose of this module
-and accompanying tests is to effectively test the interface specifications to
-make sure that the common tests don't require anything I{not} specified in the
-interface, so that dynamic proxies specified with a tool like
-C{proxyForInterface} can be used to implement features such as implicit
-scheduling or data caching as middleware in the data-store layer.
-"""
-
-from twisted.trial.unittest import TestCase, SkipTest
-from txdav.caldav.datastore.test.test_file import FileStorageTests
-from txdav.caldav.datastore.schedule import ImplicitStore
-
-simpleEvent = """BEGIN:VCALENDAR
-VERSION:2.0
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:12345-67890
-DTSTART:20080601T120000Z
-DTEND:20080601T130000Z
-ORGANIZER:mailto:user1 at example.com
-ATTENDEE:mailto:user1 at example.com
-ATTENDEE:mailto:user2 at example.com
-END:VEVENT
-END:VCALENDAR
-"""
-
-class ImplicitStoreTests(FileStorageTests, TestCase):
-    """
-    Tests for L{ImplicitSchedulingStore}.
-    """
-
-    implicitStore = None
-
-    def storeUnderTest(self):
-        if self.implicitStore is None:
-            sut = super(ImplicitStoreTests, self).storeUnderTest()
-            self.implicitStore = ImplicitStore(sut)
-        return self.implicitStore
-
-
-    def skipit(self):
-        raise SkipTest("No private attribute tests.")
-
-    test_calendarObjectsWithDotFile = skipit
-    test_countComponentTypes = skipit
-    test_init = skipit
-    test_calendarObjectsWithDirectory = skipit
-    test_hasCalendarResourceUIDSomewhereElse = skipit
-
-del FileStorageTests

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_sql.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_sql.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -73,6 +73,7 @@
     Update
 from twext.enterprise.ienterprise import AlreadyFinishedError
 from twext.enterprise.jobqueue import JobItem
+from twext.enterprise.util import parseSQLTimestamp
 
 import datetime
 import os
@@ -741,14 +742,14 @@
         txn = calendarStore.newTransaction()
         home = yield txn.homeWithUID(ECALENDARTYPE, "uid1", create=True)
         cal = yield home.calendarWithName("calendar")
-        cal._created = "2011-02-05 11:22:47"
-        cal._modified = "2011-02-06 11:22:47"
+        cal._created = parseSQLTimestamp("2011-02-05 11:22:47")
+        cal._modified = parseSQLTimestamp("2011-02-06 11:22:47")
         self.assertEqual(cal.created(), datetimeMktime(datetime.datetime(2011, 2, 5, 11, 22, 47)))
         self.assertEqual(cal.modified(), datetimeMktime(datetime.datetime(2011, 2, 6, 11, 22, 47)))
 
         obj = yield self.calendarObjectUnderTest()
-        obj._created = "2011-02-07 11:22:47"
-        obj._modified = "2011-02-08 11:22:47"
+        obj._created = parseSQLTimestamp("2011-02-07 11:22:47")
+        obj._modified = parseSQLTimestamp("2011-02-08 11:22:47")
         self.assertEqual(obj.created(), datetimeMktime(datetime.datetime(2011, 2, 7, 11, 22, 47)))
         self.assertEqual(obj.modified(), datetimeMktime(datetime.datetime(2011, 2, 8, 11, 22, 47)))
 
@@ -767,13 +768,13 @@
         txn2 = calendarStore.newTransaction()
 
         notification_uid1_1 = yield txn1.notificationsWithUID(
-            "uid1",
+            "uid1", create=True
         )
 
         @inlineCallbacks
         def _defer_notification_uid1_2():
             notification_uid1_2 = yield txn2.notificationsWithUID(
-                "uid1",
+                "uid1", create=True
             )
             yield txn2.commit()
             returnValue(notification_uid1_2)
@@ -2223,7 +2224,37 @@
         yield self.commit()
 
 
+    @inlineCallbacks
+    def test_removeAfterRevisionCleanup(self):
+        """
+        Make sure L{Calendar}s can be renamed after revision cleanup
+        removes their revision table entry.
+        """
+        yield self.homeUnderTest(name="user01", create=True)
+        cal = yield self.calendarUnderTest(home="user01", name="calendar")
+        self.assertTrue(cal is not None)
+        yield self.commit()
 
+        # Remove the revision
+        cal = yield self.calendarUnderTest(home="user01", name="calendar")
+        yield cal.syncToken()
+        yield self.transactionUnderTest().deleteRevisionsBefore(cal._syncTokenRevision + 1)
+        yield self.commit()
+
+        # Rename the calendar
+        cal = yield self.calendarUnderTest(home="user01", name="calendar")
+        self.assertTrue(cal is not None)
+        yield cal.rename("calendar_renamed")
+        yield self.commit()
+
+        cal = yield self.calendarUnderTest(home="user01", name="calendar")
+        self.assertTrue(cal is None)
+        cal = yield self.calendarUnderTest(home="user01", name="calendar_renamed")
+        self.assertTrue(cal is not None)
+        yield self.commit()
+
+
+
 class SchedulingTests(CommonCommonTests, unittest.TestCase):
     """
     CalendarObject splitting tests

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_sql_sharing.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_sql_sharing.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/test/test_sql_sharing.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -464,7 +464,7 @@
         shared = yield self.calendarUnderTest(home="user02", name=sharedName)
         self.assertTrue(shared is not None)
 
-        notifyHome = yield self.transactionUnderTest().notificationsWithUID("user02")
+        notifyHome = yield self.transactionUnderTest().notificationsWithUID("user02", create=True)
         notifications = yield notifyHome.listNotificationObjects()
         self.assertEqual(len(notifications), 0)
 
@@ -587,7 +587,42 @@
         yield self.commit()
 
 
+    @inlineCallbacks
+    def test_sharingBindRecords(self):
 
+        yield self.calendarUnderTest(home="user01", name="calendar")
+        yield self.commit()
+
+        shared_name = yield self._createShare()
+
+        shared = yield self.calendarUnderTest(home="user01", name="calendar")
+        results = yield shared.sharingBindRecords()
+        self.assertEqual(len(results), 1)
+        self.assertEqual(results.keys(), ["user02"])
+        self.assertEqual(results["user02"].calendarResourceName, shared_name)
+
+
+    @inlineCallbacks
+    def test_sharedToBindRecords(self):
+
+        yield self.calendarUnderTest(home="user01", name="calendar")
+        yield self.commit()
+
+        shared_name = yield self._createShare()
+
+        home = yield self.homeUnderTest(name="user02")
+        results = yield home.sharedToBindRecords()
+        self.assertEqual(len(results), 1)
+        self.assertEqual(results.keys(), ["user01"])
+        sharedRecord = results["user01"][0]
+        ownerRecord = results["user01"][1]
+        metadataRecord = results["user01"][2]
+        self.assertEqual(ownerRecord.calendarResourceName, "calendar")
+        self.assertEqual(sharedRecord.calendarResourceName, shared_name)
+        self.assertEqual(metadataRecord.supportedComponents, None)
+
+
+
 class GroupSharingTests(BaseSharingTests):
     """
     Test store-based group sharing.
@@ -619,7 +654,7 @@
 
     @inlineCallbacks
     def _check_notifications(self, uid, items):
-        notifyHome = yield self.transactionUnderTest().notificationsWithUID(uid)
+        notifyHome = yield self.transactionUnderTest().notificationsWithUID(uid, create=True)
         notifications = yield notifyHome.listNotificationObjects()
         self.assertEqual(set(notifications), set(items))
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/util.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/util.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/datastore/util.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -522,6 +522,11 @@
             self._contentType = http_headers.MimeType.fromString(getType(self._attachment.name(), self.contentTypes))
 
 
+    def resetDetails(self, contentType, dispositionName):
+        self._contentType = contentType
+        self._dispositionName = dispositionName
+
+
     def write(self, data):
         """
         Children must override this to actually write the data, but should

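The resetDetails() hook added above simply re-stamps an attachment storer with a content type and disposition filename. A minimal usage sketch, assuming a hypothetical restampAttachment() helper and a storer obtained elsewhere (neither is part of this change):

    from txweb2.http_headers import MimeType

    def restampAttachment(storer, contentTypeString, dispositionName):
        # Re-apply the original MIME type and disposition filename to the
        # attachment storer before its data is written, e.g. while copying
        # attachments during a migration.
        storer.resetDetails(MimeType.fromString(contentTypeString), dispositionName)
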
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/icalendarstore.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/icalendarstore.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/caldav/icalendarstore.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -906,7 +906,7 @@
                             is done (more than RAW).
 
     RAW                   - store the supplied data as-is without any processing or validation. This is used
-                            for unit testing purposes only.
+                            for unit testing purposes only, or during migration.
     """
 
     NORMAL = NamedConstant()

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/sql.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/sql.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -31,16 +31,16 @@
 from twext.enterprise.dal.syntax import Delete, Insert, Len, Parameter, \
     Update, Union, Max, Select, utcNowSQL
 from twext.enterprise.locking import NamedLock
+from twext.enterprise.util import parseSQLTimestamp
 from twext.python.clsprop import classproperty
 from txweb2.http import HTTPError
 from txweb2.http_headers import MimeType
 from txweb2.responsecode import FORBIDDEN
 
-from twisted.internet.defer import inlineCallbacks, returnValue
+from twisted.internet.defer import inlineCallbacks, returnValue, succeed
 from twisted.python import hashlib
 
 from twistedcaldav.config import config
-from twistedcaldav.memcacher import Memcacher
 from twistedcaldav.vcard import Component as VCard, InvalidVCardDataError, Property, \
     vCardProductID
 
@@ -53,11 +53,12 @@
     KindChangeNotAllowedError
 from txdav.common.datastore.query.generator import SQLQueryGenerator
 from txdav.common.datastore.sql import CommonHome, CommonHomeChild, \
-    CommonObjectResource, EADDRESSBOOKTYPE, SharingMixIn, SharingInvitation
+    CommonObjectResource, EADDRESSBOOKTYPE, SharingMixIn
 from txdav.common.datastore.sql_tables import _ABO_KIND_PERSON, \
     _ABO_KIND_GROUP, _ABO_KIND_RESOURCE, _ABO_KIND_LOCATION, schema, \
     _BIND_MODE_OWN, _BIND_MODE_WRITE, _BIND_STATUS_ACCEPTED, \
     _BIND_STATUS_INVITED, _BIND_MODE_INDIRECT, _BIND_STATUS_DECLINED
+from txdav.common.datastore.sql_sharing import SharingInvitation
 from txdav.common.icommondatastore import InternalDataStoreError, \
     InvalidUIDError, UIDExistsError, ObjectResourceTooBigError, \
     InvalidObjectResourceError, InvalidComponentForStoreError, \
@@ -77,20 +78,20 @@
 
     # structured tables.  (new, preferred)
     _homeSchema = schema.ADDRESSBOOK_HOME
+    _homeMetaDataSchema = schema.ADDRESSBOOK_HOME_METADATA
+
     _bindSchema = schema.SHARED_ADDRESSBOOK_BIND
-    _homeMetaDataSchema = schema.ADDRESSBOOK_HOME_METADATA
     _revisionsSchema = schema.ADDRESSBOOK_OBJECT_REVISIONS
     _objectSchema = schema.ADDRESSBOOK_OBJECT
 
     _notifierPrefix = "CardDAV"
     _dataVersionKey = "ADDRESSBOOK-DATAVERSION"
-    _cacher = Memcacher("SQL.adbkhome", pickle=True, key_normalization=False)
 
 
-    def __init__(self, transaction, ownerUID, authzUID=None):
+    def __init__(self, transaction, homeData, authzUID=None):
 
-        super(AddressBookHome, self).__init__(transaction, ownerUID, authzUID=authzUID)
         self._addressbookPropertyStoreID = None
+        super(AddressBookHome, self).__init__(transaction, homeData, authzUID=authzUID)
         self._addressbook = None
 
 
@@ -116,6 +117,7 @@
         return (
             cls._homeSchema.RESOURCE_ID,
             cls._homeSchema.OWNER_UID,
+            cls._homeSchema.STATUS,
             cls._homeSchema.ADDRESSBOOK_PROPERTY_STORE_ID,
         )
 
@@ -131,19 +133,20 @@
         return (
             "_resourceID",
             "_ownerUID",
+            "_status",
             "_addressbookPropertyStoreID",
         )
 
 
     @inlineCallbacks
-    def initFromStore(self, no_cache=False):
+    def initFromStore(self):
         """
         Initialize this object from the store. We read in and cache all the
         extra meta-data from the DB to avoid having to do DB queries for those
         individually later.
         """
 
-        result = yield super(AddressBookHome, self).initFromStore(no_cache)
+        result = yield super(AddressBookHome, self).initFromStore()
         if result is not None:
             # Created owned address book
             addressbook = AddressBook(
@@ -167,36 +170,23 @@
 
     @inlineCallbacks
     def remove(self):
-        ah = schema.ADDRESSBOOK_HOME
         ahb = schema.SHARED_ADDRESSBOOK_BIND
-        aor = schema.ADDRESSBOOK_OBJECT_REVISIONS
-        rp = schema.RESOURCE_PROPERTY
 
         yield Delete(
             From=ahb,
             Where=ahb.ADDRESSBOOK_HOME_RESOURCE_ID == self._resourceID,
         ).on(self._txn)
 
-        yield Delete(
-            From=aor,
-            Where=aor.ADDRESSBOOK_HOME_RESOURCE_ID == self._resourceID,
-        ).on(self._txn)
+        yield super(AddressBookHome, self).remove()
 
-        yield Delete(
-            From=ah,
-            Where=ah.RESOURCE_ID == self._resourceID,
-        ).on(self._txn)
 
-        yield Delete(
-            From=rp,
-            Where=(rp.RESOURCE_ID == self._resourceID).Or(
-                rp.RESOURCE_ID == self._addressbookPropertyStoreID
-            )
-        ).on(self._txn)
+    def removeAllChildren(self):
+        """
+        This is a no-op for the single-child address book home.
+        """
+        return succeed(None)
 
-        yield self._cacher.delete(str(self._ownerUID))
 
-
     @inlineCallbacks
     def createdHome(self):
         yield self.addressbook()._initSyncToken()
@@ -473,7 +463,7 @@
 
     @classmethod
     @inlineCallbacks
-    def _getDBDataIndirect(cls, home, name, resourceID, externalID):
+    def _getDBDataIndirect(cls, home, name, resourceID, bindUID):
 
         # Get the bind row data
         row = None
@@ -503,7 +493,7 @@
         overallBindStatus = _BIND_STATUS_INVITED
         minBindRevision = None
         for row in rows:
-            bindMode, homeID, resourceGroupID, externalID, name, bindStatus, bindRevision, bindMessage = row[:cls.bindColumnCount] #@UnusedVariable
+            homeID, resourceGroupID, name, bindMode, bindStatus, bindRevision, bindUID, bindMessage = row[:cls.bindColumnCount] #@UnusedVariable
             if groupID is None:
                 groupID = resourceGroupID
             minBindRevision = min(minBindRevision, bindRevision) if minBindRevision is not None else bindRevision
@@ -543,9 +533,9 @@
         returnValue((bindData, additionalBindData, metadataData, ownerHome,))
 
 
-    def __init__(self, home, name, resourceID, mode, status, revision=0, message=None, ownerHome=None, ownerName=None, externalID=None):
+    def __init__(self, home, name, resourceID, mode, status, revision=0, message=None, ownerHome=None, ownerName=None, bindUID=None):
         ownerName = ownerHome.addressbook().name() if ownerHome else None
-        super(AddressBook, self).__init__(home, name, resourceID, mode, status, revision=revision, message=message, ownerHome=ownerHome, ownerName=ownerName, externalID=externalID)
+        super(AddressBook, self).__init__(home, name, resourceID, mode, status, revision=revision, message=message, ownerHome=ownerHome, ownerName=ownerName, bindUID=bindUID)
 
 
     def __repr__(self):
@@ -602,6 +592,14 @@
                     self._txn, resourceID=self._resourceID, name=name, id=id))
             if rows:
                 self._syncTokenRevision = rows[0][0]
+            else:
+                # Nothing was matched on the delete so insert a new row
+                self._syncTokenRevision = (
+                    yield self._completelyNewDeletedRevisionQuery.on(
+                        self._txn, homeID=self.ownerHome()._resourceID,
+                        resourceID=self._resourceID, name=name)
+                )[0][0]
+
         elif action == "update":
             rows = (
                 yield self._updateBumpTokenQuery.on(
@@ -609,9 +607,14 @@
             if rows:
                 self._syncTokenRevision = rows[0][0]
             else:
-                action = "insert"
+                # Nothing was matched on the update so insert a new row
+                self._syncTokenRevision = (
+                    yield self._completelyNewRevisionQuery.on(
+                        self._txn, homeID=self.ownerHome()._resourceID,
+                        resourceID=self._resourceID, name=name)
+                )[0][0]
 
-        if action == "insert":
+        elif action == "insert":
             # Note that an "insert" may happen for a resource that previously
             # existed and then was deleted. In that case an entry in the
             # REVISIONS table still exists so we have to detect that and do db
@@ -862,7 +865,7 @@
 
 
     @classmethod
-    def create(cls, home, name, externalID=None):
+    def create(cls, home, name, bindUID=None):
         if name == home.addressbook().name():
             # raise HomeChildNameAlreadyExistsError
             pass
@@ -987,6 +990,8 @@
             _ABO_KIND_GROUP,  # obj.KIND,
             "1",  # obj.MD5, non-zero temporary value; set to correct value when known
             "1",  # Len(obj.TEXT), non-zero temporary value; set to correct value when known
+            None,
+            False,
             self._created,  # obj.CREATED,
             self._modified,  # obj.MODIFIED,
         ]
@@ -1128,7 +1133,7 @@
             home._txn, homeID=home._resourceID
         )
         for groupRow in groupRows:
-            bindMode, homeID, resourceID, externalID, bindName, bindStatus, bindRevision, bindMessage = groupRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
+            homeID, resourceID, bindName, bindMode, bindStatus, bindRevision, bindUID, bindMessage = groupRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
             ownerAddressBookID = yield AddressBookObject.ownerAddressBookIDFromGroupID(home._txn, resourceID)
             ownerHome = yield home._txn.homeWithResourceID(home._homeType, ownerAddressBookID)
             names |= set([ownerHome.uid()])
@@ -1156,7 +1161,7 @@
         )
         # get ownerHomeIDs
         for dataRow in dataRows:
-            bindMode, homeID, resourceID, externalID, bindName, bindStatus, bindRevision, bindMessage = dataRow[:cls.bindColumnCount] #@UnusedVariable
+            homeID, resourceID, bindName, bindMode, bindStatus, bindRevision, bindUID, bindMessage = dataRow[:cls.bindColumnCount] #@UnusedVariable
             ownerHome = yield home.ownerHomeWithChildID(resourceID)
             ownerHomeToDataRowMap[ownerHome] = dataRow
 
@@ -1165,12 +1170,16 @@
             home._txn, homeID=home._resourceID
         )
         for groupBindRow in groupBindRows:
-            bindMode, homeID, resourceID, externalID, name, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
+            homeID, resourceID, name, bindMode, bindStatus, bindRevision, bindUID, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
             ownerAddressBookID = yield AddressBookObject.ownerAddressBookIDFromGroupID(home._txn, resourceID)
             ownerHome = yield home.ownerHomeWithChildID(ownerAddressBookID)
             if ownerHome not in ownerHomeToDataRowMap:
-                groupBindRow[0] = _BIND_MODE_INDIRECT
-                groupBindRow[3:7] = 4 * [None]  # bindName, bindStatus, bindRevision, bindMessage
+                groupBindRow[cls.bindColumns().index(cls._bindSchema.BIND_MODE)] = _BIND_MODE_INDIRECT
+                groupBindRow[cls.bindColumns().index(cls._bindSchema.RESOURCE_NAME)] = None
+                groupBindRow[cls.bindColumns().index(cls._bindSchema.BIND_STATUS)] = None
+                groupBindRow[cls.bindColumns().index(cls._bindSchema.BIND_REVISION)] = None
+                groupBindRow[cls.bindColumns().index(cls._bindSchema.BIND_UID)] = None
+                groupBindRow[cls.bindColumns().index(cls._bindSchema.MESSAGE)] = None
                 ownerHomeToDataRowMap[ownerHome] = groupBindRow
 
         if ownerHomeToDataRowMap:
@@ -1259,7 +1268,7 @@
 
     @classmethod
     @inlineCallbacks
-    def _indirectObjectWithNameOrID(cls, home, name=None, resourceID=None, externalID=None, accepted=True):
+    def _indirectObjectWithNameOrID(cls, home, name=None, resourceID=None, bindUID=None, accepted=True):
         # replaces objectWithName()
         """
         Synthesize an indirect child for matching name or id based on whether shared groups exist.
@@ -1272,7 +1281,7 @@
             exists.
         """
 
-        dbData = yield cls._getDBDataIndirect(home, name, resourceID, externalID)
+        dbData = yield cls._getDBDataIndirect(home, name, resourceID, bindUID)
         if dbData is None:
             returnValue(None)
         bindData, additionalBindData, metadataData, ownerHome = dbData
@@ -1410,7 +1419,7 @@
             readWriteGroupIDs = set()
             readOnlyGroupIDs = set()
             for groupBindRow in groupBindRows:
-                bindMode, homeID, resourceID, externalID, name, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
+                homeID, resourceID, name, bindMode, bindStatus, bindRevision, bindUID, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
                 if bindMode == _BIND_MODE_WRITE:
                     readWriteGroupIDs.add(resourceID)
                 else:
@@ -1471,7 +1480,7 @@
         readWriteGroupIDs = []
         readOnlyGroupIDs = []
         for groupBindRow in groupBindRows:
-            bindMode, homeID, resourceID, externalID, name, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
+            homeID, resourceID, name, bindMode, bindStatus, bindRevision, bindUID, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
             if bindMode == _BIND_MODE_WRITE:
                 readWriteGroupIDs.append(resourceID)
             else:
@@ -1589,10 +1598,10 @@
                 subt,
                 homeID=shareeHome._resourceID,
                 resourceID=self._resourceID,
-                externalID=None,
                 name=newName,
                 mode=mode,
                 bindStatus=status,
+                bindUID=None,
                 message=summary
             )
             returnValue(newName)
@@ -1903,11 +1912,13 @@
 
         for attr, value in zip(child._rowAttributes(), objectData):
             setattr(child, attr, value)
+        child._created = parseSQLTimestamp(child._created)
+        child._modified = parseSQLTimestamp(child._modified)
 
         yield child._loadPropertyStore(propstore)
 
         if groupBindData:
-            bindMode, homeID, resourceID, externalID, bindName, bindStatus, bindRevision, bindMessage = groupBindData[:AddressBookObject.bindColumnCount] #@UnusedVariable
+            homeID, resourceID, bindName, bindMode, bindStatus, bindRevision, bindUID, bindMessage = groupBindData[:AddressBookObject.bindColumnCount] #@UnusedVariable
             child._bindMode = bindMode
             child._bindStatus = bindStatus
             child._bindMessage = bindMessage
@@ -2008,7 +2019,7 @@
         self._bindName = None
         self._bindRevision = None
         super(AddressBookObject, self).__init__(addressbook, name, uid, resourceID, options)
-        self._externalID = None
+        self._bindUID = None
         self._options = {} if options is None else options
 
 
@@ -2217,7 +2228,7 @@
         )
         if groupBindRows:
             groupBindRow = groupBindRows[0]
-            bindMode, homeID, resourceID, externalID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
+            homeID, resourceID, bindName, bindMode, bindStatus, bindRevision, bindUID, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
 
             if accepted is not None and (bindStatus == _BIND_STATUS_ACCEPTED) != bool(accepted):
                 returnValue(None)
@@ -2258,6 +2269,8 @@
             obj.KIND,
             obj.MD5,
             Len(obj.TEXT),
+            obj.TRASHED,
+            obj.IS_IN_TRASH,
             obj.CREATED,
             obj.MODIFIED,
             obj.DATAVERSION,
@@ -2274,6 +2287,8 @@
             "_kind",
             "_md5",
             "_size",
+            "_trashed",
+            "_is_in_trash",
             "_created",
             "_modified",
             "_dataversion",
@@ -2321,7 +2336,7 @@
         if addressbook.owned() or addressbook.fullyShared():
             rows = yield super(AddressBookObject, cls)._allColumnsWithParentAndNames(addressbook, names)
             if addressbook.fullyShared() and addressbook._groupForSharedAddressBookName() in names:
-                rows.append(addressbook._groupForSharedAddressBookRow())
+                rows += (addressbook._groupForSharedAddressBookRow(),)
         else:
             acceptedGroupIDs = yield addressbook.acceptedGroupIDs()
             allowedObjectIDs = yield addressbook.expandGroupIDs(addressbook._txn, acceptedGroupIDs)
@@ -2616,6 +2631,8 @@
                     dataVersion=self._currentDataVersion,
                 )
             )[0]
+            self._created = parseSQLTimestamp(self._created)
+            self._modified = parseSQLTimestamp(self._modified)
 
             # delete foreign members table rows for this object
             groupIDRows = yield Delete(
@@ -2647,7 +2664,7 @@
                 )
 
         else:
-            self._modified = (yield Update(
+            self._modified = parseSQLTimestamp((yield Update(
                 {
                     abo.VCARD_TEXT: self._objectText,
                     abo.MD5: self._md5,
@@ -2655,7 +2672,7 @@
                     abo.MODIFIED: utcNowSQL,
                 },
                 Where=abo.RESOURCE_ID == self._resourceID,
-                Return=abo.MODIFIED).on(self._txn))[0][0]
+                Return=abo.MODIFIED).on(self._txn))[0][0])
 
         if self._kind == _ABO_KIND_GROUP:
 

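The _changeRevision() changes above make the address book revision bump tolerant of rows purged by revision cleanup: when the targeted update or delete matches nothing, a completely new revision row is inserted instead. Roughly the same pattern, sketched against a hypothetical sqlite3 table purely for illustration (the server uses its own DAL and schema):

    import sqlite3

    def bump_or_insert_revision(conn, collection_id, name, new_revision):
        # Try to bump the existing revision row first.
        cur = conn.execute(
            "UPDATE revisions SET revision = ? WHERE collection_id = ? AND name = ?",
            (new_revision, collection_id, name),
        )
        if cur.rowcount == 0:
            # Nothing matched (e.g. the row was removed by revision cleanup),
            # so insert a brand new row instead of giving up.
            conn.execute(
                "INSERT INTO revisions (collection_id, name, revision) VALUES (?, ?, ?)",
                (collection_id, name, new_revision),
            )
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE revisions (collection_id, name, revision)")
    bump_or_insert_revision(conn, 1, "group.vcf", 42)
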
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/sql_external.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/sql_external.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/sql_external.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -18,8 +18,6 @@
 SQL backend for CardDAV storage when resources are external.
 """
 
-from twisted.internet.defer import succeed
-
 from twext.python.log import Logger
 
 from txdav.carddav.datastore.sql import AddressBookHome, AddressBook, \
@@ -31,10 +29,10 @@
 
 class AddressBookHomeExternal(CommonHomeExternal, AddressBookHome):
 
-    def __init__(self, transaction, ownerUID, resourceID):
+    def __init__(self, transaction, homeData):
 
-        AddressBookHome.__init__(self, transaction, ownerUID)
-        CommonHomeExternal.__init__(self, transaction, ownerUID, resourceID)
+        AddressBookHome.__init__(self, transaction, homeData)
+        CommonHomeExternal.__init__(self, transaction, homeData)
 
 
     def hasAddressBookResourceUIDSomewhereElse(self, uid, ok_object, mode):
@@ -51,13 +49,6 @@
         raise AssertionError("CommonHomeExternal: not supported")
 
 
-    def createdHome(self):
-        """
-        No children - make this a no-op.
-        """
-        return succeed(None)
-
-
     def addressbook(self):
         """
         No children.

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/test/test_sql.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/test/test_sql.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -70,7 +70,7 @@
         populateTxn = self.storeUnderTest().newTransaction()
         for homeUID in self.requirements:
             addressbooks = self.requirements[homeUID]
-            home = yield populateTxn.addressbookHomeWithUID(homeUID, True)
+            home = yield populateTxn.addressbookHomeWithUID(homeUID, create=True)
             if addressbooks is not None:
                 addressbook = home.addressbook()
 
@@ -364,13 +364,13 @@
         txn2 = addressbookStore.newTransaction()
 
         notification_uid1_1 = yield txn1.notificationsWithUID(
-            "uid1",
+            "uid1", create=True,
         )
 
         @inlineCallbacks
         def _defer_notification_uid1_2():
             notification_uid1_2 = yield txn2.notificationsWithUID(
-                "uid1",
+                "uid1", create=True,
             )
             yield txn2.commit()
             returnValue(notification_uid1_2)
@@ -576,7 +576,7 @@
 
         aboMembers = schema.ABO_MEMBERS
         memberRows = yield Select([aboMembers.GROUP_ID, aboMembers.MEMBER_ID], From=aboMembers, Where=aboMembers.REMOVED == False).on(txn)
-        self.assertEqual(memberRows, [])
+        self.assertEqual(list(memberRows), [])
 
         aboForeignMembers = schema.ABO_FOREIGN_MEMBERS
         foreignMemberRows = yield Select([aboForeignMembers.GROUP_ID, aboForeignMembers.MEMBER_ADDRESS], From=aboForeignMembers).on(txn)
@@ -607,7 +607,7 @@
         )
 
         foreignMemberRows = yield Select([aboForeignMembers.GROUP_ID, aboForeignMembers.MEMBER_ADDRESS], From=aboForeignMembers).on(txn)
-        self.assertEqual(foreignMemberRows, [])
+        self.assertEqual(list(foreignMemberRows), [])
 
         yield subgroupObject.remove()
         memberRows = yield Select([aboMembers.GROUP_ID, aboMembers.MEMBER_ID, aboMembers.REMOVED, aboMembers.REVISION], From=aboMembers).on(txn)
@@ -917,3 +917,119 @@
         obj = yield self.addressbookObjectUnderTest(name="data1.ics", addressbook_name="addressbook")
         self.assertEqual(obj._dataversion, obj._currentDataVersion)
         yield self.commit()
+
+
+    @inlineCallbacks
+    def test_updateAfterRevisionCleanup(self):
+        """
+        Make sure L{AddressBookObject}s can be updated after revision cleanup
+        removes their revision table entry.
+        """
+        person = """BEGIN:VCARD
+VERSION:3.0
+N:Thompson;Default1;;;
+FN:Default1 Thompson
+EMAIL;type=INTERNET;type=WORK;type=pref:lthompson1@example.com
+TEL;type=WORK;type=pref:1-555-555-5555
+TEL;type=CELL:1-444-444-4444
+item1.ADR;type=WORK;type=pref:;;1245 Test;Sesame Street;California;11111;USA
+item1.X-ABADR:us
+UID:uid-person
+X-ADDRESSBOOKSERVER-KIND:person
+END:VCARD
+"""
+        group = """BEGIN:VCARD
+VERSION:3.0
+N:Group;Fancy;;;
+FN:Fancy Group
+UID:uid-group
+X-ADDRESSBOOKSERVER-KIND:group
+X-ADDRESSBOOKSERVER-MEMBER:urn:uuid:uid-person
+END:VCARD
+"""
+        group_update = """BEGIN:VCARD
+VERSION:3.0
+N:Group2;Fancy;;;
+FN:Fancy Group2
+UID:uid-group
+X-ADDRESSBOOKSERVER-KIND:group
+X-ADDRESSBOOKSERVER-MEMBER:urn:uuid:uid-person
+END:VCARD
+"""
+
+        yield self.homeUnderTest()
+        adbk = yield self.addressbookUnderTest(name="addressbook")
+        yield adbk.createAddressBookObjectWithName("person.vcf", VCard.fromString(person))
+        yield adbk.createAddressBookObjectWithName("group.vcf", VCard.fromString(group))
+        yield self.commit()
+
+        # Remove the revision
+        adbk = yield self.addressbookUnderTest(name="addressbook")
+        yield adbk.syncToken()
+        yield self.transactionUnderTest().deleteRevisionsBefore(adbk._syncTokenRevision + 1)
+        yield self.commit()
+
+        # Update the object
+        obj = yield self.addressbookObjectUnderTest(name="group.vcf", addressbook_name="addressbook")
+        yield obj.setComponent(VCard.fromString(group_update))
+        yield self.commit()
+
+        obj = yield self.addressbookObjectUnderTest(name="group.vcf", addressbook_name="addressbook")
+        self.assertTrue(obj is not None)
+        obj = yield self.addressbookObjectUnderTest(name="person.vcf", addressbook_name="addressbook")
+        self.assertTrue(obj is not None)
+        yield self.commit()
+
+
+    @inlineCallbacks
+    def test_removeAfterRevisionCleanup(self):
+        """
+        Make sure L{AddressBookObject}s can be removed after revision cleanup
+        removes their revision table entry.
+        """
+        person = """BEGIN:VCARD
+VERSION:3.0
+N:Thompson;Default1;;;
+FN:Default1 Thompson
+EMAIL;type=INTERNET;type=WORK;type=pref:lthompson1@example.com
+TEL;type=WORK;type=pref:1-555-555-5555
+TEL;type=CELL:1-444-444-4444
+item1.ADR;type=WORK;type=pref:;;1245 Test;Sesame Street;California;11111;USA
+item1.X-ABADR:us
+UID:uid-person
+X-ADDRESSBOOKSERVER-KIND:person
+END:VCARD
+"""
+        group = """BEGIN:VCARD
+VERSION:3.0
+N:Group;Fancy;;;
+FN:Fancy Group
+UID:uid-group
+X-ADDRESSBOOKSERVER-KIND:group
+X-ADDRESSBOOKSERVER-MEMBER:urn:uuid:uid-person
+END:VCARD
+"""
+
+        yield self.homeUnderTest()
+        adbk = yield self.addressbookUnderTest(name="addressbook")
+        yield adbk.createAddressBookObjectWithName("person.vcf", VCard.fromString(person))
+        yield adbk.createAddressBookObjectWithName("group.vcf", VCard.fromString(group))
+        yield self.commit()
+
+        # Remove the revision
+        adbk = yield self.addressbookUnderTest(name="addressbook")
+        yield adbk.syncToken()
+        yield self.transactionUnderTest().deleteRevisionsBefore(adbk._syncTokenRevision + 1)
+        yield self.commit()
+
+        # Remove the object
+        obj = yield self.addressbookObjectUnderTest(name="group.vcf", addressbook_name="addressbook")
+        self.assertTrue(obj is not None)
+        yield obj.remove()
+        yield self.commit()
+
+        obj = yield self.addressbookObjectUnderTest(name="group.vcf", addressbook_name="addressbook")
+        self.assertTrue(obj is None)
+        obj = yield self.addressbookObjectUnderTest(name="person.vcf", addressbook_name="addressbook")
+        self.assertTrue(obj is not None)
+        yield self.commit()

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/test/test_sql_sharing.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/test/test_sql_sharing.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/carddav/datastore/test/test_sql_sharing.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -45,7 +45,7 @@
         for homeUID in self.requirements:
             addressbooks = self.requirements[homeUID]
             if addressbooks is not None:
-                home = yield populateTxn.addressbookHomeWithUID(homeUID, True)
+                home = yield populateTxn.addressbookHomeWithUID(homeUID, create=True)
                 addressbook = home.addressbook()
 
                 addressbookObjNames = addressbooks[addressbook.name()]
@@ -198,7 +198,7 @@
 
     @inlineCallbacks
     def _check_notifications(self, home, items):
-        notifyHome = yield self.transactionUnderTest().notificationsWithUID(home)
+        notifyHome = yield self.transactionUnderTest().notificationsWithUID(home, create=True)
         notifications = yield notifyHome.listNotificationObjects()
         self.assertEqual(set(notifications), set(items))
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/file.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/file.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/file.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -44,7 +44,8 @@
 from txdav.common.icommondatastore import HomeChildNameNotAllowedError, \
     HomeChildNameAlreadyExistsError, NoSuchHomeChildError, \
     InternalDataStoreError, ObjectResourceNameNotAllowedError, \
-    ObjectResourceNameAlreadyExistsError, NoSuchObjectResourceError
+    ObjectResourceNameAlreadyExistsError, NoSuchObjectResourceError, \
+    ECALENDARTYPE, EADDRESSBOOKTYPE
 from txdav.common.idirectoryservice import IStoreDirectoryService
 from txdav.common.inotifications import INotificationCollection, \
     INotificationObject
@@ -64,16 +65,6 @@
 from twistedcaldav.sql import AbstractSQLDatabase, db_prefix
 import os
 
-ECALENDARTYPE = 0
-EADDRESSBOOKTYPE = 1
-
-# Labels used to identify the class of resource being modified, so that
-# notification systems can target the correct application
-NotifierPrefixes = {
-    ECALENDARTYPE : "CalDAV",
-    EADDRESSBOOKTYPE : "CardDAV",
-}
-
 TOPPATHS = (
     "calendars",
     "addressbooks"
@@ -343,15 +334,15 @@
         CommonStoreTransaction._homeClass[EADDRESSBOOKTYPE] = AddressBookHome
 
 
-    def calendarHomeWithUID(self, uid, create=False):
-        return self.homeWithUID(ECALENDARTYPE, uid, create=create)
+    def calendarHomeWithUID(self, uid, status=None, create=False):
+        return self.homeWithUID(ECALENDARTYPE, uid, status=status, create=create)
 
 
-    def addressbookHomeWithUID(self, uid, create=False):
-        return self.homeWithUID(EADDRESSBOOKTYPE, uid, create=create)
+    def addressbookHomeWithUID(self, uid, status=None, create=False):
+        return self.homeWithUID(EADDRESSBOOKTYPE, uid, status=status, create=create)
 
 
-    def _determineMemo(self, storeType, uid, create=False):
+    def _determineMemo(self, storeType, uid, status=None, create=False):
         """
         Determine the memo dictionary to use for homeWithUID.
         """
@@ -374,7 +365,7 @@
 
 
     @memoizedKey("uid", _determineMemo, deferredResult=False)
-    def homeWithUID(self, storeType, uid, create=False):
+    def homeWithUID(self, storeType, uid, status=None, create=False):
         if uid.startswith("."):
             return None
 
@@ -385,7 +376,7 @@
 
 
     @memoizedKey("uid", "_notificationHomes", deferredResult=False)
-    def notificationsWithUID(self, uid, home=None):
+    def notificationsWithUID(self, uid, home=None, create=False):
 
         if home is None:
             home = self.homeWithUID(self._notificationHomeType, uid, create=True)

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/attachments.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/attachments.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/attachments.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -15,6 +15,9 @@
 ##
 
 from twisted.internet.defer import inlineCallbacks, returnValue
+from txdav.caldav.icalendarstore import InvalidAttachmentOperation
+from txdav.common.datastore.podding.util import UtilityConduitMixin
+from txweb2.http_headers import generateContentType
 
 
 class AttachmentsConduitMixin(object):
@@ -150,3 +153,48 @@
             request["rids"],
             request["managedID"],
         )
+
+
+    @inlineCallbacks
+    def send_get_attachment_data(self, home, attachment_id, stream):
+        """
+        Managed attachment readAttachmentData call. We are using streams on the sender and the receiver
+        side to avoid reading the whole attachment into memory.
+
+        @param home: the home whose attachment is being read
+        @type home: L{CalendarHome}
+        @param attachment_id: attachment-id to get
+        @type attachment_id: C{str}
+        @param stream: attachment data stream to write to
+        @type stream: L{IStream}
+        """
+
+        actionName = "get-attachment-data"
+        txn, request, server = yield self._getRequestForStoreObject(actionName, home, False)
+        request["attachmentID"] = attachment_id
+
+        response = yield self.sendRequestToServer(txn, server, request, writeStream=stream)
+        returnValue(response)
+
+
+    @inlineCallbacks
+    def recv_get_attachment_data(self, txn, request, stream):
+        """
+        Process a getAttachmentData cross-pod request. Request arguments as per L{send_get_attachment_data}.
+
+        @param request: request arguments
+        @type request: C{dict}
+        """
+
+        home, _ignore = yield self._getStoreObjectForRequest(txn, request)
+        attachment = yield home.getAttachmentByID(request["attachmentID"])
+        if attachment is None:
+            raise InvalidAttachmentOperation("Attachment is missing: {}".format(request["attachmentID"]))
+
+        attachment.retrieve(stream)
+        returnValue((generateContentType(attachment.contentType()), attachment.name(),))
+
+
+# Calls on L{CommonHome} objects
+UtilityConduitMixin._make_simple_action(AttachmentsConduitMixin, "home_get_all_attachments", "getAllAttachments", classMethod=False, transform_recv_result=UtilityConduitMixin._to_serialize_list)
+UtilityConduitMixin._make_simple_action(AttachmentsConduitMixin, "home_get_attachment_links", "getAttachmentLinks", classMethod=False, transform_recv_result=UtilityConduitMixin._to_serialize_list)

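The new send_get_attachment_data()/recv_get_attachment_data() pair streams attachment bodies between pods instead of buffering them in memory. A hypothetical caller sketch; every name except the conduit method itself is assumed:

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def fetchRemoteAttachment(conduit, home, attachment_id, output):
        # "output" is a write stream (for example an open file object); the
        # conduit writes the remote attachment body to it as the response
        # arrives, and the remote pod answers with (content-type, name).
        result = yield conduit.send_get_attachment_data(home, attachment_id, output)
        returnValue(result)
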
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/conduit.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/conduit.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/conduit.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -19,19 +19,25 @@
 from txdav.common.idirectoryservice import DirectoryRecordNotFoundError
 from txdav.common.datastore.podding.attachments import AttachmentsConduitMixin
 from txdav.common.datastore.podding.base import FailedCrossPodRequestError
-from txdav.common.datastore.podding.directory import DirectoryPoddingConduitMixin
+from txdav.common.datastore.podding.directory import (
+    DirectoryPoddingConduitMixin
+)
+from txdav.common.datastore.podding.request import ConduitRequest
+from txdav.common.datastore.podding.sharing_invites import (
+    SharingInvitesConduitMixin
+)
 from txdav.common.datastore.podding.store_api import StoreAPIConduitMixin
-from txdav.common.datastore.podding.request import ConduitRequest
-from txdav.common.datastore.podding.sharing_invites import SharingInvitesConduitMixin
+from txdav.common.datastore.podding.util import UtilityConduitMixin
 
 from twisted.internet.defer import inlineCallbacks, returnValue
 from twisted.python.reflect import namedClass
 
-
 log = Logger()
 
 
+
 class PoddingConduit(
+    UtilityConduitMixin,
     StoreAPIConduitMixin,
     AttachmentsConduitMixin,
     SharingInvitesConduitMixin,
@@ -40,29 +46,33 @@
     """
     This class is the API/RPC bridge between cross-pod requests and the store.
 
-    Each cross-pod request/response is described by a Python C{dict} that is serialized
-    to JSON for the HTTP request/response.
+    Each cross-pod request/response is described by a Python C{dict} that is
+    serialized to JSON for the HTTP request/response.
 
-    Each request C{dict} has an "action" key that indicates what call is being made, and
-    the other keys are arguments to that call.
+    Each request C{dict} has an "action" key that indicates what call is being
+    made, and the other keys are arguments to that call.
 
-    Each response C{dict} has a "result" key that indicates the call result, and other
-    optional keys for any values returned by the call.
+    Each response C{dict} has a "result" key that indicates the call result,
+    and other optional keys for any values returned by the call.
 
-    The conduit provides two methods for each action: one for the sending side and one for
-    the receiving side, called "send_{action}" and "recv_{action}", respectively, where
-    {action} is the action value.
+    The conduit provides two methods for each action: one for the sending side
+    and one for the receiving side, called "send_{action}" and "recv_{action}",
+    respectively, where {action} is the action value.
 
-    The "send_{action}" calls each have a set of arguments specific to the call itself. The
-    code takes care of packing that into a C{dict} and sending to the appropriate pod.
+    The "send_{action}" calls each have a set of arguments specific to the call
+    itself.
+    The code takes care of packing that into a C{dict} and sending to the
+    appropriate pod.
 
-    The "recv_{action}" calls take a single C{dict} argument that is the deserialized JSON
-    data from the incoming request. The return value is a C{dict} with the result.
+    The "recv_{action}" calls take a single C{dict} argument that is the
+    deserialized JSON data from the incoming request.
+    The return value is a C{dict} with the result.
 
-    Some simple forms of send_/recv_ methods can be auto-generated to simplify coding.
+    Some simple forms of send_/recv_ methods can be auto-generated to simplify
+    coding.
 
-    Actual implementations of this will be done via mix-ins for the different sub-systems using
-    the conduit.
+    Actual implementations of this will be done via mix-ins for the different
+    sub-systems using the conduit.
     """
 
     conduitRequestClass = ConduitRequest
@@ -72,6 +82,7 @@
         @param store: the L{CommonDataStore} in use.
         """
         self.store = store
+        self.streamingActions = ("get-attachment-data",)
 
 
     @inlineCallbacks
@@ -80,9 +91,12 @@
         Verify that the specified uids are valid for the request and return the
         matching directory records.
 
-        @param source_uid: UID for the user on whose behalf the request is being made
+        @param source_uid: UID for the user on whose behalf the request is
+            being made.
         @type source_uid: C{str}
-        @param destination_uid: UID for the user to whom the request is being sent
+
+        @param destination_uid: UID for the user to whom the request is being
+            sent.
         @type destination_uid: C{str}
 
         @return: L{Deferred} resulting in C{tuple} of L{IStoreDirectoryRecord}
@@ -90,39 +104,84 @@
 
         source = yield self.store.directoryService().recordWithUID(source_uid)
         if source is None:
-            raise DirectoryRecordNotFoundError("Cross-pod source: {}".format(source_uid))
+            raise DirectoryRecordNotFoundError(
+                "Cross-pod source: {}".format(source_uid)
+            )
         if not source.thisServer():
-            raise FailedCrossPodRequestError("Cross-pod source not on this server: {}".format(source_uid))
+            raise FailedCrossPodRequestError(
+                "Cross-pod source not on this server: {}".format(source_uid)
+            )
 
-        destination = yield self.store.directoryService().recordWithUID(destination_uid)
+        destination = yield self.store.directoryService().recordWithUID(
+            destination_uid
+        )
         if destination is None:
-            raise DirectoryRecordNotFoundError("Cross-pod destination: {}".format(destination_uid))
+            raise DirectoryRecordNotFoundError(
+                "Cross-pod destination: {}".format(destination_uid)
+            )
         if destination.thisServer():
-            raise FailedCrossPodRequestError("Cross-pod destination on this server: {}".format(destination_uid))
+            raise FailedCrossPodRequestError(
+                "Cross-pod destination on this server: {}"
+                .format(destination_uid)
+            )
 
         returnValue((source, destination,))
 
 
     def sendRequest(self, txn, recipient, data, stream=None, streamType=None):
-        return self.sendRequestToServer(txn, recipient.server(), data, stream, streamType)
+        return self.sendRequestToServer(
+            txn, recipient.server(), data, stream, streamType
+        )
 
 
     @inlineCallbacks
-    def sendRequestToServer(self, txn, server, data, stream=None, streamType=None):
+    def sendRequestToServer(
+        self, txn, server, data, stream=None, streamType=None, writeStream=None
+    ):
+        request = self.conduitRequestClass(
+            server, data, stream, streamType, writeStream
+        )
 
-        request = self.conduitRequestClass(server, data, stream, streamType)
         try:
             response = (yield request.doRequest(txn))
         except Exception as e:
-            raise FailedCrossPodRequestError("Failed cross-pod request: {}".format(e))
+            raise FailedCrossPodRequestError(
+                "Failed cross-pod request: {}".format(e)
+            )
+
         if response["result"] == "exception":
             raise namedClass(response["class"])(response["details"])
         elif response["result"] != "ok":
-            raise FailedCrossPodRequestError("Cross-pod request failed: {}".format(response))
+            raise FailedCrossPodRequestError(
+                "Cross-pod request failed: {}".format(response)
+            )
         else:
             returnValue(response.get("value"))
 
 
+    def isStreamAction(self, data):
+        """
+        Check to see if this is a request that will return a data stream rather
+        than a JSON response.
+        e.g., this is used to retrieve attachment data on another pod.
+
+        @param data: the JSON data to process
+        @type data: C{dict}
+        """
+        # Must have a dict with an "action" key
+        try:
+            action = data["action"]
+        except (KeyError, TypeError) as e:
+            log.error(
+                "JSON data must have an object as its root with an "
+                "'action' attribute: {error}\n{json}",
+                error=e, json=data
+            )
+            return False
+
+        return action in self.streamingActions
+
+
     @inlineCallbacks
     def processRequest(self, data):
         """
@@ -135,8 +194,16 @@
         try:
             action = data["action"]
         except (KeyError, TypeError) as e:
-            log.error("JSON data must have an object as its root with an 'action' attribute: {ex}\n{json}", ex=e, json=data)
-            raise FailedCrossPodRequestError("JSON data must have an object as its root with an 'action' attribute: {}\n{}".format(e, data,))
+            log.error(
+                "JSON data must have an object as its root with an "
+                "'action' attribute: {error}\n{json}",
+                error=e, json=data
+            )
+            raise FailedCrossPodRequestError(
+                "JSON data must have an object as its root with an 'action' "
+                "attribute: {}\n{}"
+                .format(e, data,)
+            )
 
         if action == "ping":
             result = {"result": "ok"}
@@ -145,7 +212,9 @@
         method = "recv_{}".format(action.replace("-", "_"))
         if not hasattr(self, method):
             log.error("Unsupported action: {action}", action=action)
-            raise FailedCrossPodRequestError("Unsupported action: {}".format(action))
+            raise FailedCrossPodRequestError(
+                "Unsupported action: {}".format(action)
+            )
 
         # Need a transaction to work with
         txn = self.store.newTransaction(repr("Conduit request"))
@@ -160,10 +229,15 @@
         except Exception as e:
             # Send the exception over to the other side
             yield txn.abort()
-            log.error("Failed action: {action}, {ex}", action=action, ex=e)
+            log.error(
+                "Failed action: {action}, {error}", action=action, error=e
+            )
             result = {
                 "result": "exception",
-                "class": ".".join((e.__class__.__module__, e.__class__.__name__,)),
+                "class": ".".join((
+                    e.__class__.__module__,
+                    e.__class__.__name__,
+                )),
                 "details": str(e),
             }
 
@@ -171,3 +245,62 @@
             yield txn.commit()
 
         returnValue(result)
+
+
+    @inlineCallbacks
+    def processRequestStream(self, data, stream):
+        """
+        Process the request.
+
+        @param data: the JSON data to process
+        @type data: C{dict}
+
+        @return: a L{tuple} of content-type and name, if successful, else a
+            L{dict} for a JSON result
+        @rtype: L{tuple} of (L{str}, L{str}), or L{dict}
+        """
+        # Must have a dict with an "action" key
+        try:
+            action = data["action"]
+        except (KeyError, TypeError) as e:
+            log.error(
+                "JSON data must have an object as its root with an "
+                "'action' attribute: {error}\n{json}",
+                error=e, json=data
+            )
+            raise FailedCrossPodRequestError(
+                "JSON data must have an object as its root with an "
+                "'action' attribute: {}\n{}".format(e, data)
+            )
+
+        method = "recv_{}".format(action.replace("-", "_"))
+        if not hasattr(self, method):
+            log.error("Unsupported action: {action}", action=action)
+            raise FailedCrossPodRequestError(
+                "Unsupported action: {}".format(action)
+            )
+
+        # Need a transaction to work with
+        txn = self.store.newTransaction(repr("Conduit request"))
+
+        # Do the actual request processing
+        try:
+            result = (yield getattr(self, method)(txn, data, stream))
+        except Exception as e:
+            # Send the exception over to the other side
+            yield txn.abort()
+            log.error(
+                "Failed action: {action}, {error}", action=action, error=e
+            )
+            result = {
+                "result": "exception",
+                "class": ".".join((
+                    e.__class__.__module__, e.__class__.__name__,
+                )),
+                "details": str(e),
+            }
+
+        else:
+            yield txn.commit()
+
+        returnValue(result)

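To make the send_{action}/recv_{action} convention described in the PoddingConduit docstring concrete, a minimal, hypothetical mix-in for a made-up "echo-uid" action might look like this (not part of this change):

    from twisted.internet.defer import inlineCallbacks, returnValue

    class EchoConduitMixin(object):

        @inlineCallbacks
        def send_echo_uid(self, txn, server, uid):
            # The sending side packs its arguments into a dict; "action"
            # selects the matching recv_ method on the remote pod.
            request = {
                "action": "echo-uid",
                "uid": uid,
            }
            response = yield self.sendRequestToServer(txn, server, request)
            returnValue(response)

        def recv_echo_uid(self, txn, request):
            # The receiving side gets the deserialized request dict; its
            # return value travels back as the "value" of an "ok" response.
            return request["uid"].upper()
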
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/directory.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/directory.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/directory.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -191,3 +191,126 @@
         delegators = yield Delegates._delegatedToUIDs(txn, delegate, request["read-write"], onlyThisServer=True)
 
         returnValue(list(delegators))
+
+
+    @inlineCallbacks
+    def send_dump_individual_delegates(self, txn, delegator):
+        """
+        Get L{DelegateRecords} from another pod.
+
+        @param txn: transaction to use
+        @type txn: L{CommonStoreTransaction}
+        @param delegator: delegator whose individual delegates are to be returned
+        @type delegator: L{DirectoryRecord}
+        """
+        if delegator.thisServer():
+            raise FailedCrossPodRequestError("Cross-pod destination on this server: {}".format(delegator.uid))
+
+        request = {
+            "action": "dump-individual-delegates",
+            "uid": delegator.uid,
+        }
+        response = yield self.sendRequestToServer(txn, delegator.server(), request)
+        returnValue(response)
+
+
+    @inlineCallbacks
+    def recv_dump_individual_delegates(self, txn, request):
+        """
+        Process a dump-individual-delegates cross-pod request. Request arguments as per L{send_dump_individual_delegates}.
+
+        @param request: request arguments
+        @type request: C{dict}
+        """
+
+        delegator = yield txn.directoryService().recordWithUID(request["uid"])
+        if delegator is None or not delegator.thisServer():
+            raise FailedCrossPodRequestError("Cross-pod delegate missing or on this server: {}".format(delegator.uid))
+
+        delegates = yield txn.dumpIndividualDelegatesLocal(delegator.uid)
+
+        returnValue(self._to_serialize_list(delegates))
+
+
+    @inlineCallbacks
+    def send_dump_group_delegates(self, txn, delegator):
+        """
+        Get L{DelegateGroupsRecord}, L{GroupsRecord} pairs from another pod.
+
+        @param txn: transaction to use
+        @type txn: L{CommonStoreTransaction}
+        @param delegator: delegator whose group delegates are to be returned
+        @type delegator: L{DirectoryRecord}
+        """
+        if delegator.thisServer():
+            raise FailedCrossPodRequestError("Cross-pod destination on this server: {}".format(delegator.uid))
+
+        request = {
+            "action": "dump-group-delegates",
+            "uid": delegator.uid,
+        }
+        response = yield self.sendRequestToServer(txn, delegator.server(), request)
+        returnValue(response)
+
+
+    @inlineCallbacks
+    def recv_dump_group_delegates(self, txn, request):
+        """
+        Process a dump-group-delegates cross-pod request. Request arguments as per L{send_dump_group_delegates}.
+
+        @param request: request arguments
+        @type request: C{dict}
+        """
+
+        delegator = yield txn.directoryService().recordWithUID(request["uid"])
+        if delegator is None or not delegator.thisServer():
+            raise FailedCrossPodRequestError("Cross-pod delegate missing or on this server: {}".format(delegator.uid))
+
+        results = yield txn.dumpGroupDelegatesLocal(delegator.uid)
+
+        returnValue([[delegator_record.serialize(), group_record.serialize()] for delegator_record, group_record in results])
+
+
+    @inlineCallbacks
+    def send_dump_external_delegates(self, txn, delegator):
+        """
+        Get L{ExternalDelegateGroupsRecord} from another pod.
+
+        @param txn: transaction to use
+        @type txn: L{CommonStoreTransaction}
+        @param delegator: delegator whose external delegate assignments are to be returned
+        @type delegator: L{DirectoryRecord}
+        """
+        if delegator.thisServer():
+            raise FailedCrossPodRequestError("Cross-pod destination on this server: {}".format(delegator.uid))
+
+        request = {
+            "action": "dump-external-delegates",
+            "uid": delegator.uid,
+        }
+        response = yield self.sendRequestToServer(txn, delegator.server(), request)
+        returnValue(response)
+
+
+    @inlineCallbacks
+    def recv_dump_external_delegates(self, txn, request):
+        """
+        Process a dump-external-delegates cross-pod request. Request arguments as per L{send_dump_external_delegates}.
+
+        @param request: request arguments
+        @type request: C{dict}
+        """
+
+        delegator = yield txn.directoryService().recordWithUID(request["uid"])
+        if delegator is None or not delegator.thisServer():
+            raise FailedCrossPodRequestError("Cross-pod delegate missing or on this server: {}".format(delegator.uid))
+
+        delegates = yield txn.dumpExternalDelegatesLocal(delegator.uid)
+
+        returnValue(self._to_serialize_list(delegates))

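All three dump-*-delegates actions follow the same request shape; a hypothetical caller for the individual-delegates case could look like this (names other than the conduit method are assumed):

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def remoteIndividualDelegates(conduit, txn, delegator):
        # "delegator" must be a directory record hosted on the other pod;
        # the remote pod replies with its serialized delegate records.
        records = yield conduit.send_dump_individual_delegates(txn, delegator)
        returnValue(records)
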
Deleted: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/__init__.py
===================================================================
--- CalendarServer/trunk/txdav/common/datastore/podding/migration/__init__.py	2015-03-10 15:32:00 UTC (rev 14551)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/__init__.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -1,15 +0,0 @@
-##
-# Copyright (c) 2015 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/__init__.py (from rev 14551, CalendarServer/trunk/txdav/common/datastore/podding/migration/__init__.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/__init__.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/__init__.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,15 @@
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##

Deleted: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/home_sync.py
===================================================================
--- CalendarServer/trunk/txdav/common/datastore/podding/migration/home_sync.py	2015-03-10 15:32:00 UTC (rev 14551)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/home_sync.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -1,1356 +0,0 @@
-##
-# Copyright (c) 2015 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-from functools import wraps
-
-from twext.python.log import Logger
-from twisted.internet.defer import returnValue, inlineCallbacks
-from twisted.python.failure import Failure
-from twistedcaldav.accounting import emitAccounting
-from txdav.caldav.icalendarstore import ComponentUpdateState
-from txdav.common.datastore.podding.migration.sync_metadata import CalendarMigrationRecord, \
-    CalendarObjectMigrationRecord, AttachmentMigrationRecord
-from txdav.caldav.datastore.sql import ManagedAttachment, CalendarBindRecord
-from txdav.common.datastore.sql_external import NotificationCollectionExternal
-from txdav.common.datastore.sql_notification import NotificationCollection
-from txdav.common.datastore.sql_tables import _HOME_STATUS_MIGRATING, _HOME_STATUS_DISABLED, \
-    _HOME_STATUS_EXTERNAL, _HOME_STATUS_NORMAL
-from txdav.common.idirectoryservice import DirectoryRecordNotFoundError
-
-from uuid import uuid4
-import datetime
-
-log = Logger()
-
-ACCOUNTING_TYPE = "migration"
-ACCOUNTING_LOG = "migration.log"
-
-def inTransactionWrapper(operation):
-    """
-    This wrapper converts an instance method that takes a transaction as its
-    first parameter into one where the transaction parameter is an optional
-    keyword argument. If the keyword argument is present and not None, then
-    the instance method is called with that keyword as the first positional
-    argument (i.e., almost a NoOp). If the keyword argument is not present,
-    then a new transaction is created and the instance method is called with
-    it as the first positional argument; the call is wrapped in
-    try/except/else so that the internally created transaction is properly
-    committed or aborted.
-
-    So this wrapper allows for a method that requires a transaction to be run
-    with either an existing transaction or one created just for the purpose
-    of running it.
-
-    @param operation: a callable that takes an L{IAsyncTransaction} as its first
-        argument, and returns a value.
-    """
-
-    @wraps(operation)
-    @inlineCallbacks
-    def _inTxn(self, *args, **kwargs):
-        label = self.label(operation.__name__)
-        if "txn" in kwargs:
-            txn = kwargs["txn"]
-            del kwargs["txn"]
-            result = yield operation(self, txn, *args, **kwargs)
-            returnValue(result)
-        else:
-            txn = self.store.newTransaction(label=label)
-            try:
-                result = yield operation(self, txn, *args, **kwargs)
-            except Exception as ex:
-                f = Failure()
-                yield txn.abort()
-                log.error("{label} failed: {e}".format(label=label, e=str(ex)))
-                returnValue(f)
-            else:
-                yield txn.commit()
-                returnValue(result)
-
-    return _inTxn
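
A usage sketch for illustration only, not part of this changeset: "syncer" is assumed to be
a CrossPodHomeSync instance (defined below), "txn" an already-open store transaction, and
getSyncState() one of the methods decorated with @inTransactionWrapper.

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def example(syncer, txn):
        # Reuse an existing transaction: the wrapper pops the "txn" keyword and
        # passes it as the first positional argument; commit/abort stays with
        # the caller.
        state = yield syncer.getSyncState(txn=txn)

        # No transaction supplied: the wrapper opens one (labelled via
        # self.label()), commits it on success, and aborts it on failure,
        # returning the Failure instead of raising.
        state = yield syncer.getSyncState()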
-
-
-
-# Cross-pod synchronization of an entire calendar home
-class CrossPodHomeSync(object):
-
-    BATCH_SIZE = 50
-
-    def __init__(self, store, diruid, final=False, uselog=None):
-        """
-        @param store: the data store
-        @type store: L{CommonDataStore}
-        @param diruid: directory uid of the user whose home is to be sync'd
-        @type diruid: L{str}
-        @param final: indicates whether this is in the final sync stage with the remote home
-            already disabled
-        @type final: L{bool}
-        @param uselog: additional logging written to this object
-        @type uselog: L{File}
-        """
-
-        self.store = store
-        self.diruid = diruid
-        self.disabledRemote = final
-        self.uselog = uselog
-        self.record = None
-        self.homeId = None
-
-
-    def label(self, detail):
-        return "Cross-pod Migration Sync for {}: {}".format(self.diruid, detail)
-
-
-    def accounting(self, logstr):
-        emitAccounting(ACCOUNTING_TYPE, self.record, "{} {}\n".format(datetime.datetime.now().isoformat(), logstr), filename=ACCOUNTING_LOG)
-        if self.uselog is not None:
-            self.uselog.write("CrossPodHomeSync: {}\n".format(logstr))
-
-
-    @inlineCallbacks
-    def migrateHere(self):
-        """
-        This is a full, serialized version of a data migration (minus any directory
-        update) that can be triggered via a command line tool. It is designed to
-        minimize down time for the migrating user.
-        """
-
-        # Step 1 - initial full sync
-        yield self.sync()
-
-        # Step 2 - incremental sync (since the initial sync may take a long time
-        # to run we should do one incremental sync before bringing down the
-        # account being migrated)
-        yield self.sync()
-
-        # Step 3 - disable remote home
-        # NB Any failure from this point on will need to be caught and
-        # handled by re-enabling the old home (and fixing any sharing state
-        # that may have been changed)
-        yield self.disableRemoteHome()
-
-        # Step 4 - final incremental sync
-        yield self.sync()
-
-        # Step 5 - final overall sync of meta-data (including sharing re-linking)
-        yield self.finalSync()
-
-        # Step 6 - enable new home
-        yield self.enableLocalHome()
-
-        # Step 7 - remove remote home
-        yield self.removeRemoteHome()
-
-        # Step 8 - say phew! TODO: Actually alert everyone else
-        pass
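
A minimal driver sketch for illustration only, not part of this changeset: it assumes an
already-opened CommonDataStore "store" and the directory UID "uid" of the user being moved
to this pod, matching the __init__ signature above.

    import sys
    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def migrateUser(store, uid):
        # Run the full serialized migration described in the steps above,
        # echoing the per-step accounting lines to stdout as well.
        syncer = CrossPodHomeSync(store, uid, uselog=sys.stdout)
        yield syncer.migrateHere()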
-
-
-    @inlineCallbacks
-    def sync(self):
-        """
-        Initiate a sync of the home. This is a simple data sync that does not
-        reconcile sharing state etc. The L{finalSync} method will do a full
-        sharing reconcile as well as disable the migration source home.
-        """
-
-        yield self.loadRecord()
-        self.accounting("Starting: sync...")
-        yield self.prepareCalendarHome()
-
-        # Calendar list and calendar data
-        yield self.syncCalendarList()
-
-        # Sync home metadata such as alarms, default calendars, etc
-        yield self.syncCalendarHomeMetaData()
-
-        # Sync attachments
-        yield self.syncAttachments()
-
-        self.accounting("Completed: sync.\n")
-
-
-    @inlineCallbacks
-    def finalSync(self):
-        """
-        Do the final sync up of any additional data, re-link sharing bind
-        rows, recalculate quota etc.
-        """
-
-        yield self.loadRecord()
-        self.accounting("Starting: finalSync...")
-        yield self.prepareCalendarHome()
-
-        # Link attachments to resources: ATTACHMENT_CALENDAR_OBJECT table
-        yield self.linkAttachments()
-
-        # TODO: Re-write attachment URIs - not sure if we need this as reverse proxy may take care of it
-        pass
-
-        # Group attendee reconcile
-        yield self.groupAttendeeReconcile()
-
-        # Delegates reconcile
-        yield self.delegateReconcile()
-
-        # Shared collections reconcile (including group sharees)
-        yield self.sharedByCollectionsReconcile()
-        yield self.sharedToCollectionsReconcile()
-
-        # Notifications
-        yield self.notificationsReconcile()
-
-        # TODO: work items
-        pass
-
-        self.accounting("Completed: finalSync.\n")
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def disableRemoteHome(self, txn):
-        """
-        Mark the remote home as disabled.
-        """
-
-        yield self.loadRecord()
-        self.accounting("Starting: disableRemoteHome...")
-        yield self.prepareCalendarHome()
-
-        # Calendar home
-        remote_home = yield self._remoteHome(txn)
-        yield remote_home.setStatus(_HOME_STATUS_DISABLED)
-
-        # Notification home
-        notifications = yield self._remoteNotificationsHome(txn)
-        yield notifications.setStatus(_HOME_STATUS_DISABLED)
-
-        self.disabledRemote = True
-
-        self.accounting("Completed: disableRemoteHome.\n")
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def enableLocalHome(self, txn):
-        """
-        Mark the local home as enabled and remove any previously existing external home.
-        """
-
-        yield self.loadRecord()
-        self.accounting("Starting: enableLocalHome...")
-        yield self.prepareCalendarHome()
-
-        # Disable any local external homes
-        oldhome = yield txn.calendarHomeWithUID(self.diruid, status=_HOME_STATUS_EXTERNAL)
-        if oldhome is not None:
-            yield oldhome.setLocalStatus(_HOME_STATUS_DISABLED)
-        oldnotifications = yield txn.notificationsWithUID(self.diruid, status=_HOME_STATUS_EXTERNAL)
-        if oldnotifications:
-            yield oldnotifications.setLocalStatus(_HOME_STATUS_DISABLED)
-
-        # Enable the migrating ones
-        newhome = yield txn.calendarHomeWithUID(self.diruid, status=_HOME_STATUS_MIGRATING)
-        if newhome is not None:
-            yield newhome.setStatus(_HOME_STATUS_NORMAL)
-        newnotifications = yield txn.notificationsWithUID(self.diruid, status=_HOME_STATUS_MIGRATING)
-        if newnotifications:
-            yield newnotifications.setStatus(_HOME_STATUS_NORMAL)
-
-        # TODO: remove migration state
-        pass
-
-        # TODO: purge the old ones
-        pass
-
-        self.accounting("Completed: enableLocalHome.\n")
-
-
-    @inlineCallbacks
-    def removeRemoteHome(self):
-        """
-        Remove all the old data on the remote pod.
-        """
-
-        # TODO: implement API on CommonHome to purge the old data without
-        # any side-effects (scheduling, sharing etc).
-        yield self.loadRecord()
-        self.accounting("Starting: removeRemoteHome...")
-        yield self.prepareCalendarHome()
-
-        self.accounting("Completed: removeRemoteHome.\n")
-
-
-    @inlineCallbacks
-    def loadRecord(self):
-        """
-        Initiate a sync of the home.
-        """
-
-        if self.record is None:
-            self.record = yield self.store.directoryService().recordWithUID(self.diruid)
-            if self.record is None:
-                raise DirectoryRecordNotFoundError("Cross-pod Migration Sync missing directory record for {}".format(self.diruid))
-            if self.record.thisServer():
-                raise ValueError("Cross-pod Migration Sync cannot sync with user already on this server: {}".format(self.diruid))
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def prepareCalendarHome(self, txn):
-        """
-        Make sure the inactive home to migrate into is present on this pod.
-        """
-
-        if self.homeId is None:
-            home = yield self._localHome(txn)
-            if home is None:
-                if self.disabledRemote:
-                    self.homeId = None
-                else:
-                    home = yield txn.calendarHomeWithUID(self.diruid, status=_HOME_STATUS_MIGRATING, create=True)
-                    self.accounting("  Created new home collection to migrate into.")
-            self.homeId = home.id() if home is not None else None
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def syncCalendarHomeMetaData(self, txn):
-        """
-        Make sure the home meta-data (alarms, default calendars) is properly sync'd
-        """
-
-        self.accounting("Starting: syncCalendarHomeMetaData...")
-        remote_home = yield self._remoteHome(txn)
-        yield remote_home.readMetaData()
-
-        calendars = yield CalendarMigrationRecord.querysimple(txn, calendarHomeResourceID=self.homeId)
-        calendarIDMap = dict((item.remoteResourceID, item.localResourceID) for item in calendars)
-
-        local_home = yield self._localHome(txn)
-        yield local_home.copyMetadata(remote_home, calendarIDMap)
-
-        self.accounting("Completed: syncCalendarHomeMetaData.")
-
-
-    @inlineCallbacks
-    def _remoteHome(self, txn):
-        """
-        Create a synthetic external home object that maps to the actual remote home.
-        """
-
-        from txdav.caldav.datastore.sql_external import CalendarHomeExternal
-        resourceID = yield txn.store().conduit.send_home_resource_id(txn, self.record, migrating=True)
-        home = CalendarHomeExternal.makeSyntheticExternalHome(txn, self.record.uid, resourceID) if resourceID is not None else None
-        if self.disabledRemote:
-            home._migratingHome = True
-        returnValue(home)
-
-
-    @inlineCallbacks
-    def _remoteNotificationsHome(self, txn):
-        """
-        Create a synthetic external home object that maps to the actual remote home.
-        """
-
-        notifications = yield NotificationCollectionExternal.notificationsWithUID(txn, self.diruid, create=True)
-        if self.disabledRemote:
-            notifications._migratingHome = True
-        returnValue(notifications)
-
-
-    def _localHome(self, txn):
-        """
-        Get the home on this pod that will have data migrated to it.
-        """
-
-        return txn.calendarHomeWithUID(self.diruid, status=_HOME_STATUS_MIGRATING)
-
-
-    @inlineCallbacks
-    def syncCalendarList(self):
-        """
-        Synchronize each owned calendar.
-        """
-
-        self.accounting("Starting: syncCalendarList...")
-
-        # Remote sync details
-        remote_sync_state = yield self.getCalendarSyncList()
-        self.accounting("  Found {} remote calendars to sync.".format(len(remote_sync_state)))
-
-        # Get local sync details from local DB
-        local_sync_state = yield self.getSyncState()
-        self.accounting("  Found {} local calendars to sync.".format(len(local_sync_state)))
-
-        # Remove local calendars no longer on the remote side
-        yield self.purgeLocal(local_sync_state, remote_sync_state)
-
-        # Sync each calendar that matches on both sides
-        for remoteID in remote_sync_state.keys():
-            yield self.syncCalendar(remoteID, local_sync_state, remote_sync_state)
-
-        self.accounting("Completed: syncCalendarList.")
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def getCalendarSyncList(self, txn):
-        """
-        Get the names and sync-tokens for each remote owned calendar.
-        """
-
-        # List of calendars from the remote side
-        home = yield self._remoteHome(txn)
-        if home is None:
-            returnValue(None)
-        calendars = yield home.loadChildren()
-        results = {}
-        for calendar in calendars:
-            if calendar.owned():
-                sync_token = yield calendar.syncToken()
-                results[calendar.id()] = CalendarMigrationRecord.make(
-                    calendarHomeResourceID=home.id(),
-                    remoteResourceID=calendar.id(),
-                    localResourceID=0,
-                    lastSyncToken=sync_token,
-                )
-
-        returnValue(results)
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def getSyncState(self, txn):
-        """
-        Get local synchronization state for the home being migrated.
-        """
-        records = yield CalendarMigrationRecord.querysimple(
-            txn, calendarHomeResourceID=self.homeId
-        )
-        returnValue(dict([(record.remoteResourceID, record) for record in records]))
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def updateSyncState(self, txn, stateRecord, newSyncToken):
-        """
-        Update or insert an L{CalendarMigrationRecord} with the new specified sync token.
-        """
-        if stateRecord.isnew():
-            stateRecord.lastSyncToken = newSyncToken
-            yield stateRecord.insert(txn)
-        else:
-            # The existing stateRecord has a stale txn, but valid column values. We have
-            # to duplicate it before we can give it a different txn.
-            stateRecord = stateRecord.duplicate()
-            stateRecord.transaction = txn
-            yield stateRecord.update(lastSyncToken=newSyncToken)
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def purgeLocal(self, txn, local_sync_state, remote_sync_state):
-        """
-        Remove (silently - i.e., no scheduling) local calendars that are no longer on the remote side.
-
-        @param txn: transaction to use
-        @type txn: L{CommonStoreTransaction}
-        @param local_sync_state: local sync state
-        @type local_sync_state: L{dict}
-        @param remote_sync_state: remote sync state
-        @type remote_sync_state: L{dict}
-        """
-        home = yield self._localHome(txn)
-        for localID in set(local_sync_state.keys()) - set(remote_sync_state.keys()):
-            calendar = yield home.childWithID(local_sync_state[localID].localResourceID)
-            if calendar is not None:
-                yield calendar.purge()
-            del local_sync_state[localID]
-            self.accounting("  Purged calendar local-id={} that no longer exists on the remote pod.".format(localID))
-
-
-    @inlineCallbacks
-    def syncCalendar(self, remoteID, local_sync_state, remote_sync_state):
-        """
-        Sync the contents of a calendar from the remote side. The local calendar may need to be created
-        on initial sync. Make use of sync tokens to avoid unnecessary work.
-
-        @param remoteID: id of the remote calendar to sync
-        @type remoteID: L{int}
-        @param local_sync_state: local sync state
-        @type local_sync_state: L{dict}
-        @param remote_sync_state: remote sync state
-        @type remote_sync_state: L{dict}
-        """
-
-        self.accounting("Starting: syncCalendar.")
-
-        # See if we need to create the local one first
-        if remoteID not in local_sync_state:
-            localID = yield self.newCalendar()
-            local_sync_state[remoteID] = CalendarMigrationRecord.make(
-                calendarHomeResourceID=self.homeId,
-                remoteResourceID=remoteID,
-                localResourceID=localID,
-                lastSyncToken=None,
-            )
-            self.accounting("  Created new calendar local-id={}, remote-id={}.".format(localID, remoteID))
-        else:
-            localID = local_sync_state.get(remoteID).localResourceID
-            self.accounting("  Updating calendar local-id={}, remote-id={}.".format(localID, remoteID))
-        local_record = local_sync_state.get(remoteID)
-
-        remote_token = remote_sync_state[remoteID].lastSyncToken
-        if local_record.lastSyncToken != remote_token:
-            # Sync meta-data such as name, alarms, supported-components, transp, etc
-            yield self.syncCalendarMetaData(local_record)
-
-            # Sync object resources
-            changed, removed = yield self.findObjectsToSync(local_record)
-            self.accounting("  Calendar objects changed={}, removed={}.".format(len(changed), len(removed)))
-            yield self.purgeDeletedObjectsInBatches(local_record, removed)
-            yield self.updateChangedObjectsInBatches(local_record, changed)
-
-        yield self.updateSyncState(local_record, remote_token)
-        self.accounting("Completed: syncCalendar.")
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def newCalendar(self, txn):
-        """
-        Create a new local calendar to sync remote data to. We don't care about the name
-        of the calendar right now - it will be sync'd later.
-        """
-
-        home = yield self._localHome(txn)
-        calendar = yield home.createChildWithName(str(uuid4()))
-        returnValue(calendar.id())
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def syncCalendarMetaData(self, txn, migrationRecord):
-        """
-        Sync the metadata of a calendar from the remote side.
-
-        @param migrationRecord: current migration record
-        @type migrationRecord: L{CalendarMigrationRecord}
-        """
-
-        # Remote changes
-        remote_home = yield self._remoteHome(txn)
-        remote_calendar = yield remote_home.childWithID(migrationRecord.remoteResourceID)
-        if remote_calendar is None:
-            returnValue(None)
-
-        # Get the local calendar and copy the remote meta-data to it
-        local_home = yield self._localHome(txn)
-        local_calendar = yield local_home.childWithID(migrationRecord.localResourceID)
-        yield local_calendar.copyMetadata(remote_calendar)
-        self.accounting("  Copied calendar meta-data for calendar local-id={0.localResourceID}, remote-id={0.remoteResourceID}.".format(migrationRecord))
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def findObjectsToSync(self, txn, migrationRecord):
-        """
-        Find the set of object resources that need to be sync'd from the remote
-        side and the set that need to be removed locally. Take into account the
-        possibility that this is a partial sync and removals or additions might
-        be false positives.
-
-        @param migrationRecord: current migration record
-        @type migrationRecord: L{CalendarMigrationRecord}
-        """
-
-        # Remote changes
-        remote_home = yield self._remoteHome(txn)
-        remote_calendar = yield remote_home.childWithID(migrationRecord.remoteResourceID)
-        if remote_calendar is None:
-            returnValue(None)
-        changed, deleted, _ignore_invalid = yield remote_calendar.resourceNamesSinceToken(migrationRecord.lastSyncToken)
-
-        # Get the local calendar so the changed set can be filtered against it
-        local_home = yield self._localHome(txn)
-        local_calendar = yield local_home.childWithID(migrationRecord.localResourceID)
-
-        # Check the md5's on each changed remote with the local one to filter out ones
-        # we don't actually need to sync
-        remote_changes = yield remote_calendar.objectResourcesWithNames(changed)
-        remote_changes = dict([(calendar.name(), calendar) for calendar in remote_changes])
-
-        local_changes = yield local_calendar.objectResourcesWithNames(changed)
-        local_changes = dict([(calendar.name(), calendar) for calendar in local_changes])
-
-        actual_changes = []
-        for name, calendar in remote_changes.items():
-            if name not in local_changes or remote_changes[name].md5() != local_changes[name].md5():
-                actual_changes.append(name)
-
-        returnValue((actual_changes, deleted,))
-
-
-    @inlineCallbacks
-    def purgeDeletedObjectsInBatches(self, migrationRecord, deleted):
-        """
-        Purge (silently remove) the specified object resources. This needs to
-        succeed in the case where some or all resources have already been deleted.
-        Do this in batches to keep transaction times small.
-
-        @param migrationRecord: local calendar migration record
-        @type migrationRecord: L{CalendarMigrationRecord}
-        @param deleted: list of names to purge
-        @type deleted: L{list} of L{str}
-        """
-
-        remaining = list(deleted)
-        while remaining:
-            yield self.purgeBatch(migrationRecord.localResourceID, remaining[:self.BATCH_SIZE])
-            del remaining[:self.BATCH_SIZE]
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def purgeBatch(self, txn, localID, purge_names):
-        """
-        Purge a bunch of object resources from the specified calendar.
-
-        @param txn: transaction to use
-        @type txn: L{CommonStoreTransaction}
-        @param localID: id of the local calendar to sync
-        @type localID: L{int}
-        @param purge_names: object resource names to purge
-        @type purge_names: L{list} of L{str}
-        """
-
-        # Load the local objects that are to be purged
-        local_home = yield self._localHome(txn)
-        local_calendar = yield local_home.childWithID(localID)
-        local_objects = yield local_calendar.objectResourcesWithNames(purge_names)
-
-        for local_object in local_objects:
-            yield local_object.purge()
-            self.accounting("  Purged calendar object local-id={}.".format(local_object.id()))
-
-
-    @inlineCallbacks
-    def updateChangedObjectsInBatches(self, migrationRecord, changed):
-        """
-        Update the specified object resources. This needs to succeed in the
-        case where some or all resources have already been deleted.
-        Do this in batches to keep transaction times small.
-
-        @param migrationRecord: local calendar migration record
-        @type migrationRecord: L{CalendarMigrationRecord}
-        @param changed: list of names to update
-        @type changed: L{list} of L{str}
-        """
-
-        remaining = list(changed)
-        while remaining:
-            yield self.updateBatch(
-                migrationRecord.localResourceID,
-                migrationRecord.remoteResourceID,
-                remaining[:self.BATCH_SIZE],
-            )
-            del remaining[:self.BATCH_SIZE]
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def updateBatch(self, txn, localID, remoteID, remaining):
-        """
-        Update a bunch of object resources from the specified remote calendar.
-
-        @param txn: transaction to use
-        @type txn: L{CommonStoreTransaction}
-        @param localID: id of the local calendar to sync
-        @type localID: L{int}
-        @param remoteID: id of the remote calendar to sync with
-        @type remoteID: L{int}
-        @param remaining: object resource names to update
-        @type remaining: L{list} of L{str}
-        """
-
-        # Get remote objects
-        remote_home = yield self._remoteHome(txn)
-        remote_calendar = yield remote_home.childWithID(remoteID)
-        if remote_calendar is None:
-            returnValue(None)
-        remote_objects = yield remote_calendar.objectResourcesWithNames(remaining)
-        remote_objects = dict([(obj.name(), obj) for obj in remote_objects])
-
-        # Get local objects
-        local_home = yield self._localHome(txn)
-        local_calendar = yield local_home.childWithID(localID)
-        local_objects = yield local_calendar.objectResourcesWithNames(remaining)
-        local_objects = dict([(obj.name(), obj) for obj in local_objects])
-
-        # Sync ones that still exist - use txn._migrating together with stuffing the remote md5
-        # value onto the component being stored to ensure that the md5 value stored locally
-        # matches the remote one (which should help reduce the need for a client to resync
-        # the data when moved from one pod to the other).
-        txn._migrating = True
-        for obj_name in remote_objects.keys():
-            remote_object = remote_objects[obj_name]
-            remote_data = yield remote_object.component()
-            remote_data.md5 = remote_object.md5()
-            if obj_name in local_objects:
-                local_object = yield local_objects[obj_name]
-                yield local_object._setComponentInternal(remote_data, internal_state=ComponentUpdateState.RAW)
-                del local_objects[obj_name]
-                log_op = "Updated"
-            else:
-                local_object = yield local_calendar._createCalendarObjectWithNameInternal(obj_name, remote_data, internal_state=ComponentUpdateState.RAW)
-
-                # Maintain the mapping from the remote to local id. Note that this mapping never changes as the ids on both
-                # sides are immutable - though it may get deleted if the local object is removed during sync (via a cascade).
-                yield CalendarObjectMigrationRecord.create(
-                    txn,
-                    calendarHomeResourceID=self.homeId,
-                    remoteResourceID=remote_object.id(),
-                    localResourceID=local_object.id()
-                )
-                log_op = "Created"
-
-            # Sync meta-data such as schedule object, schedule tags, access mode etc
-            yield local_object.copyMetadata(remote_object)
-            self.accounting("  {} calendar object local-id={}, remote-id={}.".format(log_op, local_object.id(), remote_object.id()))
-
-        # Purge the ones that remain
-        for local_object in local_objects.values():
-            yield local_object.purge()
-            self.accounting("  Purged calendar object local-id={}.".format(local_object.id()))
-
-
-    @inlineCallbacks
-    def syncAttachments(self):
-        """
-        Sync attachments (both metadata and actual attachment data) for the home being migrated.
-        """
-
-        self.accounting("Starting: syncAttachments...")
-
-        # Two steps - sync the table first in one txn, then sync each attachment's data
-        changed_ids, removed_ids = yield self.syncAttachmentTable()
-        self.accounting("  Attachments changed={}, removed={}".format(len(changed_ids), len(removed_ids)))
-
-        for local_id in changed_ids:
-            yield self.syncAttachmentData(local_id)
-
-        self.accounting("Completed: syncAttachments.")
-
-        returnValue((changed_ids, removed_ids,))
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def syncAttachmentTable(self, txn):
-        """
-        Sync the ATTACHMENT table data for the home being migrated. Return the list of local attachment ids that
-        now need their attachment data sync'd from the server.
-        """
-
-        remote_home = yield self._remoteHome(txn)
-        rattachments = yield remote_home.getAllAttachments()
-        rmap = dict([(attachment.id(), attachment) for attachment in rattachments])
-
-        local_home = yield self._localHome(txn)
-        lattachments = yield local_home.getAllAttachments()
-        lmap = dict([(attachment.id(), attachment) for attachment in lattachments])
-
-        # Figure out the differences
-        records = yield AttachmentMigrationRecord.querysimple(
-            txn, calendarHomeResourceID=self.homeId
-        )
-        mapping = dict([(record.remoteResourceID, record) for record in records])
-
-        # Removed - remove attachment and migration state
-        removed = set(mapping.keys()) - set(rmap.keys())
-        for remove_id in removed:
-            record = mapping[remove_id]
-            att = yield ManagedAttachment.load(txn, None, None, attachmentID=record.localResourceID)
-            if att:
-                yield att.remove(adjustQuota=False)
-            else:
-                yield record.delete()
-
-        # Track which ones need attachment data sync'd over
-        data_ids = set()
-
-        # Added - add new attachment and migration state
-        added = set(rmap.keys()) - set(mapping.keys())
-        for added_id in added:
-            attachment = yield ManagedAttachment._create(txn, None, self.homeId)
-            yield AttachmentMigrationRecord.create(
-                txn,
-                calendarHomeResourceID=self.homeId,
-                remoteResourceID=added_id,
-                localResourceID=attachment.id(),
-            )
-            data_ids.add(attachment.id())
-
-        # Possible updates - check for md5 change and sync
-        updates = set(mapping.keys()) & set(rmap.keys())
-        for updated_id in updates:
-            local_id = mapping[updated_id].localResourceID
-            if rmap[updated_id].md5() != lmap[local_id].md5():
-                yield lmap[local_id].copyRemote(rmap[updated_id])
-                data_ids.add(local_id)
-
-        returnValue((data_ids, removed,))
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def syncAttachmentData(self, txn, local_id):
-        """
-        Sync the attachment data for the home being migrated.
-        """
-
-        remote_home = yield self._remoteHome(txn)
-        local_home = yield self._localHome(txn)
-        attachment = yield local_home.getAttachmentByID(local_id)
-        if attachment is None:
-            returnValue(None)
-
-        records = yield AttachmentMigrationRecord.querysimple(
-            txn, calendarHomeResourceID=self.homeId, localResourceID=local_id
-        )
-        if records:
-            # Read the data from the conduit
-            yield remote_home.readAttachmentData(records[0].remoteResourceID, attachment)
-            self.accounting("  Read attachment local-id={0.localResourceID}, remote-id={0.remoteResourceID}".format(records[0]))
-
-
-    @inlineCallbacks
-    def linkAttachments(self):
-        """
-        Link attachments to the calendar objects they belong to.
-        """
-
-        self.accounting("Starting: linkAttachments...")
-
-        # Get the map of links for the remote home
-        links = yield self.getAttachmentLinks()
-        self.accounting("  Linking {} attachments".format(len(links)))
-
-        # Get remote->local ID mappings
-        attachmentIDMap, objectIDMap = yield self.getAttachmentMappings()
-
-        # Batch setting links for the local home
-        len_links = len(links)
-        while links:
-            yield self.makeAttachmentLinks(links[:50], attachmentIDMap, objectIDMap)
-            links = links[50:]
-
-        self.accounting("Completed: linkAttachments.")
-
-        returnValue(len_links)
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def getAttachmentLinks(self, txn):
-        """
-        Get the remote link information.
-        """
-
-        # Get the map of links for the remote home
-        remote_home = yield self._remoteHome(txn)
-        links = yield remote_home.getAttachmentLinks()
-        returnValue(links)
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def getAttachmentMappings(self, txn):
-        """
-        Get the remote-to-local ID mappings for attachments and calendar objects.
-        """
-
-        # Get migration mappings
-        records = yield AttachmentMigrationRecord.querysimple(
-            txn, calendarHomeResourceID=self.homeId
-        )
-        attachmentIDMap = dict([(record.remoteResourceID, record) for record in records])
-
-        records = yield CalendarObjectMigrationRecord.querysimple(
-            txn, calendarHomeResourceID=self.homeId
-        )
-        objectIDMap = dict([(record.remoteResourceID, record) for record in records])
-
-        returnValue((attachmentIDMap, objectIDMap,))
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def makeAttachmentLinks(self, txn, links, attachmentIDMap, objectIDMap):
-        """
-        Map remote links to local links.
-        """
-
-        for link in links:
-            # Remote link has an invalid txn at this point so replace that first
-            link._txn = txn
-
-            # Now re-map the attachment ID and calendar_object_id to the local ones
-            link._attachmentID = attachmentIDMap[link._attachmentID].localResourceID
-            link._calendarObjectID = objectIDMap[link._calendarObjectID].localResourceID
-
-            yield link.insert()
-
-
-    @inlineCallbacks
-    def delegateReconcile(self):
-        """
-        Sync the delegate assignments from the remote home to the local home. We won't use
-        a fake directory UID locally.
-        """
-
-        self.accounting("Starting: delegateReconcile...")
-
-        yield self.individualDelegateReconcile()
-        yield self.groupDelegateReconcile()
-        yield self.externalDelegateReconcile()
-
-        self.accounting("Completed: delegateReconcile.")
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def individualDelegateReconcile(self, txn):
-        """
-        Sync the delegate assignments from the remote home to the local home. We won't use
-        a fake directory UID locally.
-        """
-        remote_records = yield txn.dumpIndividualDelegatesExternal(self.record)
-        for record in remote_records:
-            yield record.insert(txn)
-
-        self.accounting("  Found {} individual delegates".format(len(remote_records)))
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def groupDelegateReconcile(self, txn):
-        """
-        Sync the delegate assignments from the remote home to the local home. We won't use
-        a fake directory UID locally.
-        """
-        remote_records = yield txn.dumpGroupDelegatesExternal(self.record)
-        for delegator, group in remote_records:
-            # We need to make sure the group exists locally first and map the groupID to the local one
-            local_group = yield txn.groupByUID(group.groupUID)
-            delegator.groupID = local_group.groupID
-            yield delegator.insert(txn)
-
-        self.accounting("  Found {} group delegates".format(len(remote_records)))
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def externalDelegateReconcile(self, txn):
-        """
-        Sync the external delegate assignments from the remote home to the local home. We won't use
-        a fake directory UID locally.
-        """
-        remote_records = yield txn.dumpExternalDelegatesExternal(self.record)
-        for record in remote_records:
-            yield record.insert(txn)
-
-        self.accounting("  Found {} external delegates".format(len(remote_records)))
-
-
-    @inlineCallbacks
-    def groupAttendeeReconcile(self):
-        """
-        Sync the remote group attendee links to the local store.
-        """
-
-        self.accounting("Starting: groupAttendeeReconcile...")
-
-        # Get remote data and local mapping information
-        remote_group_attendees, objectIDMap = yield self.groupAttendeeData()
-        self.accounting("  Found {} group attendees".format(len(remote_group_attendees)))
-
-        # Map each result to a local resource (in batches)
-        number_of_links = len(remote_group_attendees)
-        while remote_group_attendees:
-            yield self.groupAttendeeProcess(remote_group_attendees[:50], objectIDMap)
-            remote_group_attendees = remote_group_attendees[50:]
-
-        self.accounting("Completed: groupAttendeeReconcile.")
-
-        returnValue(number_of_links)
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def groupAttendeeData(self, txn):
-        """
-        Get the remote group attendee records and the remote-to-local object ID map.
-        """
-        remote_home = yield self._remoteHome(txn)
-        remote_group_attendees = yield remote_home.getAllGroupAttendees()
-
-        # Get all remote->local object maps
-        records = yield CalendarObjectMigrationRecord.querysimple(
-            txn, calendarHomeResourceID=self.homeId
-        )
-        objectIDMap = dict([(record.remoteResourceID, record.localResourceID) for record in records])
-
-        returnValue((remote_group_attendees, objectIDMap,))
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def groupAttendeeProcess(self, txn, results, objectIDMap):
-        """
-        Map a batch of remote group attendee records to local resources and insert them.
-        """
-        # Map each result to a local resource
-        for groupAttendee, group in results:
-            local_group = yield txn.groupByUID(group.groupUID)
-            groupAttendee.groupID = local_group.groupID
-            try:
-                groupAttendee.resourceID = objectIDMap[groupAttendee.resourceID]
-            except KeyError:
-                continue
-            yield groupAttendee.insert(txn)
-
-
-    @inlineCallbacks
-    def notificationsReconcile(self):
-        """
-        Sync all the existing L{NotificationObject} resources from the remote store.
-        """
-
-        self.accounting("Starting: notificationsReconcile...")
-        records = yield self.notificationRecords()
-        self.accounting("  Found {} notifications".format(len(records)))
-
-        # Batch setting resources for the local home
-        len_records = len(records)
-        while records:
-            yield self.makeNotifications(records[:50])
-            records = records[50:]
-
-        self.accounting("Completed: notificationsReconcile.")
-
-        returnValue(len_records)
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def notificationRecords(self, txn):
-        """
-        Get all the existing L{NotificationObjectRecord}'s from the remote store.
-        """
-
-        notifications = yield self._remoteNotificationsHome(txn)
-        records = yield notifications.notificationObjectRecords()
-        for record in records:
-            # This needs to be reset when added to the local store
-            del record.resourceID
-
-            # Map the remote id to the local one.
-            record.notificationHomeResourceID = notifications.id()
-
-        returnValue(records)
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def makeNotifications(self, txn, records):
-        """
-        Create L{NotificationObjectRecord} records in the local store.
-        """
-
-        notifications = yield NotificationCollection.notificationsWithUID(txn, self.diruid, status=_HOME_STATUS_MIGRATING, create=True)
-        for record in records:
-            # Do this via the "write" API so that sync revisions are updated properly, rather than just
-            # inserting the records directly.
-            notification = yield notifications.writeNotificationObject(record.notificationUID, record.notificationType, record.notificationData)
-            self.accounting("  Added notification local-id={}.".format(notification.id()))
-
-
-    @inlineCallbacks
-    def sharedByCollectionsReconcile(self):
-        """
-        Sync all the collections shared by the migrating user from the remote store. We will do this one calendar at a time since
-        there could be a large number of sharees per calendar.
-
-        Here is the logic we need: first assume we have three pods: A, B, C, and we are migrating a user from A->B. We start
-        with a set of shares (X -> Y - where X is the sharer and Y the sharee) on pod A. We migrate the sharer to pod B. We
-        then need to have a set of bind records on pod B, and adjust the set on pod A. Note that no changes are required on pod C.
-
-        Original      |  Changes                     | Changes
-        Shares        |  on B                        | on A
-        --------------|------------------------------|---------------------
-        A -> A        |  B -> A (new)                | B -> A (modify existing)
-        A -> B        |  B -> B (modify existing)    | (removed)
-        A -> C        |  B -> C (new)                | (removed)
-        """
-
-        self.accounting("Starting: sharedByCollectionsReconcile...")
-        calendars = yield self.getSyncState()
-
-        len_records = 0
-        for calendar in calendars.values():
-            records, bindUID = yield self.sharedByCollectionRecords(calendar.remoteResourceID, calendar.localResourceID)
-            if not records:
-                continue
-            records = records.items()
-
-            self.accounting("  Found shared by calendar local-id={0.localResourceID}, remote-id={0.remoteResourceID} with {1} sharees".format(
-                calendar, len(records),
-            ))
-
-            # Batch setting resources for the local home
-            len_records += len(records)
-            while records:
-                yield self.makeSharedByCollections(records[:50], calendar.localResourceID)
-                records = records[50:]
-
-            # Get groups from remote pod
-            yield self.syncGroupSharees(calendar.remoteResourceID, calendar.localResourceID)
-
-            # Update the remote pod to switch over the shares
-            yield self.updatedRemoteSharedByCollections(calendar.remoteResourceID, bindUID)
-
-        self.accounting("Completed: sharedByCollectionsReconcile.")
-
-        returnValue(len_records)
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def sharedByCollectionRecords(self, txn, remote_id, local_id):
-        """
-        Get all the existing L{CalendarBindRecord}'s from the remote store. Also make sure a
-        bindUID exists for the local calendar.
-        """
-
-        remote_home = yield self._remoteHome(txn)
-        remote_calendar = yield remote_home.childWithID(remote_id)
-        records = yield remote_calendar.sharingBindRecords()
-
-        # Check bindUID
-        local_records = yield CalendarBindRecord.querysimple(
-            txn,
-            calendarHomeResourceID=self.homeId,
-            calendarResourceID=local_id,
-        )
-        if records and not local_records[0].bindUID:
-            yield local_records[0].update(bindUID=str(uuid4()))
-
-        returnValue((records, local_records[0].bindUID,))
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def makeSharedByCollections(self, txn, records, calendar_id):
-        """
-        Create L{CalendarBindRecord} records in the local store.
-        """
-
-        for shareeUID, record in records:
-            shareeHome = yield txn.calendarHomeWithUID(shareeUID, create=True)
-
-            # First look for an existing record that could be present if the migrating user had
-            # previously shared with this sharee as a cross-pod share
-            oldrecord = yield CalendarBindRecord.querysimple(
-                txn,
-                calendarHomeResourceID=shareeHome.id(),
-                calendarResourceName=record.calendarResourceName,
-            )
-
-            # FIXME: need to figure out sync-token and bind revision changes
-
-            if oldrecord:
-                # Point old record to the new local calendar being shared
-                yield oldrecord[0].update(
-                    calendarResourceID=calendar_id,
-                    bindRevision=0,
-                )
-                self.accounting("    Updating existing sharee {}".format(shareeHome.uid()))
-            else:
-                # Map the record resource ids and insert a new record
-                record.calendarHomeResourceID = shareeHome.id()
-                record.calendarResourceID = calendar_id
-                record.bindRevision = 0
-                yield record.insert(txn)
-                self.accounting("    Adding new sharee {}".format(shareeHome.uid()))
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def syncGroupSharees(self, txn, remote_id, local_id):
-        """
-        Sync the group sharees for a remote share.
-        """
-        remote_home = yield self._remoteHome(txn)
-        remote_calendar = yield remote_home.childWithID(remote_id)
-        results = yield remote_calendar.groupSharees()
-        groups = dict([(group.groupID, group.groupUID,) for group in results["groups"]])
-        for share in results["sharees"]:
-            local_group = yield txn.groupByUID(groups[share.groupID])
-            share.groupID = local_group.groupID
-            share.calendarID = local_id
-            yield share.insert(txn)
-            self.accounting("    Adding group sharee {}".format(local_group.groupUID))
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def updatedRemoteSharedByCollections(self, txn, remote_id, bindUID):
-        """
-        Update the remote pod's L{CalendarBindRecord}'s to switch the shares over to the migrated home.
-        """
-
-        remote_home = yield self._remoteHome(txn)
-        remote_calendar = yield remote_home.childWithID(remote_id)
-        records = yield remote_calendar.migrateBindRecords(bindUID)
-        self.accounting("    Updating remote records")
-        returnValue(records)
-
-
-    @inlineCallbacks
-    def sharedToCollectionsReconcile(self):
-        """
-        Sync all the collections shared to the migrating user from the remote store.
-
-        Here is the logic we need: first assume we have three pods: A, B, C, and we are migrating a user from A->B. We start
-        with a set of shares (X -> Y - where X is the sharer and Y the sharee) with sharee on pod A. We migrate the sharee to pod B. We
-        then need to have a set of bind records on pod B, and adjust the set on pod A. Note that no changes are required on pod C.
-
-        Original      |  Changes                     | Changes
-        Shares        |  on B                        | on A
-        --------------|------------------------------|---------------------
-        A -> A        |  A -> B (new)                | A -> B (modify existing)
-        B -> A        |  B -> B (modify existing)    | (removed)
-        C -> A        |  C -> B (new)                | (removed)
-        """
-
-        self.accounting("Starting: sharedToCollectionsReconcile...")
-
-        records = yield self.sharedToCollectionRecords()
-        records = records.items()
-        len_records = len(records)
-        self.accounting("  Found {} shared to collections".format(len_records))
-
-        while records:
-            yield self.makeSharedToCollections(records[:50])
-            records = records[50:]
-
-        self.accounting("Completed: sharedToCollectionsReconcile.")
-
-        returnValue(len_records)
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def sharedToCollectionRecords(self, txn):
-        """
-        Get the names and sharer UIDs for remote shared calendars.
-        """
-
-        # List of calendars from the remote side
-        home = yield self._remoteHome(txn)
-        if home is None:
-            returnValue(None)
-        results = yield home.sharedToBindRecords()
-        returnValue(results)
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def makeSharedToCollections(self, txn, records):
-        """
-        Create L{CalendarBindRecord} records in the local store.
-        """
-
-        for sharerUID, (shareeRecord, ownerRecord, metadataRecord) in records:
-            sharerHome = yield txn.calendarHomeWithUID(sharerUID, create=True)
-
-            # We need to figure out the right thing to do based on whether the sharer is local to this pod
-            # (the one where the migrated user will be hosted) vs located on another pod
-
-            if sharerHome.normal():
-                # First look for an existing record that must be present if the migrating user had
-                # previously been shared with by this sharer
-                oldrecord = yield CalendarBindRecord.querysimple(
-                    txn,
-                    calendarResourceName=shareeRecord.calendarResourceName,
-                )
-                if len(oldrecord) == 1:
-                    # Point old record to the new local calendar home
-                    yield oldrecord[0].update(
-                        calendarHomeResourceID=self.homeId,
-                    )
-                    self.accounting("  Updated existing local sharer record {}".format(sharerHome.uid()))
-                else:
-                    raise AssertionError("An existing share must be present")
-            else:
-                # We have an external user. That sharer may have already shared the calendar with some other user
-                # on this pod, in which case there is already a CALENDAR table entry for it, and we need the
-                # resource ID from that to use in the new CALENDAR_BIND record we create. If a pre-existing share
-                # is not present, then we have to create the CALENDAR table entry and associated pieces
-
-                remote_id = shareeRecord.calendarResourceID
-
-                # Look for pre-existing share with the same external ID
-                oldrecord = yield CalendarBindRecord.querysimple(
-                    txn,
-                    calendarHomeResourceID=sharerHome.id(),
-                    bindUID=ownerRecord.bindUID,
-                )
-                if oldrecord:
-                    # Re-use the calendar already created for this external share
-                    calendar_id = oldrecord[0].calendarResourceID
-                    log_op = "Updated"
-                else:
-                    sharerView = yield sharerHome.createCollectionForExternalShare(
-                        ownerRecord.calendarResourceName,
-                        ownerRecord.bindUID,
-                        metadataRecord.supportedComponents,
-                    )
-                    calendar_id = sharerView.id()
-                    log_op = "Created"
-
-                shareeRecord.calendarHomeResourceID = self.homeId
-                shareeRecord.calendarResourceID = calendar_id
-                shareeRecord.bindRevision = 0
-                yield shareeRecord.insert(txn)
-                self.accounting("  {} remote sharer record {}".format(log_op, sharerHome.uid()))
-
-                yield self.updatedRemoteSharedToCollection(remote_id, txn=txn)
-
-
-    @inTransactionWrapper
-    @inlineCallbacks
-    def updatedRemoteSharedToCollection(self, txn, remote_id):
-        """
-        Update the remote pod's L{CalendarBindRecord}'s to switch this share over to the migrated home.
-        """
-
-        remote_home = yield self._remoteHome(txn)
-        remote_calendar = yield remote_home.childWithID(remote_id)
-        records = yield remote_calendar.migrateBindRecords(None)
-        self.accounting("    Updating remote records")
-        returnValue(records)

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/home_sync.py (from rev 14551, CalendarServer/trunk/txdav/common/datastore/podding/migration/home_sync.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/home_sync.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/home_sync.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,1356 @@
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from functools import wraps
+
+from twext.python.log import Logger
+from twisted.internet.defer import returnValue, inlineCallbacks
+from twisted.python.failure import Failure
+from twistedcaldav.accounting import emitAccounting
+from txdav.caldav.icalendarstore import ComponentUpdateState
+from txdav.common.datastore.podding.migration.sync_metadata import CalendarMigrationRecord, \
+    CalendarObjectMigrationRecord, AttachmentMigrationRecord
+from txdav.caldav.datastore.sql import ManagedAttachment, CalendarBindRecord
+from txdav.common.datastore.sql_external import NotificationCollectionExternal
+from txdav.common.datastore.sql_notification import NotificationCollection
+from txdav.common.datastore.sql_tables import _HOME_STATUS_MIGRATING, _HOME_STATUS_DISABLED, \
+    _HOME_STATUS_EXTERNAL, _HOME_STATUS_NORMAL
+from txdav.common.idirectoryservice import DirectoryRecordNotFoundError
+
+from uuid import uuid4
+import datetime
+
+log = Logger()
+
+ACCOUNTING_TYPE = "migration"
+ACCOUNTING_LOG = "migration.log"
+
+def inTransactionWrapper(operation):
+    """
+    This wrapper converts an instance method that takes a transaction as its
+    first parameter into one where the transaction parameter is an optional
+    keyword argument. If the keyword argument is present and not None, then
+    the instance method is called with that keyword as the first positional
+    argument (i.e., almost a NoOp). If the keyword argument is not present,
+    then a new transaction is created and the instance method is called with
+    it as the first positional argument; the call is wrapped in
+    try/except/else so that the internally created transaction is properly
+    committed or aborted.
+
+    So this wrapper allows for a method that requires a transaction to be run
+    with either an existing transaction or one created just for the purpose
+    of running it.
+
+    @param operation: a callable that takes an L{IAsyncTransaction} as its first
+        argument, and returns a value.
+    """
+
+    @wraps(operation)
+    @inlineCallbacks
+    def _inTxn(self, *args, **kwargs):
+        label = self.label(operation.__name__)
+        if "txn" in kwargs:
+            txn = kwargs["txn"]
+            del kwargs["txn"]
+            result = yield operation(self, txn, *args, **kwargs)
+            returnValue(result)
+        else:
+            txn = self.store.newTransaction(label=label)
+            try:
+                result = yield operation(self, txn, *args, **kwargs)
+            except Exception as ex:
+                f = Failure()
+                yield txn.abort()
+                log.error("{label} failed: {e}".format(label=label, e=str(ex)))
+                returnValue(f)
+            else:
+                yield txn.commit()
+                returnValue(result)
+
+    return _inTxn
+
+
+
+# Cross-pod synchronization of an entire calendar home
+class CrossPodHomeSync(object):
+
+    BATCH_SIZE = 50
+
+    def __init__(self, store, diruid, final=False, uselog=None):
+        """
+        @param store: the data store
+        @type store: L{CommonDataStore}
+        @param diruid: directory uid of the user whose home is to be sync'd
+        @type diruid: L{str}
+        @param final: indicates whether this is in the final sync stage with the remote home
+            already disabled
+        @type final: L{bool}
+        @param uselog: additional logging written to this object
+        @type uselog: L{File}
+        """
+
+        self.store = store
+        self.diruid = diruid
+        self.disabledRemote = final
+        self.uselog = uselog
+        self.record = None
+        self.homeId = None
+
+
+    def label(self, detail):
+        return "Cross-pod Migration Sync for {}: {}".format(self.diruid, detail)
+
+
+    def accounting(self, logstr):
+        emitAccounting(ACCOUNTING_TYPE, self.record, "{} {}\n".format(datetime.datetime.now().isoformat(), logstr), filename=ACCOUNTING_LOG)
+        if self.uselog is not None:
+            self.uselog.write("CrossPodHomeSync: {}\n".format(logstr))
+
+
+    @inlineCallbacks
+    def migrateHere(self):
+        """
+        This is a full, serialized version of a data migration (minus any directory
+        update) that can be triggered via a command-line tool. It is designed to
+        minimize downtime for the migrating user.
+        """
+
+        # Step 1 - initial full sync
+        yield self.sync()
+
+        # Step 2 - incremental sync (since the initial sync may take a long time
+        # to run, we should do one incremental sync before bringing down the
+        # account being migrated)
+        yield self.sync()
+
+        # Step 3 - disable remote home
+        # NB Any failure from this point on will need to be caught and
+        # handled by re-enabling the old home (and fixing any sharing state
+        # that may have been changed)
+        yield self.disableRemoteHome()
+
+        # Step 4 - final incremental sync
+        yield self.sync()
+
+        # Step 5 - final overall sync of meta-data (including sharing re-linking)
+        yield self.finalSync()
+
+        # Step 6 - enable new home
+        yield self.enableLocalHome()
+
+        # Step 7 - remove remote home
+        yield self.removeRemoteHome()
+
+        # Step 8 - say phew! TODO: Actually alert everyone else
+        pass
+
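+    # Illustrative sketch (not part of this change): driving a full migration from
+    # a command-line tool would look roughly like the following, where "store" is an
+    # already-opened data store and "user01" is a hypothetical directory uid hosted
+    # on another pod.
+    #
+    #     syncer = CrossPodHomeSync(store, "user01")
+    #     yield syncer.migrateHere()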
+
+    @inlineCallbacks
+    def sync(self):
+        """
+        Initiate a sync of the home. This is a simple data sync that does not
+        reconcile sharing state etc. The L{finalSync} method will do a full
+        sharing reconcile as well as disable the migration source home.
+        """
+
+        yield self.loadRecord()
+        self.accounting("Starting: sync...")
+        yield self.prepareCalendarHome()
+
+        # Calendar list and calendar data
+        yield self.syncCalendarList()
+
+        # Sync home metadata such as alarms, default calendars, etc
+        yield self.syncCalendarHomeMetaData()
+
+        # Sync attachments
+        yield self.syncAttachments()
+
+        self.accounting("Completed: sync.\n")
+
+
+    @inlineCallbacks
+    def finalSync(self):
+        """
+        Do the final sync up of any additional data, re-link sharing bind
+        rows, recalculate quota etc.
+        """
+
+        yield self.loadRecord()
+        self.accounting("Starting: finalSync...")
+        yield self.prepareCalendarHome()
+
+        # Link attachments to resources: ATTACHMENT_CALENDAR_OBJECT table
+        yield self.linkAttachments()
+
+        # TODO: Re-write attachment URIs - not sure if we need this as reverse proxy may take care of it
+        pass
+
+        # Group attendee reconcile
+        yield self.groupAttendeeReconcile()
+
+        # Delegates reconcile
+        yield self.delegateReconcile()
+
+        # Shared collections reconcile (including group sharees)
+        yield self.sharedByCollectionsReconcile()
+        yield self.sharedToCollectionsReconcile()
+
+        # Notifications
+        yield self.notificationsReconcile()
+
+        # TODO: work items
+        pass
+
+        self.accounting("Completed: finalSync.\n")
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def disableRemoteHome(self, txn):
+        """
+        Mark the remote home as disabled.
+        """
+
+        yield self.loadRecord()
+        self.accounting("Starting: disableRemoteHome...")
+        yield self.prepareCalendarHome()
+
+        # Calendar home
+        remote_home = yield self._remoteHome(txn)
+        yield remote_home.setStatus(_HOME_STATUS_DISABLED)
+
+        # Notification home
+        notifications = yield self._remoteNotificationsHome(txn)
+        yield notifications.setStatus(_HOME_STATUS_DISABLED)
+
+        self.disabledRemote = True
+
+        self.accounting("Completed: disableRemoteHome.\n")
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def enableLocalHome(self, txn):
+        """
+        Mark the local home as enabled and remove any previously existing external home.
+        """
+
+        yield self.loadRecord()
+        self.accounting("Starting: enableLocalHome...")
+        yield self.prepareCalendarHome()
+
+        # Disable any local external homes
+        oldhome = yield txn.calendarHomeWithUID(self.diruid, status=_HOME_STATUS_EXTERNAL)
+        if oldhome is not None:
+            yield oldhome.setLocalStatus(_HOME_STATUS_DISABLED)
+        oldnotifications = yield txn.notificationsWithUID(self.diruid, status=_HOME_STATUS_EXTERNAL)
+        if oldnotifications:
+            yield oldnotifications.setLocalStatus(_HOME_STATUS_DISABLED)
+
+        # Enable the migrating ones
+        newhome = yield txn.calendarHomeWithUID(self.diruid, status=_HOME_STATUS_MIGRATING)
+        if newhome is not None:
+            yield newhome.setStatus(_HOME_STATUS_NORMAL)
+        newnotifications = yield txn.notificationsWithUID(self.diruid, status=_HOME_STATUS_MIGRATING)
+        if newnotifications:
+            yield newnotifications.setStatus(_HOME_STATUS_NORMAL)
+
+        # TODO: remove migration state
+        pass
+
+        # TODO: purge the old ones
+        pass
+
+        self.accounting("Completed: enableLocalHome.\n")
+
+
+    @inlineCallbacks
+    def removeRemoteHome(self):
+        """
+        Remove all the old data on the remote pod.
+        """
+
+        # TODO: implement API on CommonHome to purge the old data without
+        # any side-effects (scheduling, sharing etc).
+        yield self.loadRecord()
+        self.accounting("Starting: removeRemoteHome...")
+        yield self.prepareCalendarHome()
+
+        self.accounting("Completed: removeRemoteHome.\n")
+
+
+    @inlineCallbacks
+    def loadRecord(self):
+        """
+        Load the directory record of the user being migrated and verify that it
+        refers to a user hosted on another pod.
+        """
+
+        if self.record is None:
+            self.record = yield self.store.directoryService().recordWithUID(self.diruid)
+            if self.record is None:
+                raise DirectoryRecordNotFoundError("Cross-pod Migration Sync missing directory record for {}".format(self.diruid))
+            if self.record.thisServer():
+                raise ValueError("Cross-pod Migration Sync cannot sync with user already on this server: {}".format(self.diruid))
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def prepareCalendarHome(self, txn):
+        """
+        Make sure the inactive home to migrate into is present on this pod.
+        """
+
+        if self.homeId is None:
+            home = yield self._localHome(txn)
+            if home is None:
+                if self.disabledRemote:
+                    self.homeId = None
+                else:
+                    home = yield txn.calendarHomeWithUID(self.diruid, status=_HOME_STATUS_MIGRATING, create=True)
+                    self.accounting("  Created new home collection to migrate into.")
+            self.homeId = home.id() if home is not None else None
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def syncCalendarHomeMetaData(self, txn):
+        """
+        Make sure the home meta-data (alarms, default calendars) is properly sync'd
+        """
+
+        self.accounting("Starting: syncCalendarHomeMetaData...")
+        remote_home = yield self._remoteHome(txn)
+        yield remote_home.readMetaData()
+
+        calendars = yield CalendarMigrationRecord.querysimple(txn, calendarHomeResourceID=self.homeId)
+        calendarIDMap = dict((item.remoteResourceID, item.localResourceID) for item in calendars)
+
+        local_home = yield self._localHome(txn)
+        yield local_home.copyMetadata(remote_home, calendarIDMap)
+
+        self.accounting("Completed: syncCalendarHomeMetaData.")
+
+
+    @inlineCallbacks
+    def _remoteHome(self, txn):
+        """
+        Create a synthetic external home object that maps to the actual remote home.
+        """
+
+        from txdav.caldav.datastore.sql_external import CalendarHomeExternal
+        resourceID = yield txn.store().conduit.send_home_resource_id(txn, self.record, migrating=True)
+        home = CalendarHomeExternal.makeSyntheticExternalHome(txn, self.record.uid, resourceID) if resourceID is not None else None
+        if self.disabledRemote:
+            home._migratingHome = True
+        returnValue(home)
+
+
+    @inlineCallbacks
+    def _remoteNotificationsHome(self, txn):
+        """
+        Create a synthetic external notification collection that maps to the actual remote notification home.
+        """
+
+        notifications = yield NotificationCollectionExternal.notificationsWithUID(txn, self.diruid, create=True)
+        if self.disabledRemote:
+            notifications._migratingHome = True
+        returnValue(notifications)
+
+
+    def _localHome(self, txn):
+        """
+        Get the home on this pod that will have data migrated to it.
+        """
+
+        return txn.calendarHomeWithUID(self.diruid, status=_HOME_STATUS_MIGRATING)
+
+
+    @inlineCallbacks
+    def syncCalendarList(self):
+        """
+        Synchronize each owned calendar.
+        """
+
+        self.accounting("Starting: syncCalendarList...")
+
+        # Remote sync details
+        remote_sync_state = yield self.getCalendarSyncList()
+        self.accounting("  Found {} remote calendars to sync.".format(len(remote_sync_state)))
+
+        # Get local sync details from local DB
+        local_sync_state = yield self.getSyncState()
+        self.accounting("  Found {} local calendars to sync.".format(len(local_sync_state)))
+
+        # Remove local calendars no longer on the remote side
+        yield self.purgeLocal(local_sync_state, remote_sync_state)
+
+        # Sync each calendar that matches on both sides
+        for remoteID in remote_sync_state.keys():
+            yield self.syncCalendar(remoteID, local_sync_state, remote_sync_state)
+
+        self.accounting("Completed: syncCalendarList.")
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def getCalendarSyncList(self, txn):
+        """
+        Get the ids and sync tokens for each remote owned calendar.
+        """
+
+        # List of calendars from the remote side
+        home = yield self._remoteHome(txn)
+        if home is None:
+            returnValue(None)
+        calendars = yield home.loadChildren()
+        results = {}
+        for calendar in calendars:
+            if calendar.owned():
+                sync_token = yield calendar.syncToken()
+                results[calendar.id()] = CalendarMigrationRecord.make(
+                    calendarHomeResourceID=home.id(),
+                    remoteResourceID=calendar.id(),
+                    localResourceID=0,
+                    lastSyncToken=sync_token,
+                )
+
+        returnValue(results)
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def getSyncState(self, txn):
+        """
+        Get local synchronization state for the home being migrated.
+        """
+        records = yield CalendarMigrationRecord.querysimple(
+            txn, calendarHomeResourceID=self.homeId
+        )
+        returnValue(dict([(record.remoteResourceID, record) for record in records]))
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def updateSyncState(self, txn, stateRecord, newSyncToken):
+        """
+        Update or insert an L{CalendarMigrationRecord} with the new specified sync token.
+        """
+        if stateRecord.isnew():
+            stateRecord.lastSyncToken = newSyncToken
+            yield stateRecord.insert(txn)
+        else:
+            # The existing stateRecord has a stale txn, but valid column values. We have
+            # to duplicate it before we can give it a different txn.
+            stateRecord = stateRecord.duplicate()
+            stateRecord.transaction = txn
+            yield stateRecord.update(lastSyncToken=newSyncToken)
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def purgeLocal(self, txn, local_sync_state, remote_sync_state):
+        """
+        Remove (silently - i.e., no scheduling) local calendars that are no longer on the remote side.
+
+        @param txn: transaction to use
+        @type txn: L{CommonStoreTransaction}
+        @param local_sync_state: local sync state
+        @type local_sync_state: L{dict}
+        @param remote_sync_state: remote sync state
+        @type remote_sync_state: L{dict}
+        """
+        home = yield self._localHome(txn)
+        for localID in set(local_sync_state.keys()) - set(remote_sync_state.keys()):
+            calendar = yield home.childWithID(local_sync_state[localID].localResourceID)
+            if calendar is not None:
+                yield calendar.purge()
+            del local_sync_state[localID]
+            self.accounting("  Purged calendar local-id={} that no longer exists on the remote pod.".format(localID))
+
+
+    @inlineCallbacks
+    def syncCalendar(self, remoteID, local_sync_state, remote_sync_state):
+        """
+        Sync the contents of a calendar from the remote side. The local calendar may need to be created
+        on initial sync. Make use of sync tokens to avoid unnecessary work.
+
+        @param remoteID: id of the remote calendar to sync
+        @type remoteID: L{int}
+        @param local_sync_state: local sync state
+        @type local_sync_state: L{dict}
+        @param remote_sync_state: remote sync state
+        @type remote_sync_state: L{dict}
+        """
+
+        self.accounting("Starting: syncCalendar.")
+
+        # See if we need to create the local one first
+        if remoteID not in local_sync_state:
+            localID = yield self.newCalendar()
+            local_sync_state[remoteID] = CalendarMigrationRecord.make(
+                calendarHomeResourceID=self.homeId,
+                remoteResourceID=remoteID,
+                localResourceID=localID,
+                lastSyncToken=None,
+            )
+            self.accounting("  Created new calendar local-id={}, remote-id={}.".format(localID, remoteID))
+        else:
+            localID = local_sync_state.get(remoteID).localResourceID
+            self.accounting("  Updating calendar local-id={}, remote-id={}.".format(localID, remoteID))
+        local_record = local_sync_state.get(remoteID)
+
+        remote_token = remote_sync_state[remoteID].lastSyncToken
+        if local_record.lastSyncToken != remote_token:
+            # Sync meta-data such as name, alarms, supported-components, transp, etc
+            yield self.syncCalendarMetaData(local_record)
+
+            # Sync object resources
+            changed, removed = yield self.findObjectsToSync(local_record)
+            self.accounting("  Calendar objects changed={}, removed={}.".format(len(changed), len(removed)))
+            yield self.purgeDeletedObjectsInBatches(local_record, removed)
+            yield self.updateChangedObjectsInBatches(local_record, changed)
+
+        yield self.updateSyncState(local_record, remote_token)
+        self.accounting("Completed: syncCalendar.")
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def newCalendar(self, txn):
+        """
+        Create a new local calendar to sync remote data to. We don't care about the name
+        of the calendar right now - it will be sync'd later.
+        """
+
+        home = yield self._localHome(txn)
+        calendar = yield home.createChildWithName(str(uuid4()))
+        returnValue(calendar.id())
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def syncCalendarMetaData(self, txn, migrationRecord):
+        """
+        Sync the metadata of a calendar from the remote side.
+
+        @param migrationRecord: current migration record
+        @type migrationRecord: L{CalendarMigrationRecord}
+        """
+
+        # Remote changes
+        remote_home = yield self._remoteHome(txn)
+        remote_calendar = yield remote_home.childWithID(migrationRecord.remoteResourceID)
+        if remote_calendar is None:
+            returnValue(None)
+
+        # Copy the remote calendar's meta-data to the local calendar
+        local_home = yield self._localHome(txn)
+        local_calendar = yield local_home.childWithID(migrationRecord.localResourceID)
+        yield local_calendar.copyMetadata(remote_calendar)
+        self.accounting("  Copied calendar meta-data for calendar local-id={0.localResourceID}, remote-id={0.remoteResourceID}.".format(migrationRecord))
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def findObjectsToSync(self, txn, migrationRecord):
+        """
+        Find the set of object resources that need to be sync'd from the remote
+        side and the set that need to be removed locally. Take into account the
+        possibility that this is a partial sync and removals or additions might
+        be false positives.
+
+        @param migrationRecord: current migration record
+        @type migrationRecord: L{CalendarMigrationRecord}
+        """
+
+        # Remote changes
+        remote_home = yield self._remoteHome(txn)
+        remote_calendar = yield remote_home.childWithID(migrationRecord.remoteResourceID)
+        if remote_calendar is None:
+            returnValue(None)
+        changed, deleted, _ignore_invalid = yield remote_calendar.resourceNamesSinceToken(migrationRecord.lastSyncToken)
+
+        # Get the local calendar so the changed items can be compared against it
+        local_home = yield self._localHome(txn)
+        local_calendar = yield local_home.childWithID(migrationRecord.localResourceID)
+
+        # Check the md5's on each changed remote with the local one to filter out ones
+        # we don't actually need to sync
+        remote_changes = yield remote_calendar.objectResourcesWithNames(changed)
+        remote_changes = dict([(calendar.name(), calendar) for calendar in remote_changes])
+
+        local_changes = yield local_calendar.objectResourcesWithNames(changed)
+        local_changes = dict([(calendar.name(), calendar) for calendar in local_changes])
+
+        actual_changes = []
+        for name, calendar in remote_changes.items():
+            if name not in local_changes or remote_changes[name].md5() != local_changes[name].md5():
+                actual_changes.append(name)
+
+        returnValue((actual_changes, deleted,))
+
+
+    @inlineCallbacks
+    def purgeDeletedObjectsInBatches(self, migrationRecord, deleted):
+        """
+        Purge (silently remove) the specified object resources. This needs to
+        succeed in the case where some or all resources have already been deleted.
+        Do this in batches to keep transaction times small.
+
+        @param migrationRecord: local calendar migration record
+        @type migrationRecord: L{CalendarMigrationRecord}
+        @param deleted: list of names to purge
+        @type deleted: L{list} of L{str}
+        """
+
+        remaining = list(deleted)
+        while remaining:
+            yield self.purgeBatch(migrationRecord.localResourceID, remaining[:self.BATCH_SIZE])
+            del remaining[:self.BATCH_SIZE]
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def purgeBatch(self, txn, localID, purge_names):
+        """
+        Purge a bunch of object resources from the specified calendar.
+
+        @param txn: transaction to use
+        @type txn: L{CommonStoreTransaction}
+        @param localID: id of the local calendar to sync
+        @type localID: L{int}
+        @param purge_names: object resource names to purge
+        @type purge_names: L{list} of L{str}
+        """
+
+        # Look up the local copies of the objects to purge (some may already be gone)
+        local_home = yield self._localHome(txn)
+        local_calendar = yield local_home.childWithID(localID)
+        local_objects = yield local_calendar.objectResourcesWithNames(purge_names)
+
+        for local_object in local_objects:
+            yield local_object.purge()
+            self.accounting("  Purged calendar object local-id={}.".format(local_object.id()))
+
+
+    @inlineCallbacks
+    def updateChangedObjectsInBatches(self, migrationRecord, changed):
+        """
+        Update the specified object resources. This needs to succeed in the
+        case where some or all resources have already been deleted.
+        Do this in batches to keep transaction times small.
+
+        @param migrationRecord: local calendar migration record
+        @type migrationRecord: L{CalendarMigrationRecord}
+        @param changed: list of names to update
+        @type changed: L{list} of L{str}
+        """
+
+        remaining = list(changed)
+        while remaining:
+            yield self.updateBatch(
+                migrationRecord.localResourceID,
+                migrationRecord.remoteResourceID,
+                remaining[:self.BATCH_SIZE],
+            )
+            del remaining[:self.BATCH_SIZE]
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def updateBatch(self, txn, localID, remoteID, remaining):
+        """
+        Update a bunch of object resources from the specified remote calendar.
+
+        @param txn: transaction to use
+        @type txn: L{CommonStoreTransaction}
+        @param localID: id of the local calendar to sync
+        @type localID: L{int}
+        @param remoteID: id of the remote calendar to sync with
+        @type remoteID: L{int}
+        @param remaining: object resource names to update
+        @type remaining: L{list} of L{str}
+        """
+
+        # Get remote objects
+        remote_home = yield self._remoteHome(txn)
+        remote_calendar = yield remote_home.childWithID(remoteID)
+        if remote_calendar is None:
+            returnValue(None)
+        remote_objects = yield remote_calendar.objectResourcesWithNames(remaining)
+        remote_objects = dict([(obj.name(), obj) for obj in remote_objects])
+
+        # Get local objects
+        local_home = yield self._localHome(txn)
+        local_calendar = yield local_home.childWithID(localID)
+        local_objects = yield local_calendar.objectResourcesWithNames(remaining)
+        local_objects = dict([(obj.name(), obj) for obj in local_objects])
+
+        # Sync the ones that still exist. Set txn._migrating and stuff the remote md5
+        # value onto the component being stored so that the md5 value stored locally
+        # matches the remote one (which should reduce the need for a client to resync
+        # its data when the account moves from one pod to the other).
+        txn._migrating = True
+        for obj_name in remote_objects.keys():
+            remote_object = remote_objects[obj_name]
+            remote_data = yield remote_object.component()
+            remote_data.md5 = remote_object.md5()
+            if obj_name in local_objects:
+                local_object = yield local_objects[obj_name]
+                yield local_object._setComponentInternal(remote_data, internal_state=ComponentUpdateState.RAW)
+                del local_objects[obj_name]
+                log_op = "Updated"
+            else:
+                local_object = yield local_calendar._createCalendarObjectWithNameInternal(obj_name, remote_data, internal_state=ComponentUpdateState.RAW)
+
+                # Maintain the mapping from the remote to local id. Note that this mapping never changes as the ids on both
+                # sides are immutable - though it may get deleted if the local object is removed during sync (via a cascade).
+                yield CalendarObjectMigrationRecord.create(
+                    txn,
+                    calendarHomeResourceID=self.homeId,
+                    remoteResourceID=remote_object.id(),
+                    localResourceID=local_object.id()
+                )
+                log_op = "Created"
+
+            # Sync meta-data such as schedule object, schedule tags, access mode etc
+            yield local_object.copyMetadata(remote_object)
+            self.accounting("  {} calendar object local-id={}, remote-id={}.".format(log_op, local_object.id(), remote_object.id()))
+
+        # Purge the ones that remain
+        for local_object in local_objects.values():
+            yield local_object.purge()
+            self.accounting("  Purged calendar object local-id={}.".format(local_object.id()))
+
+
+    @inlineCallbacks
+    def syncAttachments(self):
+        """
+        Sync attachments (both metadata and actual attachment data) for the home being migrated.
+        """
+
+        self.accounting("Starting: syncAttachments...")
+
+        # Two steps - sync the table first in one txn, then sync each attachment's data
+        changed_ids, removed_ids = yield self.syncAttachmentTable()
+        self.accounting("  Attachments changed={}, removed={}".format(len(changed_ids), len(removed_ids)))
+
+        for local_id in changed_ids:
+            yield self.syncAttachmentData(local_id)
+
+        self.accounting("Completed: syncAttachments.")
+
+        returnValue((changed_ids, removed_ids,))
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def syncAttachmentTable(self, txn):
+        """
+        Sync the ATTACHMENT table data for the home being migrated. Return the list of local attachment ids that
+        now need their attachment data sync'd from the server.
+        """
+
+        remote_home = yield self._remoteHome(txn)
+        rattachments = yield remote_home.getAllAttachments()
+        rmap = dict([(attachment.id(), attachment) for attachment in rattachments])
+
+        local_home = yield self._localHome(txn)
+        lattachments = yield local_home.getAllAttachments()
+        lmap = dict([(attachment.id(), attachment) for attachment in lattachments])
+
+        # Figure out the differences
+        records = yield AttachmentMigrationRecord.querysimple(
+            txn, calendarHomeResourceID=self.homeId
+        )
+        mapping = dict([(record.remoteResourceID, record) for record in records])
+
+        # Removed - remove attachment and migration state
+        removed = set(mapping.keys()) - set(rmap.keys())
+        for remove_id in removed:
+            record = mapping[remove_id]
+            att = yield ManagedAttachment.load(txn, None, None, attachmentID=record.localResourceID)
+            if att:
+                yield att.remove(adjustQuota=False)
+            else:
+                yield record.delete()
+
+        # Track which ones need attachment data sync'd over
+        data_ids = set()
+
+        # Added - add new attachment and migration state
+        added = set(rmap.keys()) - set(mapping.keys())
+        for added_id in added:
+            attachment = yield ManagedAttachment._create(txn, None, self.homeId)
+            yield AttachmentMigrationRecord.create(
+                txn,
+                calendarHomeResourceID=self.homeId,
+                remoteResourceID=added_id,
+                localResourceID=attachment.id(),
+            )
+            data_ids.add(attachment.id())
+
+        # Possible updates - check for md5 change and sync
+        updates = set(mapping.keys()) & set(rmap.keys())
+        for updated_id in updates:
+            local_id = mapping[updated_id].localResourceID
+            if rmap[updated_id].md5() != lmap[local_id].md5():
+                yield lmap[local_id].copyRemote(rmap[updated_id])
+                data_ids.add(local_id)
+
+        returnValue((data_ids, removed,))
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def syncAttachmentData(self, txn, local_id):
+        """
+        Sync the attachment data for the home being migrated.
+        """
+
+        remote_home = yield self._remoteHome(txn)
+        local_home = yield self._localHome(txn)
+        attachment = yield local_home.getAttachmentByID(local_id)
+        if attachment is None:
+            returnValue(None)
+
+        records = yield AttachmentMigrationRecord.querysimple(
+            txn, calendarHomeResourceID=self.homeId, localResourceID=local_id
+        )
+        if records:
+            # Read the data from the conduit
+            yield remote_home.readAttachmentData(records[0].remoteResourceID, attachment)
+            self.accounting("  Read attachment local-id={0.localResourceID}, remote-id={0.remoteResourceID}".format(records[0]))
+
+
+    @inlineCallbacks
+    def linkAttachments(self):
+        """
+        Link attachments to the calendar objects they belong to.
+        """
+
+        self.accounting("Starting: linkAttachments...")
+
+        # Get the map of links for the remote home
+        links = yield self.getAttachmentLinks()
+        self.accounting("  Linking {} attachments".format(len(links)))
+
+        # Get remote->local ID mappings
+        attachmentIDMap, objectIDMap = yield self.getAttachmentMappings()
+
+        # Batch setting links for the local home
+        len_links = len(links)
+        while links:
+            yield self.makeAttachmentLinks(links[:50], attachmentIDMap, objectIDMap)
+            links = links[50:]
+
+        self.accounting("Completed: linkAttachments.")
+
+        returnValue(len_links)
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def getAttachmentLinks(self, txn):
+        """
+        Get the remote link information.
+        """
+
+        # Get the map of links for the remote home
+        remote_home = yield self._remoteHome(txn)
+        links = yield remote_home.getAttachmentLinks()
+        returnValue(links)
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def getAttachmentMappings(self, txn):
+        """
+        Get the remote-to-local ID mappings for attachments and calendar objects.
+        """
+
+        # Get migration mappings
+        records = yield AttachmentMigrationRecord.querysimple(
+            txn, calendarHomeResourceID=self.homeId
+        )
+        attachmentIDMap = dict([(record.remoteResourceID, record) for record in records])
+
+        records = yield CalendarObjectMigrationRecord.querysimple(
+            txn, calendarHomeResourceID=self.homeId
+        )
+        objectIDMap = dict([(record.remoteResourceID, record) for record in records])
+
+        returnValue((attachmentIDMap, objectIDMap,))
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def makeAttachmentLinks(self, txn, links, attachmentIDMap, objectIDMap):
+        """
+        Map remote links to local links.
+        """
+
+        for link in links:
+            # Remote link has an invalid txn at this point so replace that first
+            link._txn = txn
+
+            # Now re-map the attachment ID and calendar_object_id to the local ones
+            link._attachmentID = attachmentIDMap[link._attachmentID].localResourceID
+            link._calendarObjectID = objectIDMap[link._calendarObjectID].localResourceID
+
+            yield link.insert()
+
+
+    @inlineCallbacks
+    def delegateReconcile(self):
+        """
+        Sync the delegate assignments from the remote home to the local home. We won't use
+        a fake directory UID locally.
+        """
+
+        self.accounting("Starting: delegateReconcile...")
+
+        yield self.individualDelegateReconcile()
+        yield self.groupDelegateReconcile()
+        yield self.externalDelegateReconcile()
+
+        self.accounting("Completed: delegateReconcile.")
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def individualDelegateReconcile(self, txn):
+        """
+        Sync the delegate assignments from the remote home to the local home. We won't use
+        a fake directory UID locally.
+        """
+        remote_records = yield txn.dumpIndividualDelegatesExternal(self.record)
+        for record in remote_records:
+            yield record.insert(txn)
+
+        self.accounting("  Found {} individual delegates".format(len(remote_records)))
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def groupDelegateReconcile(self, txn):
+        """
+        Sync the delegate assignments from the remote home to the local home. We won't use
+        a fake directory UID locally.
+        """
+        remote_records = yield txn.dumpGroupDelegatesExternal(self.record)
+        for delegator, group in remote_records:
+            # We need to make sure the group exists locally first and map the groupID to the local one
+            local_group = yield txn.groupByUID(group.groupUID)
+            delegator.groupID = local_group.groupID
+            yield delegator.insert(txn)
+
+        self.accounting("  Found {} group delegates".format(len(remote_records)))
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def externalDelegateReconcile(self, txn):
+        """
+        Sync the external delegate assignments from the remote home to the local home. We won't use
+        a fake directory UID locally.
+        """
+        remote_records = yield txn.dumpExternalDelegatesExternal(self.record)
+        for record in remote_records:
+            yield record.insert(txn)
+
+        self.accounting("  Found {} external delegates".format(len(remote_records)))
+
+
+    @inlineCallbacks
+    def groupAttendeeReconcile(self):
+        """
+        Sync the remote group attendee links to the local store.
+        """
+
+        self.accounting("Starting: groupAttendeeReconcile...")
+
+        # Get remote data and local mapping information
+        remote_group_attendees, objectIDMap = yield self.groupAttendeeData()
+        self.accounting("  Found {} group attendees".format(len(remote_group_attendees)))
+
+        # Map each result to a local resource (in batches)
+        number_of_links = len(remote_group_attendees)
+        while remote_group_attendees:
+            yield self.groupAttendeeProcess(remote_group_attendees[:50], objectIDMap)
+            remote_group_attendees = remote_group_attendees[50:]
+
+        self.accounting("Completed: groupAttendeeReconcile.")
+
+        returnValue(number_of_links)
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def groupAttendeeData(self, txn):
+        """
+        Fetch the remote group attendee records and the remote-to-local object ID map.
+        """
+        remote_home = yield self._remoteHome(txn)
+        remote_group_attendees = yield remote_home.getAllGroupAttendees()
+
+        # Get all remote->local object maps
+        records = yield CalendarObjectMigrationRecord.querysimple(
+            txn, calendarHomeResourceID=self.homeId
+        )
+        objectIDMap = dict([(record.remoteResourceID, record.localResourceID) for record in records])
+
+        returnValue((remote_group_attendees, objectIDMap,))
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def groupAttendeeProcess(self, txn, results, objectIDMap):
+        """
+        Insert a batch of group attendee links into the local store, mapping remote ids to local ones.
+        """
+        # Map each result to a local resource
+        for groupAttendee, group in results:
+            local_group = yield txn.groupByUID(group.groupUID)
+            groupAttendee.groupID = local_group.groupID
+            try:
+                groupAttendee.resourceID = objectIDMap[groupAttendee.resourceID]
+            except KeyError:
+                continue
+            yield groupAttendee.insert(txn)
+
+
+    @inlineCallbacks
+    def notificationsReconcile(self):
+        """
+        Sync all the existing L{NotificationObject} resources from the remote store.
+        """
+
+        self.accounting("Starting: notificationsReconcile...")
+        records = yield self.notificationRecords()
+        self.accounting("  Found {} notifications".format(len(records)))
+
+        # Batch setting resources for the local home
+        len_records = len(records)
+        while records:
+            yield self.makeNotifications(records[:50])
+            records = records[50:]
+
+        self.accounting("Completed: notificationsReconcile.")
+
+        returnValue(len_records)
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def notificationRecords(self, txn):
+        """
+        Get all the existing L{NotificationObjectRecord}s from the remote store.
+        """
+
+        notifications = yield self._remoteNotificationsHome(txn)
+        records = yield notifications.notificationObjectRecords()
+        for record in records:
+            # This needs to be reset when added to the local store
+            del record.resourceID
+
+            # Map the remote id to the local one.
+            record.notificationHomeResourceID = notifications.id()
+
+        returnValue(records)
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def makeNotifications(self, txn, records):
+        """
+        Create L{NotificationObjectRecord} records in the local store.
+        """
+
+        notifications = yield NotificationCollection.notificationsWithUID(txn, self.diruid, status=_HOME_STATUS_MIGRATING, create=True)
+        for record in records:
+            # Do this via the "write" API so that sync revisions are updated properly, rather than just
+            # inserting the records directly.
+            notification = yield notifications.writeNotificationObject(record.notificationUID, record.notificationType, record.notificationData)
+            self.accounting("  Added notification local-id={}.".format(notification.id()))
+
+
+    @inlineCallbacks
+    def sharedByCollectionsReconcile(self):
+        """
+        Sync all the collections shared by the migrating user from the remote store. We will do this one calendar at a time since
+        there could be a large number of sharees per calendar.
+
+        Here is the logic we need: first assume we have three pods: A, B, C, and we are migrating a user from A->B. We start
+        with a set of shares (X -> Y - where X is the sharer and Y the sharee) on pod A. We migrate the sharer to pod B. We
+        then need to have a set of bind records on pod B, and adjust the set on pod A. Note that no changes are required on pod C.
+
+        Original      |  Changes                     | Changes
+        Shares        |  on B                        | on A
+        --------------|------------------------------|---------------------
+        A -> A        |  B -> A (new)                | B -> A (modify existing)
+        A -> B        |  B -> B (modify existing)    | (removed)
+        A -> C        |  B -> C (new)                | (removed)
+        """
+
+        self.accounting("Starting: sharedByCollectionsReconcile...")
+        calendars = yield self.getSyncState()
+
+        len_records = 0
+        for calendar in calendars.values():
+            records, bindUID = yield self.sharedByCollectionRecords(calendar.remoteResourceID, calendar.localResourceID)
+            if not records:
+                continue
+            records = records.items()
+
+            self.accounting("  Found shared by calendar local-id={0.localResourceID}, remote-id={0.remoteResourceID} with {1} sharees".format(
+                calendar, len(records),
+            ))
+
+            # Batch setting resources for the local home
+            len_records += len(records)
+            while records:
+                yield self.makeSharedByCollections(records[:50], calendar.localResourceID)
+                records = records[50:]
+
+            # Get groups from remote pod
+            yield self.syncGroupSharees(calendar.remoteResourceID, calendar.localResourceID)
+
+            # Update the remote pod to switch over the shares
+            yield self.updatedRemoteSharedByCollections(calendar.remoteResourceID, bindUID)
+
+        self.accounting("Completed: sharedByCollectionsReconcile.")
+
+        returnValue(len_records)
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def sharedByCollectionRecords(self, txn, remote_id, local_id):
+        """
+        Get all the existing L{CalendarBindRecord}s from the remote store. Also make sure a
+        bindUID exists for the local calendar.
+        """
+
+        remote_home = yield self._remoteHome(txn)
+        remote_calendar = yield remote_home.childWithID(remote_id)
+        records = yield remote_calendar.sharingBindRecords()
+
+        # Check bindUID
+        local_records = yield CalendarBindRecord.querysimple(
+            txn,
+            calendarHomeResourceID=self.homeId,
+            calendarResourceID=local_id,
+        )
+        if records and not local_records[0].bindUID:
+            yield local_records[0].update(bindUID=str(uuid4()))
+
+        returnValue((records, local_records[0].bindUID,))
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def makeSharedByCollections(self, txn, records, calendar_id):
+        """
+        Create L{CalendarBindRecord} records in the local store.
+        """
+
+        for shareeUID, record in records:
+            shareeHome = yield txn.calendarHomeWithUID(shareeUID, create=True)
+
+            # First look for an existing record that could be present if the migrating user had
+            # previously shared with this sharee as a cross-pod share
+            oldrecord = yield CalendarBindRecord.querysimple(
+                txn,
+                calendarHomeResourceID=shareeHome.id(),
+                calendarResourceName=record.calendarResourceName,
+            )
+
+            # FIXME: need to figure out sync-token and bind revision changes
+
+            if oldrecord:
+                # Point old record to the new local calendar being shared
+                yield oldrecord[0].update(
+                    calendarResourceID=calendar_id,
+                    bindRevision=0,
+                )
+                self.accounting("    Updating existing sharee {}".format(shareeHome.uid()))
+            else:
+                # Map the record resource ids and insert a new record
+                record.calendarHomeResourceID = shareeHome.id()
+                record.calendarResourceID = calendar_id
+                record.bindRevision = 0
+                yield record.insert(txn)
+                self.accounting("    Adding new sharee {}".format(shareeHome.uid()))
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def syncGroupSharees(self, txn, remote_id, local_id):
+        """
+        Sync the group sharees for a remote share.
+        """
+        remote_home = yield self._remoteHome(txn)
+        remote_calendar = yield remote_home.childWithID(remote_id)
+        results = yield remote_calendar.groupSharees()
+        groups = dict([(group.groupID, group.groupUID,) for group in results["groups"]])
+        for share in results["sharees"]:
+            local_group = yield txn.groupByUID(groups[share.groupID])
+            share.groupID = local_group.groupID
+            share.calendarID = local_id
+            yield share.insert(txn)
+            self.accounting("    Adding group sharee {}".format(local_group.groupUID))
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def updatedRemoteSharedByCollections(self, txn, remote_id, bindUID):
+        """
+        Have the remote store update its existing L{CalendarBindRecord}s to point at the migrated calendar (identified by bindUID).
+        """
+
+        remote_home = yield self._remoteHome(txn)
+        remote_calendar = yield remote_home.childWithID(remote_id)
+        records = yield remote_calendar.migrateBindRecords(bindUID)
+        self.accounting("    Updating remote records")
+        returnValue(records)
+
+
+    @inlineCallbacks
+    def sharedToCollectionsReconcile(self):
+        """
+        Sync all the collections shared to the migrating user from the remote store.
+
+        Here is the logic we need: first assume we have three pods: A, B, C, and we are migrating a user from A->B. We start
+        with a set of shares (X -> Y - where X is the sharer and Y the sharee) with the sharee on pod A. We migrate the sharee to pod B. We
+        then need to have a set of bind records on pod B, and adjust the set on pod A. Note that no changes are required on pod C.
+
+        Original      |  Changes                     | Changes
+        Shares        |  on B                        | on A
+        --------------|------------------------------|---------------------
+        A -> A        |  A -> B (new)                | A -> B (modify existing)
+        B -> A        |  B -> B (modify existing)    | (removed)
+        C -> A        |  C -> B (new)                | (removed)
+        """
+
+        self.accounting("Starting: sharedToCollectionsReconcile...")
+
+        records = yield self.sharedToCollectionRecords()
+        records = records.items()
+        len_records = len(records)
+        self.accounting("  Found {} shared to collections".format(len_records))
+
+        while records:
+            yield self.makeSharedToCollections(records[:50])
+            records = records[50:]
+
+        self.accounting("Completed: sharedToCollectionsReconcile.")
+
+        returnValue(len_records)
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def sharedToCollectionRecords(self, txn):
+        """
+        Get the names and sharer UIDs for remote shared calendars.
+        """
+
+        # List of calendars from the remote side
+        home = yield self._remoteHome(txn)
+        if home is None:
+            returnValue(None)
+        results = yield home.sharedToBindRecords()
+        returnValue(results)
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def makeSharedToCollections(self, txn, records):
+        """
+        Create L{CalendarBindRecord} records in the local store.
+        """
+
+        for sharerUID, (shareeRecord, ownerRecord, metadataRecord) in records:
+            sharerHome = yield txn.calendarHomeWithUID(sharerUID, create=True)
+
+            # We need to figure out the right thing to do based on whether the sharer is local to this pod
+            # (the one where the migrated user will be hosted) vs located on another pod
+
+            if sharerHome.normal():
+                # First look for an existing record that must be present if the migrating user had
+                # previously been shared with by this sharer
+                oldrecord = yield CalendarBindRecord.querysimple(
+                    txn,
+                    calendarResourceName=shareeRecord.calendarResourceName,
+                )
+                if len(oldrecord) == 1:
+                    # Point old record to the new local calendar home
+                    yield oldrecord[0].update(
+                        calendarHomeResourceID=self.homeId,
+                    )
+                    self.accounting("  Updated existing local sharer record {}".format(sharerHome.uid()))
+                else:
+                    raise AssertionError("An existing share must be present")
+            else:
+                # We have an external user. That sharer may have already shared the calendar with some other user
+                # on this pod, in which case there is already a CALENDAR table entry for it, and we need the
+                # resource ID from that to use in the new CALENDAR_BIND record we create. If a pre-existing share
+                # is not present, then we have to create the CALENDAR table entry and associated pieces
+
+                remote_id = shareeRecord.calendarResourceID
+
+                # Look for pre-existing share with the same external ID
+                oldrecord = yield CalendarBindRecord.querysimple(
+                    txn,
+                    calendarHomeResourceID=sharerHome.id(),
+                    bindUID=ownerRecord.bindUID,
+                )
+                if oldrecord:
+                    # Use the calendar resource id from the pre-existing share
+                    calendar_id = oldrecord.calendarResourceID
+                    log_op = "Updated"
+                else:
+                    sharerView = yield sharerHome.createCollectionForExternalShare(
+                        ownerRecord.calendarResourceName,
+                        ownerRecord.bindUID,
+                        metadataRecord.supportedComponents,
+                    )
+                    calendar_id = sharerView.id()
+                    log_op = "Created"
+
+                shareeRecord.calendarHomeResourceID = self.homeId
+                shareeRecord.calendarResourceID = calendar_id
+                shareeRecord.bindRevision = 0
+                yield shareeRecord.insert(txn)
+                self.accounting("  {} remote sharer record {}".format(log_op, sharerHome.uid()))
+
+                yield self.updatedRemoteSharedToCollection(remote_id, txn=txn)
+
+
+    @inTransactionWrapper
+    @inlineCallbacks
+    def updatedRemoteSharedToCollection(self, txn, remote_id):
+        """
+        Have the remote store update its existing L{CalendarBindRecord}s for a calendar shared to the migrating user.
+        """
+
+        remote_home = yield self._remoteHome(txn)
+        remote_calendar = yield remote_home.childWithID(remote_id)
+        records = yield remote_calendar.migrateBindRecords(None)
+        self.accounting("    Updating remote records")
+        returnValue(records)

Deleted: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/sync_metadata.py
===================================================================
--- CalendarServer/trunk/txdav/common/datastore/podding/migration/sync_metadata.py	2015-03-10 15:32:00 UTC (rev 14551)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/sync_metadata.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -1,58 +0,0 @@
-##
-# Copyright (c) 2015 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-from twext.enterprise.dal.record import Record, fromTable
-from txdav.common.datastore.sql_tables import schema
-from twext.enterprise.dal.syntax import Parameter, Delete
-from twisted.internet.defer import inlineCallbacks
-
-"""
-Module that manages store-level metadata objects used during the migration process.
-"""
-
-class CalendarMigrationRecord(Record, fromTable(schema.CALENDAR_MIGRATION)):
-    """
-    @DynamicAttrs
-    L{Record} for L{schema.CALENDAR_MIGRATION}.
-    """
-
-    @classmethod
-    @inlineCallbacks
-    def deleteremotes(cls, txn, homeid, remotes):
-        return Delete(
-            From=cls.table,
-            Where=(cls.calendarHomeResourceID == homeid).And(
-                cls.remoteResourceID.In(Parameter("remotes", len(remotes)))
-            ),
-        ).on(txn, remotes=remotes)
-
-
-
-class CalendarObjectMigrationRecord(Record, fromTable(schema.CALENDAR_OBJECT_MIGRATION)):
-    """
-    @DynamicAttrs
-    L{Record} for L{schema.CALENDAR_OBJECT_MIGRATION}.
-    """
-    pass
-
-
-
-class AttachmentMigrationRecord(Record, fromTable(schema.ATTACHMENT_MIGRATION)):
-    """
-    @DynamicAttrs
-    L{Record} for L{schema.ATTACHMENT_MIGRATION}.
-    """
-    pass

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/sync_metadata.py (from rev 14551, CalendarServer/trunk/txdav/common/datastore/podding/migration/sync_metadata.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/sync_metadata.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/sync_metadata.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,58 @@
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from twext.enterprise.dal.record import Record, fromTable
+from twext.enterprise.dal.syntax import Parameter, Delete
+from txdav.common.datastore.sql_tables import schema
+from twisted.internet.defer import inlineCallbacks
+
+"""
+Module that manages store-level metadata objects used during the migration process.
+"""
+
+class CalendarMigrationRecord(Record, fromTable(schema.CALENDAR_MIGRATION)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.CALENDAR_MIGRATION}.
+    """
+
+    @classmethod
+    @inlineCallbacks
+    def deleteremotes(cls, txn, homeid, remotes):
+        return Delete(
+            From=cls.table,
+            Where=(cls.calendarHomeResourceID == homeid).And(
+                cls.remoteResourceID.In(Parameter("remotes", len(remotes)))
+            ),
+        ).on(txn, remotes=remotes)
+
+
+
+class CalendarObjectMigrationRecord(Record, fromTable(schema.CALENDAR_OBJECT_MIGRATION)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.CALENDAR_OBJECT_MIGRATION}.
+    """
+    pass
+
+
+
+class AttachmentMigrationRecord(Record, fromTable(schema.ATTACHMENT_MIGRATION)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.ATTACHMENT_MIGRATION}.
+    """
+    pass

Deleted: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/__init__.py
===================================================================
--- CalendarServer/trunk/txdav/common/datastore/podding/migration/test/__init__.py	2015-03-10 15:32:00 UTC (rev 14551)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/__init__.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -1,15 +0,0 @@
-##
-# Copyright (c) 2015 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/__init__.py (from rev 14551, CalendarServer/trunk/txdav/common/datastore/podding/migration/test/__init__.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/__init__.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/__init__.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,15 @@
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##

Deleted: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/augments.xml
===================================================================
--- CalendarServer/trunk/txdav/common/datastore/podding/migration/test/accounts/augments.xml	2015-03-10 15:32:00 UTC (rev 14551)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/augments.xml	2015-03-10 20:42:34 UTC (rev 14555)
@@ -1,142 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>
-
-<!--
-Copyright (c) 2009-2015 Apple Inc. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
- -->
-
-<!DOCTYPE augments SYSTEM "../../../conf/auth/augments.dtd">
-
-<augments>
-	<record>
-	    <uid>user01</uid>
-	    <server-id>A</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>user02</uid>
-	    <server-id>A</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>user03</uid>
-	    <server-id>A</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>user04</uid>
-	    <server-id>A</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>user05</uid>
-	    <server-id>A</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>user06</uid>
-	    <server-id>A</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>user07</uid>
-	    <server-id>A</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>user08</uid>
-	    <server-id>A</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>user09</uid>
-	    <server-id>A</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>user10</uid>
-	    <server-id>A</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>puser01</uid>
-	    <server-id>B</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>puser02</uid>
-	    <server-id>B</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>puser03</uid>
-	    <server-id>B</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>puser04</uid>
-	    <server-id>B</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>puser05</uid>
-	    <server-id>B</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>puser06</uid>
-	    <server-id>B</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>puser07</uid>
-	    <server-id>B</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>puser08</uid>
-	    <server-id>B</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>puser09</uid>
-	    <server-id>B</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-	<record>
-	    <uid>puser10</uid>
-	    <server-id>B</server-id>
-	    <enable-calendar>true</enable-calendar>
-	    <enable-addressbook>true</enable-addressbook>
-	</record>
-</augments>

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/augments.xml (from rev 14551, CalendarServer/trunk/txdav/common/datastore/podding/migration/test/accounts/augments.xml)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/augments.xml	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/augments.xml	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,142 @@
+<?xml version="1.0" encoding="utf-8"?>
+
+<!--
+Copyright (c) 2009-2015 Apple Inc. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+ -->
+
+<!DOCTYPE augments SYSTEM "../../../conf/auth/augments.dtd">
+
+<augments>
+	<record>
+	    <uid>user01</uid>
+	    <server-id>A</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>user02</uid>
+	    <server-id>A</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>user03</uid>
+	    <server-id>A</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>user04</uid>
+	    <server-id>A</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>user05</uid>
+	    <server-id>A</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>user06</uid>
+	    <server-id>A</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>user07</uid>
+	    <server-id>A</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>user08</uid>
+	    <server-id>A</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>user09</uid>
+	    <server-id>A</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>user10</uid>
+	    <server-id>A</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>puser01</uid>
+	    <server-id>B</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>puser02</uid>
+	    <server-id>B</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>puser03</uid>
+	    <server-id>B</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>puser04</uid>
+	    <server-id>B</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>puser05</uid>
+	    <server-id>B</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>puser06</uid>
+	    <server-id>B</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>puser07</uid>
+	    <server-id>B</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>puser08</uid>
+	    <server-id>B</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>puser09</uid>
+	    <server-id>B</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+	<record>
+	    <uid>puser10</uid>
+	    <server-id>B</server-id>
+	    <enable-calendar>true</enable-calendar>
+	    <enable-addressbook>true</enable-addressbook>
+	</record>
+</augments>
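
As a side note, the pod layout encoded in this fixture (user01-user10 on pod A, puser01-puser10 on pod B, all with calendar and addressbook enabled) can be sanity-checked with a short standard-library snippet. The file path is an assumption and the snippet is illustrative only, not part of the changeset:

    import xml.etree.ElementTree as ET
    from collections import defaultdict

    # Group the augment records by their <server-id> (pod) element.
    byPod = defaultdict(list)
    for record in ET.parse("augments.xml").getroot().findall("record"):
        byPod[record.findtext("server-id")].append(record.findtext("uid"))

    assert sorted(byPod) == ["A", "B"]
    assert all(uid.startswith("puser") for uid in byPod["B"])
    assert not any(uid.startswith("puser") for uid in byPod["A"])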

Deleted: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/groupAccounts.xml
===================================================================
--- CalendarServer/trunk/txdav/common/datastore/podding/migration/test/accounts/groupAccounts.xml	2015-03-10 15:32:00 UTC (rev 14551)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/groupAccounts.xml	2015-03-10 20:42:34 UTC (rev 14555)
@@ -1,211 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>
-
-<!--
-Copyright (c) 2006-2015 Apple Inc. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
- -->
-
-<!DOCTYPE accounts SYSTEM "accounts.dtd">
-
-<directory realm="Test Realm">
-	<record type="user">
-	    <short-name>user01</short-name>
-	    <uid>user01</uid>
-	    <guid>10000000-0000-0000-0000-000000000001</guid>
-	    <password>user01</password>
-	    <full-name>User 01</full-name>
-	    <email>user01 at example.com</email>
-	</record>
-	<record type="user">
-	    <short-name>user02</short-name>
-	    <uid>user02</uid>
-	    <guid>10000000-0000-0000-0000-000000000002</guid>
-	    <password>user02</password>
-	    <full-name>User 02</full-name>
-	    <email>user02 at example.com</email>
-	</record>
-	<record type="user">
-	    <short-name>user03</short-name>
-	    <uid>user03</uid>
-	    <guid>10000000-0000-0000-0000-000000000003</guid>
-	    <password>user03</password>
-	    <full-name>User 03</full-name>
-	    <email>user03 at example.com</email>
-	</record>
-	<record type="user">
-	    <short-name>user04</short-name>
-	    <uid>user04</uid>
-	    <guid>10000000-0000-0000-0000-000000000004</guid>
-	    <password>user04</password>
-	    <full-name>User 04</full-name>
-	    <email>user04 at example.com</email>
-	</record>
-	<record type="user">
-	    <short-name>user05</short-name>
-	    <uid>user05</uid>
-	    <guid>10000000-0000-0000-0000-000000000005</guid>
-	    <password>user05</password>
-	    <full-name>User 05</full-name>
-	    <email>user05 at example.com</email>
-	</record>
-	<record type="user">
-	    <short-name>user06</short-name>
-	    <uid>user06</uid>
-	    <guid>10000000-0000-0000-0000-000000000006</guid>
-	    <password>user06</password>
-	    <full-name>User 06</full-name>
-	    <email>user06 at example.com</email>
-	</record>
-	<record type="user">
-	    <short-name>user07</short-name>
-	    <uid>user07</uid>
-	    <guid>10000000-0000-0000-0000-000000000007</guid>
-	    <password>user07</password>
-	    <full-name>User 07</full-name>
-	    <email>user07 at example.com</email>
-	</record>
-	<record type="user">
-	    <short-name>user08</short-name>
-	    <uid>user08</uid>
-	    <guid>10000000-0000-0000-0000-000000000008</guid>
-	    <password>user08</password>
-	    <full-name>User 08</full-name>
-	    <email>user08 at example.com</email>
-	</record>
-	<record type="user">
-	    <short-name>user09</short-name>
-	    <uid>user09</uid>
-	    <guid>10000000-0000-0000-0000-000000000009</guid>
-	    <password>user09</password>
-	    <full-name>User 09</full-name>
-	    <email>user09 at example.com</email>
-	</record>
-	<record type="user">
-	    <short-name>user10</short-name>
-	    <uid>user10</uid>
-	    <guid>10000000-0000-0000-0000-000000000010</guid>
-	    <password>user10</password>
-	    <full-name>User 10</full-name>
-	    <email>user10 at example.com</email>
-	</record>
-	<record type="group">
-	    <short-name>group01</short-name>
-	    <uid>group01</uid>
-	    <guid>20000000-0000-0000-0000-000000000001</guid>
-	    <full-name>Group 01</full-name>
-	    <email>group01 at example.com</email>
-	    <member-uid>user01</member-uid>
-	    <member-uid>puser01</member-uid>
-	</record>
-	<record type="group">
-	    <short-name>group02</short-name>
-	    <uid>group02</uid>
-	    <guid>20000000-0000-0000-0000-000000000002</guid>
-	    <full-name>Group 02</full-name>
-	    <email>group02 at example.com</email>
-	    <member-uid>user06</member-uid>
-	    <member-uid>user07</member-uid>
-	    <member-uid>user08</member-uid>
-	</record>
-	<record type="group">
-	    <short-name>group03</short-name>
-	    <uid>group03</uid>
-	    <guid>20000000-0000-0000-0000-000000000003</guid>
-	    <full-name>Group 03</full-name>
-	    <email>group03 at example.com</email>
-	    <member-uid>user07</member-uid>
-	    <member-uid>user08</member-uid>
-	    <member-uid>user09</member-uid>
-	</record>
-	<record type="group">
-	    <short-name>group04</short-name>
-	    <uid>group04</uid>
-	    <guid>20000000-0000-0000-0000-000000000004</guid>
-	    <full-name>Group 04</full-name>
-	    <email>group04 at example.com</email>
-	    <member-uid>group02</member-uid>
-	    <member-uid>group03</member-uid>
-	    <member-uid>user10</member-uid>
-	</record>
-	<record type="user">
-	    <uid>puser01</uid>
-	    <short-name>puser01</short-name>
-	    <password>puser01</password>
-	    <full-name>Puser 01</full-name>
-	    <email>puser01 at example.com</email>
-	</record>
-	<record type="user">
-	    <uid>puser02</uid>
-	    <short-name>puser02</short-name>
-	    <password>puser02</password>
-	    <full-name>Puser 02</full-name>
-	    <email>puser02 at example.com</email>
-	</record>
-	<record type="user">
-	    <uid>puser03</uid>
-	    <short-name>puser03</short-name>
-	    <password>puser03</password>
-	    <full-name>Puser 03</full-name>
-	    <email>puser03 at example.com</email>
-	</record>
-	<record type="user">
-	    <uid>puser04</uid>
-	    <short-name>puser04</short-name>
-	    <password>puser04</password>
-	    <full-name>Puser 04</full-name>
-	    <email>puser04 at example.com</email>
-	</record>
-	<record type="user">
-	    <uid>puser05</uid>
-	    <short-name>puser05</short-name>
-	    <password>puser05</password>
-	    <full-name>Puser 05</full-name>
-	    <email>puser05 at example.com</email>
-	</record>
-	<record type="user">
-	    <uid>puser06</uid>
-	    <short-name>puser06</short-name>
-	    <password>puser06</password>
-	    <full-name>Puser 06</full-name>
-	    <email>puser06 at example.com</email>
-	</record>
-	<record type="user">
-	    <uid>puser07</uid>
-	    <short-name>puser07</short-name>
-	    <password>puser07</password>
-	    <full-name>Puser 07</full-name>
-	    <email>puser07 at example.com</email>
-	</record>
-	<record type="user">
-	    <uid>puser08</uid>
-	    <short-name>puser08</short-name>
-	    <password>puser08</password>
-	    <full-name>Puser 08</full-name>
-	    <email>puser08 at example.com</email>
-	</record>
-	<record type="user">
-	    <uid>puser09</uid>
-	    <short-name>puser09</short-name>
-	    <password>puser09</password>
-	    <full-name>Puser 09</full-name>
-	    <email>puser09 at example.com</email>
-	</record>
-	<record type="user">
-	    <uid>puser10</uid>
-	    <short-name>puser10</short-name>
-	    <password>puser10</password>
-	    <full-name>Puser 10</full-name>
-	    <email>puser10 at example.com</email>
-	</record>
-</directory>

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/groupAccounts.xml (from rev 14551, CalendarServer/trunk/txdav/common/datastore/podding/migration/test/accounts/groupAccounts.xml)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/groupAccounts.xml	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/accounts/groupAccounts.xml	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,211 @@
+<?xml version="1.0" encoding="utf-8"?>
+
+<!--
+Copyright (c) 2006-2015 Apple Inc. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+ -->
+
+<!DOCTYPE accounts SYSTEM "accounts.dtd">
+
+<directory realm="Test Realm">
+	<record type="user">
+	    <short-name>user01</short-name>
+	    <uid>user01</uid>
+	    <guid>10000000-0000-0000-0000-000000000001</guid>
+	    <password>user01</password>
+	    <full-name>User 01</full-name>
+	    <email>user01 at example.com</email>
+	</record>
+	<record type="user">
+	    <short-name>user02</short-name>
+	    <uid>user02</uid>
+	    <guid>10000000-0000-0000-0000-000000000002</guid>
+	    <password>user02</password>
+	    <full-name>User 02</full-name>
+	    <email>user02 at example.com</email>
+	</record>
+	<record type="user">
+	    <short-name>user03</short-name>
+	    <uid>user03</uid>
+	    <guid>10000000-0000-0000-0000-000000000003</guid>
+	    <password>user03</password>
+	    <full-name>User 03</full-name>
+	    <email>user03 at example.com</email>
+	</record>
+	<record type="user">
+	    <short-name>user04</short-name>
+	    <uid>user04</uid>
+	    <guid>10000000-0000-0000-0000-000000000004</guid>
+	    <password>user04</password>
+	    <full-name>User 04</full-name>
+	    <email>user04 at example.com</email>
+	</record>
+	<record type="user">
+	    <short-name>user05</short-name>
+	    <uid>user05</uid>
+	    <guid>10000000-0000-0000-0000-000000000005</guid>
+	    <password>user05</password>
+	    <full-name>User 05</full-name>
+	    <email>user05 at example.com</email>
+	</record>
+	<record type="user">
+	    <short-name>user06</short-name>
+	    <uid>user06</uid>
+	    <guid>10000000-0000-0000-0000-000000000006</guid>
+	    <password>user06</password>
+	    <full-name>User 06</full-name>
+	    <email>user06 at example.com</email>
+	</record>
+	<record type="user">
+	    <short-name>user07</short-name>
+	    <uid>user07</uid>
+	    <guid>10000000-0000-0000-0000-000000000007</guid>
+	    <password>user07</password>
+	    <full-name>User 07</full-name>
+	    <email>user07 at example.com</email>
+	</record>
+	<record type="user">
+	    <short-name>user08</short-name>
+	    <uid>user08</uid>
+	    <guid>10000000-0000-0000-0000-000000000008</guid>
+	    <password>user08</password>
+	    <full-name>User 08</full-name>
+	    <email>user08 at example.com</email>
+	</record>
+	<record type="user">
+	    <short-name>user09</short-name>
+	    <uid>user09</uid>
+	    <guid>10000000-0000-0000-0000-000000000009</guid>
+	    <password>user09</password>
+	    <full-name>User 09</full-name>
+	    <email>user09 at example.com</email>
+	</record>
+	<record type="user">
+	    <short-name>user10</short-name>
+	    <uid>user10</uid>
+	    <guid>10000000-0000-0000-0000-000000000010</guid>
+	    <password>user10</password>
+	    <full-name>User 10</full-name>
+	    <email>user10 at example.com</email>
+	</record>
+	<record type="group">
+	    <short-name>group01</short-name>
+	    <uid>group01</uid>
+	    <guid>20000000-0000-0000-0000-000000000001</guid>
+	    <full-name>Group 01</full-name>
+	    <email>group01 at example.com</email>
+	    <member-uid>user01</member-uid>
+	    <member-uid>puser01</member-uid>
+	</record>
+	<record type="group">
+	    <short-name>group02</short-name>
+	    <uid>group02</uid>
+	    <guid>20000000-0000-0000-0000-000000000002</guid>
+	    <full-name>Group 02</full-name>
+	    <email>group02 at example.com</email>
+	    <member-uid>user06</member-uid>
+	    <member-uid>user07</member-uid>
+	    <member-uid>user08</member-uid>
+	</record>
+	<record type="group">
+	    <short-name>group03</short-name>
+	    <uid>group03</uid>
+	    <guid>20000000-0000-0000-0000-000000000003</guid>
+	    <full-name>Group 03</full-name>
+	    <email>group03 at example.com</email>
+	    <member-uid>user07</member-uid>
+	    <member-uid>user08</member-uid>
+	    <member-uid>user09</member-uid>
+	</record>
+	<record type="group">
+	    <short-name>group04</short-name>
+	    <uid>group04</uid>
+	    <guid>20000000-0000-0000-0000-000000000004</guid>
+	    <full-name>Group 04</full-name>
+	    <email>group04 at example.com</email>
+	    <member-uid>group02</member-uid>
+	    <member-uid>group03</member-uid>
+	    <member-uid>user10</member-uid>
+	</record>
+	<record type="user">
+	    <uid>puser01</uid>
+	    <short-name>puser01</short-name>
+	    <password>puser01</password>
+	    <full-name>Puser 01</full-name>
+	    <email>puser01 at example.com</email>
+	</record>
+	<record type="user">
+	    <uid>puser02</uid>
+	    <short-name>puser02</short-name>
+	    <password>puser02</password>
+	    <full-name>Puser 02</full-name>
+	    <email>puser02 at example.com</email>
+	</record>
+	<record type="user">
+	    <uid>puser03</uid>
+	    <short-name>puser03</short-name>
+	    <password>puser03</password>
+	    <full-name>Puser 03</full-name>
+	    <email>puser03 at example.com</email>
+	</record>
+	<record type="user">
+	    <uid>puser04</uid>
+	    <short-name>puser04</short-name>
+	    <password>puser04</password>
+	    <full-name>Puser 04</full-name>
+	    <email>puser04 at example.com</email>
+	</record>
+	<record type="user">
+	    <uid>puser05</uid>
+	    <short-name>puser05</short-name>
+	    <password>puser05</password>
+	    <full-name>Puser 05</full-name>
+	    <email>puser05 at example.com</email>
+	</record>
+	<record type="user">
+	    <uid>puser06</uid>
+	    <short-name>puser06</short-name>
+	    <password>puser06</password>
+	    <full-name>Puser 06</full-name>
+	    <email>puser06 at example.com</email>
+	</record>
+	<record type="user">
+	    <uid>puser07</uid>
+	    <short-name>puser07</short-name>
+	    <password>puser07</password>
+	    <full-name>Puser 07</full-name>
+	    <email>puser07 at example.com</email>
+	</record>
+	<record type="user">
+	    <uid>puser08</uid>
+	    <short-name>puser08</short-name>
+	    <password>puser08</password>
+	    <full-name>Puser 08</full-name>
+	    <email>puser08 at example.com</email>
+	</record>
+	<record type="user">
+	    <uid>puser09</uid>
+	    <short-name>puser09</short-name>
+	    <password>puser09</password>
+	    <full-name>Puser 09</full-name>
+	    <email>puser09 at example.com</email>
+	</record>
+	<record type="user">
+	    <uid>puser10</uid>
+	    <short-name>puser10</short-name>
+	    <password>puser10</password>
+	    <full-name>Puser 10</full-name>
+	    <email>puser10 at example.com</email>
+	</record>
+</directory>
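
The group records in this fixture nest (group04 lists group02 and group03 as members), so group04 flattens to user06 through user10. A self-contained sketch of that expansion, using only data taken from the fixture above:

    # Membership as declared by the fixture's <member-uid> elements.
    groups = {
        "group01": ["user01", "puser01"],
        "group02": ["user06", "user07", "user08"],
        "group03": ["user07", "user08", "user09"],
        "group04": ["group02", "group03", "user10"],
    }

    def expand(uid):
        # Recursively flatten nested groups into a set of user uids.
        members = set()
        for member in groups.get(uid, []):
            if member in groups:
                members |= expand(member)
            else:
                members.add(member)
        return members

    assert expand("group04") == {"user06", "user07", "user08", "user09", "user10"}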

Deleted: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_home_sync.py
===================================================================
--- CalendarServer/trunk/txdav/common/datastore/podding/migration/test/test_home_sync.py	2015-03-10 15:32:00 UTC (rev 14551)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_home_sync.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -1,1307 +0,0 @@
-##
-# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-from pycalendar.datetime import DateTime
-from twext.enterprise.dal.syntax import Select
-from twext.enterprise.jobqueue import JobItem
-from twisted.internet import reactor
-from twisted.internet.defer import inlineCallbacks, returnValue
-from twisted.python.filepath import FilePath
-from twistedcaldav.config import config
-from twistedcaldav.ical import Component, normalize_iCalStr
-from txdav.caldav.datastore.sql import ManagedAttachment
-from txdav.caldav.datastore.sql_directory import GroupShareeRecord
-from txdav.common.datastore.podding.migration.home_sync import CrossPodHomeSync
-from txdav.common.datastore.podding.migration.sync_metadata import CalendarMigrationRecord, \
-    AttachmentMigrationRecord
-from txdav.common.datastore.podding.test.util import MultiStoreConduitTest
-from txdav.common.datastore.sql_directory import DelegateRecord, \
-    ExternalDelegateGroupsRecord, DelegateGroupsRecord, GroupsRecord
-from txdav.common.datastore.sql_notification import NotificationCollection
-from txdav.common.datastore.sql_tables import schema, _HOME_STATUS_EXTERNAL, \
-    _BIND_MODE_READ, _HOME_STATUS_MIGRATING, _HOME_STATUS_NORMAL, \
-    _HOME_STATUS_DISABLED
-from txdav.common.datastore.test.util import populateCalendarsFrom
-from txdav.who.delegates import Delegates
-from txweb2.http_headers import MimeType
-from txweb2.stream import MemoryStream
-from uuid import uuid4
-import json
-
-
-class TestCrossPodHomeSync(MultiStoreConduitTest):
-    """
-    Test that L{CrossPodHomeSync} works.
-    """
-
-    nowYear = {"now": DateTime.getToday().getYear()}
-
-    caldata1 = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid1
-DTSTART:{now:04d}0102T140000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-RRULE:FREQ=WEEKLY
-SUMMARY:instance
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**nowYear)
-
-    caldata1_changed = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid1
-DTSTART:{now:04d}0102T150000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-RRULE:FREQ=WEEKLY
-SUMMARY:instance changed
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**nowYear)
-
-    caldata2 = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid2
-DTSTART:{now:04d}0102T160000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-RRULE:FREQ=WEEKLY
-SUMMARY:instance
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**nowYear)
-
-    caldata3 = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid3
-DTSTART:{now:04d}0102T160000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-RRULE:FREQ=WEEKLY
-SUMMARY:instance
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**nowYear)
-
-    caldata4 = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid4
-DTSTART:{now:04d}0102T180000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-RRULE:FREQ=DAILY
-SUMMARY:instance
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**nowYear)
-
-
-    @inlineCallbacks
-    def test_remote_home(self):
-        """
-        Test that a remote home can be accessed.
-        """
-
-        home01 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        self.assertTrue(home01 is not None)
-        yield self.commitTransaction(0)
-
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.loadRecord()
-        home = yield syncer._remoteHome(self.theTransactionUnderTest(1))
-        self.assertTrue(home is not None)
-        self.assertEqual(home.id(), home01.id())
-        yield self.commitTransaction(1)
-
-
-    @inlineCallbacks
-    def test_prepare_home(self):
-        """
-        Test that L{prepareCalendarHome} creates a home.
-        """
-
-        # No home present
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        self.assertTrue(home is None)
-        yield self.commitTransaction(1)
-
-        yield syncer.prepareCalendarHome()
-
-        # Home is present
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        self.assertTrue(home is not None)
-        children = yield home.listChildren()
-        self.assertEqual(len(children), 0)
-        yield self.commitTransaction(1)
-
-
-    @inlineCallbacks
-    def test_prepare_home_external_txn(self):
-        """
-        Test that L{prepareCalendarHome} creates a home.
-        """
-
-        # No home present
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        self.assertTrue(home is None)
-        yield self.commitTransaction(1)
-
-        yield syncer.prepareCalendarHome(txn=self.theTransactionUnderTest(1))
-        yield self.commitTransaction(1)
-
-        # Home is present
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        self.assertTrue(home is not None)
-        children = yield home.listChildren()
-        self.assertEqual(len(children), 0)
-        yield self.commitTransaction(1)
-
-
-    @inlineCallbacks
-    def test_home_metadata(self):
-        """
-        Test that L{syncCalendarHomeMetaData} sync home metadata correctly.
-        """
-
-        alarm_event_timed = """BEGIN:VALARM
-ACTION:DISPLAY
-DESCRIPTION:alarm_event_timed
-TRIGGER:-PT10M
-END:VALARM
-"""
-        alarm_event_allday = """BEGIN:VALARM
-ACTION:DISPLAY
-DESCRIPTION:alarm_event_allday
-TRIGGER:-PT10M
-END:VALARM
-"""
-        alarm_todo_timed = """BEGIN:VALARM
-ACTION:DISPLAY
-DESCRIPTION:alarm_todo_timed
-TRIGGER:-PT10M
-END:VALARM
-"""
-        alarm_todo_allday = """BEGIN:VALARM
-ACTION:DISPLAY
-DESCRIPTION:alarm_todo_allday
-TRIGGER:-PT10M
-END:VALARM
-"""
-        availability = """BEGIN:VCALENDAR
-VERSION:2.0
-PRODID:-//Example Inc.//Example Calendar//EN
-BEGIN:VAVAILABILITY
-UID:20061005T133225Z-00001-availability at example.com
-DTSTART:20060101T000000Z
-DTEND:20060108T000000Z
-DTSTAMP:20061005T133225Z
-ORGANIZER:mailto:bernard at example.com
-BEGIN:AVAILABLE
-UID:20061005T133225Z-00001-A-availability at example.com
-DTSTART:20060102T090000Z
-DTEND:20060102T120000Z
-DTSTAMP:20061005T133225Z
-RRULE:FREQ=WEEKLY;BYDAY=MO,TU,WE,TH,FR
-SUMMARY:Weekdays from 9:00 to 12:00
-END:AVAILABLE
-END:VAVAILABILITY
-END:VCALENDAR
-"""
-
-        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        events0 = yield home0.createChildWithName("events")
-        yield home0.setDefaultCalendar(events0, "VEVENT")
-        yield home0.setDefaultAlarm(alarm_event_timed, True, True)
-        yield home0.setDefaultAlarm(alarm_event_allday, True, False)
-        yield home0.setDefaultAlarm(alarm_todo_timed, False, True)
-        yield home0.setDefaultAlarm(alarm_todo_allday, False, False)
-        yield home0.setAvailability(Component.fromString(availability))
-        yield self.commitTransaction(0)
-
-        # Trigger sync
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.sync()
-
-        # Home is present with correct metadata
-        home1 = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        self.assertTrue(home1 is not None)
-        calendar1 = yield home1.childWithName("calendar")
-        events1 = yield home1.childWithName("events")
-        tasks1 = yield home1.childWithName("tasks")
-        self.assertFalse(home1.isDefaultCalendar(calendar1))
-        self.assertTrue(home1.isDefaultCalendar(events1))
-        self.assertTrue(home1.isDefaultCalendar(tasks1))
-        self.assertEqual(home1.getDefaultAlarm(True, True), alarm_event_timed)
-        self.assertEqual(home1.getDefaultAlarm(True, False), alarm_event_allday)
-        self.assertEqual(home1.getDefaultAlarm(False, True), alarm_todo_timed)
-        self.assertEqual(home1.getDefaultAlarm(False, False), alarm_todo_allday)
-        self.assertEqual(normalize_iCalStr(home1.getAvailability()), normalize_iCalStr(availability))
-        yield self.commitTransaction(1)
-
-        # Make some changes
-        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        calendar0 = yield home0.childWithName("calendar")
-        yield home0.setDefaultCalendar(calendar0, "VEVENT")
-        yield home0.setDefaultAlarm(None, True, True)
-        yield home0.setDefaultAlarm(None, False, True)
-        yield self.commitTransaction(0)
-
-        # Trigger sync again
-        yield syncer.sync()
-
-        # Home is present with correct metadata
-        home1 = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        self.assertTrue(home1 is not None)
-        calendar1 = yield home1.childWithName("calendar")
-        events1 = yield home1.childWithName("events")
-        tasks1 = yield home1.childWithName("tasks")
-        self.assertTrue(home1.isDefaultCalendar(calendar1))
-        self.assertFalse(home1.isDefaultCalendar(events1))
-        self.assertTrue(home1.isDefaultCalendar(tasks1))
-        self.assertEqual(home1.getDefaultAlarm(True, True), None)
-        self.assertEqual(home1.getDefaultAlarm(True, False), alarm_event_allday)
-        self.assertEqual(home1.getDefaultAlarm(False, True), None)
-        self.assertEqual(home1.getDefaultAlarm(False, False), alarm_todo_allday)
-        self.assertEqual(normalize_iCalStr(home1.getAvailability()), normalize_iCalStr(availability))
-        yield self.commitTransaction(1)
-
-
-    @inlineCallbacks
-    def test_get_calendar_sync_list(self):
-        """
-        Test that L{getCalendarSyncList} returns the correct results.
-        """
-
-        yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        yield self.commitTransaction(0)
-        home01 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01")
-        self.assertTrue(home01 is not None)
-        calendars01 = yield home01.loadChildren()
-        results01 = {}
-        for calendar in calendars01:
-            if calendar.owned():
-                sync_token = yield calendar.syncToken()
-                results01[calendar.id()] = CalendarMigrationRecord.make(
-                    calendarHomeResourceID=home01.id(),
-                    remoteResourceID=calendar.id(),
-                    localResourceID=0,
-                    lastSyncToken=sync_token,
-                )
-
-        yield self.commitTransaction(0)
-
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.loadRecord()
-        results = yield syncer.getCalendarSyncList()
-        self.assertEqual(results, results01)
-
-
-    @inlineCallbacks
-    def test_sync_calendar_initial_empty(self):
-        """
-        Test that L{syncCalendar} syncs an initially non-existent local calendar with
-        an empty remote calendar.
-        """
-
-        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        calendar0 = yield home0.childWithName("calendar")
-        remote_id = calendar0.id()
-        remote_sync_token = yield calendar0.syncToken()
-        yield self.commitTransaction(0)
-
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.loadRecord()
-        yield syncer.prepareCalendarHome()
-
-        # No local calendar exists yet
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        children = yield home1.listChildren()
-        self.assertEqual(len(children), 0)
-        yield self.commitTransaction(1)
-
-        # Trigger sync of the one calendar
-        local_sync_state = {}
-        remote_sync_state = {remote_id: CalendarMigrationRecord.make(
-            calendarHomeResourceID=home0.id(),
-            remoteResourceID=remote_id,
-            localResourceID=0,
-            lastSyncToken=remote_sync_token,
-        )}
-        yield syncer.syncCalendar(
-            remote_id,
-            local_sync_state,
-            remote_sync_state,
-        )
-        self.assertEqual(len(local_sync_state), 1)
-        self.assertEqual(local_sync_state[remote_id].lastSyncToken, remote_sync_state[remote_id].lastSyncToken)
-
-        # Local calendar exists
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        calendar1 = yield home1.childWithName("calendar")
-        self.assertTrue(calendar1 is not None)
-        yield self.commitTransaction(1)
-
-
-    @inlineCallbacks
-    def test_sync_calendar_initial_with_data(self):
-        """
-        Test that L{syncCalendar} syncs an initially non-existent local calendar with
-        a remote calendar containing data. Also check a change to one event is then
-        sync'd the second time.
-        """
-
-        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        calendar0 = yield home0.childWithName("calendar")
-        o1 = yield calendar0.createCalendarObjectWithName("1.ics", Component.fromString(self.caldata1))
-        o2 = yield calendar0.createCalendarObjectWithName("2.ics", Component.fromString(self.caldata2))
-        o3 = yield calendar0.createCalendarObjectWithName("3.ics", Component.fromString(self.caldata3))
-        remote_id = calendar0.id()
-        mapping0 = dict([(o.name(), o.id()) for o in (o1, o2, o3)])
-        yield self.commitTransaction(0)
-
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.loadRecord()
-        yield syncer.prepareCalendarHome()
-
-        # No local calendar exists yet
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        calendar1 = yield home1.childWithName("calendar")
-        self.assertTrue(calendar1 is None)
-        yield self.commitTransaction(1)
-
-        # Trigger sync of the one calendar
-        local_sync_state = {}
-        remote_sync_state = yield syncer.getCalendarSyncList()
-        yield syncer.syncCalendar(
-            remote_id,
-            local_sync_state,
-            remote_sync_state,
-        )
-        self.assertEqual(len(local_sync_state), 1)
-        self.assertEqual(local_sync_state[remote_id].lastSyncToken, remote_sync_state[remote_id].lastSyncToken)
-
-        @inlineCallbacks
-        def _checkCalendarObjectMigrationState(home, mapping1):
-            com = schema.CALENDAR_OBJECT_MIGRATION
-            mappings = yield Select(
-                columns=[com.REMOTE_RESOURCE_ID, com.LOCAL_RESOURCE_ID],
-                From=com,
-                Where=(com.CALENDAR_HOME_RESOURCE_ID == home.id())
-            ).on(self.theTransactionUnderTest(1))
-            expected_mappings = dict([(mapping0[name], mapping1[name]) for name in mapping0.keys()])
-            self.assertEqual(dict(mappings), expected_mappings)
-
-
-        # Local calendar exists
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        calendar1 = yield home1.childWithName("calendar")
-        self.assertTrue(calendar1 is not None)
-        children = yield calendar1.objectResources()
-        self.assertEqual(set([child.name() for child in children]), set(("1.ics", "2.ics", "3.ics",)))
-        mapping1 = dict([(o.name(), o.id()) for o in children])
-        yield _checkCalendarObjectMigrationState(home1, mapping1)
-        yield self.commitTransaction(1)
-
-        # Change one resource
-        object0 = yield self.calendarObjectUnderTest(
-            txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics"
-        )
-        yield object0.setComponent(Component.fromString(self.caldata1_changed))
-        yield self.commitTransaction(0)
-
-        remote_sync_state = yield syncer.getCalendarSyncList()
-        yield syncer.syncCalendar(
-            remote_id,
-            local_sync_state,
-            remote_sync_state,
-        )
-
-        object1 = yield self.calendarObjectUnderTest(
-            txn=self.theTransactionUnderTest(1), home="user01", status=_HOME_STATUS_MIGRATING, calendar_name="calendar", name="1.ics"
-        )
-        caldata = yield object1.component()
-        self.assertEqual(normalize_iCalStr(caldata), normalize_iCalStr(self.caldata1_changed))
-        yield self.commitTransaction(1)
-
-        # Remove one resource
-        object0 = yield self.calendarObjectUnderTest(
-            txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="2.ics"
-        )
-        yield object0.remove()
-        del mapping0["2.ics"]
-        yield self.commitTransaction(0)
-
-        remote_sync_state = yield syncer.getCalendarSyncList()
-        yield syncer.syncCalendar(
-            remote_id,
-            local_sync_state,
-            remote_sync_state,
-        )
-
-        calendar1 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(1), home="user01", status=_HOME_STATUS_MIGRATING, name="calendar")
-        children = yield calendar1.objectResources()
-        self.assertEqual(set([child.name() for child in children]), set(("1.ics", "3.ics",)))
-        mapping1 = dict([(o.name(), o.id()) for o in children])
-        yield _checkCalendarObjectMigrationState(home1, mapping1)
-        yield self.commitTransaction(1)
-
-        # Add one resource
-        calendar0 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(0), home="user01", name="calendar")
-        o4 = yield calendar0.createCalendarObjectWithName("4.ics", Component.fromString(self.caldata4))
-        mapping0[o4.name()] = o4.id()
-        yield self.commitTransaction(0)
-
-        remote_sync_state = yield syncer.getCalendarSyncList()
-        yield syncer.syncCalendar(
-            remote_id,
-            local_sync_state,
-            remote_sync_state,
-        )
-
-        calendar1 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(1), home="user01", status=_HOME_STATUS_MIGRATING, name="calendar")
-        children = yield calendar1.objectResources()
-        self.assertEqual(set([child.name() for child in children]), set(("1.ics", "3.ics", "4.ics")))
-        mapping1 = dict([(o.name(), o.id()) for o in children])
-        yield _checkCalendarObjectMigrationState(home1, mapping1)
-        yield self.commitTransaction(1)
-
-
-    @inlineCallbacks
-    def test_sync_calendars_add_remove(self):
-        """
-        Test that L{syncCalendar} syncs an initially non-existent local calendar with
-        a remote calendar containing data. Also check a change to one event is then
-        sync'd the second time.
-        """
-
-        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        children0 = yield home0.loadChildren()
-        details0 = dict([(child.id(), child.name()) for child in children0])
-        yield self.commitTransaction(0)
-
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.loadRecord()
-        yield syncer.prepareCalendarHome()
-
-        # No local calendar exists yet
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        children1 = yield home1.loadChildren()
-        self.assertEqual(len(children1), 0)
-        yield self.commitTransaction(1)
-
-        # Trigger sync
-        yield syncer.syncCalendarList()
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        children1 = yield home1.loadChildren()
-        details1 = dict([(child.id(), child.name()) for child in children1])
-        self.assertEqual(set(details1.values()), set(details0.values()))
-        yield self.commitTransaction(1)
-
-        # Add a calendar
-        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        newcalendar0 = yield home0.createCalendarWithName("new-calendar")
-        details0[newcalendar0.id()] = newcalendar0.name()
-        yield self.commitTransaction(0)
-
-        # Trigger sync
-        yield syncer.syncCalendarList()
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        children1 = yield home1.loadChildren()
-        details1 = dict([(child.id(), child.name()) for child in children1])
-        self.assertTrue("new-calendar" in details1.values())
-        self.assertEqual(set(details1.values()), set(details0.values()))
-        yield self.commitTransaction(1)
-
-        # Remove a calendar
-        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        calendar0 = yield home0.childWithName("new-calendar")
-        del details0[calendar0.id()]
-        yield calendar0.remove()
-        yield self.commitTransaction(0)
-
-        # Trigger sync
-        yield syncer.syncCalendarList()
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        children1 = yield home1.loadChildren()
-        details1 = dict([(child.id(), child.name()) for child in children1])
-        self.assertTrue("new-calendar" not in details1.values())
-        self.assertEqual(set(details1.values()), set(details0.values()))
-        yield self.commitTransaction(1)
-
-
-    @inlineCallbacks
-    def test_sync_attachments_add_remove(self):
-        """
-        Test that L{syncAttachments} syncs attachment data, then an update to the data,
-        and finally a removal of the data.
-        """
-
-
-        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        calendar0 = yield home0.childWithName("calendar")
-        yield calendar0.createCalendarObjectWithName("1.ics", Component.fromString(self.caldata1))
-        yield calendar0.createCalendarObjectWithName("2.ics", Component.fromString(self.caldata2))
-        yield calendar0.createCalendarObjectWithName("3.ics", Component.fromString(self.caldata3))
-        remote_id = calendar0.id()
-        mapping0 = dict()
-        yield self.commitTransaction(0)
-
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.loadRecord()
-        yield syncer.prepareCalendarHome()
-
-        # Trigger sync of the one calendar
-        local_sync_state = {}
-        remote_sync_state = yield syncer.getCalendarSyncList()
-        yield syncer.syncCalendar(
-            remote_id,
-            local_sync_state,
-            remote_sync_state,
-        )
-        self.assertEqual(len(local_sync_state), 1)
-        self.assertEqual(local_sync_state[remote_id].lastSyncToken, remote_sync_state[remote_id].lastSyncToken)
-
-        @inlineCallbacks
-        def _mapLocalIDToRemote(remote_id):
-            records = yield AttachmentMigrationRecord.all(self.theTransactionUnderTest(1))
-            yield self.commitTransaction(1)
-            for record in records:
-                if record.remoteResourceID == remote_id:
-                    returnValue(record.localResourceID)
-            else:
-                returnValue(None)
-
-        # Sync attachments
-        changed, removed = yield syncer.syncAttachments()
-        self.assertEqual(changed, set())
-        self.assertEqual(removed, set())
-
-        @inlineCallbacks
-        def _checkAttachmentObjectMigrationState(home, mapping1):
-            am = schema.ATTACHMENT_MIGRATION
-            mappings = yield Select(
-                columns=[am.REMOTE_RESOURCE_ID, am.LOCAL_RESOURCE_ID],
-                From=am,
-                Where=(am.CALENDAR_HOME_RESOURCE_ID == home.id())
-            ).on(self.theTransactionUnderTest(1))
-            expected_mappings = dict([(mapping0[name], mapping1[name]) for name in mapping0.keys()])
-            self.assertEqual(dict(mappings), expected_mappings)
-
-
-        # Local calendar exists
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        calendar1 = yield home1.childWithName("calendar")
-        self.assertTrue(calendar1 is not None)
-        children = yield calendar1.objectResources()
-        self.assertEqual(set([child.name() for child in children]), set(("1.ics", "2.ics", "3.ics",)))
-
-        attachments = yield home1.getAllAttachments()
-        mapping1 = dict([(o.md5(), o.id()) for o in attachments])
-        yield _checkAttachmentObjectMigrationState(home1, mapping1)
-        yield self.commitTransaction(1)
-
-        # Add one attachment
-        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics")
-        attachment, _ignore_location = yield object1.addAttachment(None, MimeType.fromString("text/plain"), "test.txt", MemoryStream("Here is some text #1."))
-        id0_1 = attachment.id()
-        md50_1 = attachment.md5()
-        managedid0_1 = attachment.managedID()
-        mapping0[md50_1] = id0_1
-        yield self.commitTransaction(0)
-
-        # Sync attachments
-        changed, removed = yield syncer.syncAttachments()
-        self.assertEqual(changed, set(((yield _mapLocalIDToRemote(id0_1)),)))
-        self.assertEqual(removed, set())
-
-        # Validate changes
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        attachments = yield home1.getAllAttachments()
-        mapping1 = dict([(o.md5(), o.id()) for o in attachments])
-        yield _checkAttachmentObjectMigrationState(home1, mapping1)
-
-        # Add another attachment
-        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="2.ics")
-        attachment, _ignore_location = yield object1.addAttachment(None, MimeType.fromString("text/plain"), "test2.txt", MemoryStream("Here is some text #2."))
-        id0_2 = attachment.id()
-        md50_2 = attachment.md5()
-        mapping0[md50_2] = id0_2
-        yield self.commitTransaction(0)
-
-        # Sync attachments
-        changed, removed = yield syncer.syncAttachments()
-        self.assertEqual(changed, set(((yield _mapLocalIDToRemote(id0_2)),)))
-        self.assertEqual(removed, set())
-
-        # Validate changes
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        attachments = yield home1.getAllAttachments()
-        mapping1 = dict([(o.md5(), o.id()) for o in attachments])
-        yield _checkAttachmentObjectMigrationState(home1, mapping1)
-
-        # Change original attachment (this is actually a remove and a create all in one)
-        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics")
-        attachment, _ignore_location = yield object1.updateAttachment(managedid0_1, MimeType.fromString("text/plain"), "test.txt", MemoryStream("Here is some text #1 - changed."))
-        del mapping0[md50_1]
-        id0_1_changed = attachment.id()
-        md50_1_changed = attachment.md5()
-        managedid0_1_changed = attachment.managedID()
-        mapping0[md50_1_changed] = id0_1_changed
-        yield self.commitTransaction(0)
-
-        # Sync attachments
-        changed, removed = yield syncer.syncAttachments()
-        self.assertEqual(changed, set(((yield _mapLocalIDToRemote(id0_1_changed)),)))
-        self.assertEqual(removed, set((id0_1,)))
-
-        # Validate changes
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        attachments = yield home1.getAllAttachments()
-        mapping1 = dict([(o.md5(), o.id()) for o in attachments])
-        yield _checkAttachmentObjectMigrationState(home1, mapping1)
-
-        # Add original to a different resource
-        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics")
-        component = yield object1.componentForUser()
-        attach = component.mainComponent().getProperty("ATTACH")
-
-        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="3.ics")
-        component = yield object1.componentForUser()
-        attach = component.mainComponent().addProperty(attach)
-        yield object1.setComponent(component)
-        yield self.commitTransaction(0)
-
-        # Sync attachments
-        changed, removed = yield syncer.syncAttachments()
-        self.assertEqual(changed, set())
-        self.assertEqual(removed, set())
-
-        # Validate changes
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        attachments = yield home1.getAllAttachments()
-        mapping1 = dict([(o.md5(), o.id()) for o in attachments])
-        yield _checkAttachmentObjectMigrationState(home1, mapping1)
-
-        # Change original attachment in original resource (this creates a new one and does not remove the old)
-        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics")
-        attachment, _ignore_location = yield object1.updateAttachment(managedid0_1_changed, MimeType.fromString("text/plain"), "test.txt", MemoryStream("Here is some text #1 - changed again."))
-        id0_1_changed_again = attachment.id()
-        md50_1_changed_again = attachment.md5()
-        mapping0[md50_1_changed_again] = id0_1_changed_again
-        yield self.commitTransaction(0)
-
-        # Sync attachments
-        changed, removed = yield syncer.syncAttachments()
-        self.assertEqual(changed, set(((yield _mapLocalIDToRemote(id0_1_changed_again)),)))
-        self.assertEqual(removed, set())
-
-        # Validate changes
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        attachments = yield home1.getAllAttachments()
-        mapping1 = dict([(o.md5(), o.id()) for o in attachments])
-        yield _checkAttachmentObjectMigrationState(home1, mapping1)
-
-
-    @inlineCallbacks
-    def test_link_attachments(self):
-        """
-        Test that L{linkAttachments} links attachment data to the associated calendar object.
-        """
-
-        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        yield self.notificationCollectionUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        calendar0 = yield home0.childWithName("calendar")
-        object0_1 = yield calendar0.createCalendarObjectWithName("1.ics", Component.fromString(self.caldata1))
-        object0_2 = yield calendar0.createCalendarObjectWithName("2.ics", Component.fromString(self.caldata2))
-        yield calendar0.createCalendarObjectWithName("3.ics", Component.fromString(self.caldata3))
-        remote_id = calendar0.id()
-
-        attachment, _ignore_location = yield object0_1.addAttachment(None, MimeType.fromString("text/plain"), "test.txt", MemoryStream("Here is some text #1."))
-        id0_1 = attachment.id()
-        md50_1 = attachment.md5()
-        managedid0_1 = attachment.managedID()
-        pathID0_1 = ManagedAttachment.lastSegmentOfUriPath(managedid0_1, attachment.name())
-
-        attachment, _ignore_location = yield object0_2.addAttachment(None, MimeType.fromString("text/plain"), "test2.txt", MemoryStream("Here is some text #2."))
-        id0_2 = attachment.id()
-        md50_2 = attachment.md5()
-        managedid0_2 = attachment.managedID()
-        pathID0_2 = ManagedAttachment.lastSegmentOfUriPath(managedid0_2, attachment.name())
-
-        yield self.commitTransaction(0)
-
-        # Add original to a different resource
-        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics")
-        component = yield object1.componentForUser()
-        attach = component.mainComponent().getProperty("ATTACH")
-
-        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="3.ics")
-        component = yield object1.componentForUser()
-        attach = component.mainComponent().addProperty(attach)
-        yield object1.setComponent(component)
-        yield self.commitTransaction(0)
-
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.loadRecord()
-        yield syncer.prepareCalendarHome()
-
-        # Trigger sync of the one calendar
-        local_sync_state = {}
-        remote_sync_state = yield syncer.getCalendarSyncList()
-        yield syncer.syncCalendar(
-            remote_id,
-            local_sync_state,
-            remote_sync_state,
-        )
-        self.assertEqual(len(local_sync_state), 1)
-        self.assertEqual(local_sync_state[remote_id].lastSyncToken, remote_sync_state[remote_id].lastSyncToken)
-
-        # Sync attachments
-        changed, removed = yield syncer.syncAttachments()
-
-        @inlineCallbacks
-        def _mapLocalIDToRemote(remote_id):
-            records = yield AttachmentMigrationRecord.all(self.theTransactionUnderTest(1))
-            yield self.commitTransaction(1)
-            for record in records:
-                if record.remoteResourceID == remote_id:
-                    returnValue(record.localResourceID)
-            else:
-                returnValue(None)
-
-        self.assertEqual(changed, set(((yield _mapLocalIDToRemote(id0_1)), (yield _mapLocalIDToRemote(id0_2)),)))
-        self.assertEqual(removed, set())
-
-        # Link attachments (after home is disabled)
-        yield syncer.disableRemoteHome()
-        len_links = yield syncer.linkAttachments()
-        self.assertEqual(len_links, 3)
-
-        # Local calendar exists
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        calendar1 = yield home1.childWithName("calendar")
-        self.assertTrue(calendar1 is not None)
-        children = yield calendar1.objectResources()
-        self.assertEqual(set([child.name() for child in children]), set(("1.ics", "2.ics", "3.ics",)))
-
-        # Make sure calendar object is associated with attachment
-        object1 = yield calendar1.objectResourceWithName("1.ics")
-        attachments = yield object1.managedAttachmentList()
-        self.assertEqual(attachments, [pathID0_1, ])
-
-        attachment = yield object1.attachmentWithManagedID(managedid0_1)
-        self.assertTrue(attachment is not None)
-        self.assertEqual(attachment.md5(), md50_1)
-
-        # Make sure calendar object is associated with attachment
-        object1 = yield calendar1.objectResourceWithName("2.ics")
-        attachments = yield object1.managedAttachmentList()
-        self.assertEqual(attachments, [pathID0_2, ])
-
-        attachment = yield object1.attachmentWithManagedID(managedid0_2)
-        self.assertTrue(attachment is not None)
-        self.assertEqual(attachment.md5(), md50_2)
-
-        # Make sure calendar object is associated with attachment
-        object1 = yield calendar1.objectResourceWithName("3.ics")
-        attachments = yield object1.managedAttachmentList()
-        self.assertEqual(attachments, [pathID0_1, ])
-
-        attachment = yield object1.attachmentWithManagedID(managedid0_1)
-        self.assertTrue(attachment is not None)
-        self.assertEqual(attachment.md5(), md50_1)
-
-
-    @inlineCallbacks
-    def test_delegate_reconcile(self):
-        """
-        Test that L{delegateReconcile} copies over the full set of delegates and caches associated groups.
-        """
-
-        # Create remote home
-        yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        yield self.notificationCollectionUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        yield self.commitTransaction(0)
-
-        # Add some delegates
-        txn = self.theTransactionUnderTest(0)
-        record01 = yield txn.directoryService().recordWithUID(u"user01")
-        record02 = yield txn.directoryService().recordWithUID(u"user02")
-        record03 = yield txn.directoryService().recordWithUID(u"user03")
-
-        group01 = yield txn.directoryService().recordWithUID(u"__top_group_1__")
-        group02 = yield txn.directoryService().recordWithUID(u"right_coast")
-
-        # Add user02 and user03 as individual delegates
-        yield Delegates.addDelegate(txn, record01, record02, True)
-        yield Delegates.addDelegate(txn, record01, record03, False)
-
-        # Add group delegates
-        yield Delegates.addDelegate(txn, record01, group01, True)
-        yield Delegates.addDelegate(txn, record01, group02, False)
-
-        # Add external delegates
-        yield txn.assignExternalDelegates(u"user01", None, None, u"external1", u"external2")
-
-        yield self.commitTransaction(0)
-
-
-        # Initially no local delegates
-        txn = self.theTransactionUnderTest(1)
-        delegates = yield txn.dumpIndividualDelegatesLocal(u"user01")
-        self.assertEqual(len(delegates), 0)
-        delegates = yield txn.dumpGroupDelegatesLocal(u"user04")
-        self.assertEqual(len(delegates), 0)
-        externals = yield txn.dumpExternalDelegatesLocal(u"user01")
-        self.assertEqual(len(externals), 0)
-        yield self.commitTransaction(1)
-
-        # Sync from remote side
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.loadRecord()
-        yield syncer.disableRemoteHome()
-        yield syncer.delegateReconcile()
-
-        # Now have local delegates
-        txn = self.theTransactionUnderTest(1)
-
-        delegates = yield txn.dumpIndividualDelegatesLocal(u"user01")
-        self.assertEqual(
-            set(delegates),
-            set((
-                DelegateRecord.make(delegator="user01", delegate="user02", readWrite=1),
-                DelegateRecord.make(delegator="user01", delegate="user03", readWrite=0),
-            )),
-        )
-
-        delegateGroups = yield txn.dumpGroupDelegatesLocal(u"user01")
-        group_top = yield txn.groupByUID(u"__top_group_1__")
-        group_right = yield txn.groupByUID(u"right_coast")
-        self.assertEqual(
-            set([item[0] for item in delegateGroups]),
-            set((
-                DelegateGroupsRecord.make(delegator="user01", groupID=group_top.groupID, readWrite=1, isExternal=False),
-                DelegateGroupsRecord.make(delegator="user01", groupID=group_right.groupID, readWrite=0, isExternal=False),
-            )),
-        )
-
-        externals = yield txn.dumpExternalDelegatesLocal(u"user01")
-        self.assertEqual(
-            set(externals),
-            set((
-                ExternalDelegateGroupsRecord.make(
-                    delegator="user01",
-                    groupUIDRead="external1",
-                    groupUIDWrite="external2",
-                ),
-            )),
-        )
-
-        yield self.commitTransaction(1)
-
-
-    @inlineCallbacks
-    def test_notifications_reconcile(self):
-        """
-        Test that L{notificationsReconcile} copies over the full set of notification objects.
-        """
-
-        # Create remote home - and add some fake notifications
-        yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        notifications = yield self.theTransactionUnderTest(0).notificationsWithUID("user01", create=True)
-        uid1 = str(uuid4())
-        obj1 = yield notifications.writeNotificationObject(uid1, "type1", "data1")
-        id1 = obj1.id()
-        uid2 = str(uuid4())
-        obj2 = yield notifications.writeNotificationObject(uid2, "type2", "data2")
-        id2 = obj2.id()
-        yield self.commitTransaction(0)
-
-        # Sync from remote side
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.loadRecord()
-        yield syncer.prepareCalendarHome()
-        yield syncer.disableRemoteHome()
-        changes = yield syncer.notificationsReconcile()
-        self.assertEqual(changes, 2)
-
-        # Now have local notifications
-        notifications = yield NotificationCollection.notificationsWithUID(
-            self.theTransactionUnderTest(1),
-            "user01",
-            status=_HOME_STATUS_MIGRATING,
-        )
-        results = yield notifications.notificationObjects()
-        self.assertEqual(len(results), 2)
-        for result in results:
-            for test_uid, test_id, test_type, test_data in ((uid1, id1, "type1", "data1",), (uid2, id2, "type2", "data2",),):
-                if result.uid() == test_uid:
-                    self.assertNotEqual(result.id(), test_id)
-                    self.assertEqual(json.loads(result.notificationType()), test_type)
-                    data = yield result.notificationData()
-                    self.assertEqual(json.loads(data), test_data)
-                    break
-            else:
-                self.fail("Notification uid {} not found".format(result.uid()))
-        yield self.commitTransaction(1)
-
-
-    @inlineCallbacks
-    def test_disable_remote_home(self):
-        """
-        Test that L{disableRemoteHome} changes the remote status and prevents a normal state
-        home from being created.
-        """
-
-        # Create remote home - and add some fake notifications
-        yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        yield self.theTransactionUnderTest(0).notificationsWithUID("user01", create=True)
-        yield self.commitTransaction(0)
-
-        # Sync from remote side
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.loadRecord()
-        yield syncer.prepareCalendarHome()
-        yield syncer.disableRemoteHome()
-
-        # It is disabled
-        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01")
-        self.assertTrue(home is None)
-        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_NORMAL)
-        self.assertTrue(home is None)
-        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_DISABLED)
-        self.assertTrue(home is not None)
-        yield self.commitTransaction(0)
-
-
-
-class TestSharingSync(MultiStoreConduitTest):
-    """
-    Test that L{CrossPodHomeSync} sharing sync works.
-    """
-
-    @inlineCallbacks
-    def setUp(self):
-        self.accounts = FilePath(__file__).sibling("accounts").child("groupAccounts.xml")
-        self.augments = FilePath(__file__).sibling("accounts").child("augments.xml")
-        yield super(TestSharingSync, self).setUp()
-        yield self.populate()
-
-
-    def configure(self):
-        super(TestSharingSync, self).configure()
-        config.Sharing.Enabled = True
-        config.Sharing.Calendars.Enabled = True
-        config.Sharing.Calendars.Groups.Enabled = True
-        config.Sharing.Calendars.Groups.ReconciliationDelaySeconds = 0
-
-
-    @inlineCallbacks
-    def populate(self):
-        yield populateCalendarsFrom(self.requirements, self.theStoreUnderTest(0))
-
-    requirements = {
-        "user01" : None,
-        "user02" : None,
-        "user06" : None,
-        "user07" : None,
-        "user08" : None,
-        "user09" : None,
-        "user10" : None,
-    }
-
-
-    @inlineCallbacks
-    def _createShare(self, shareFrom, shareTo, accept=True):
-        # Invite
-        txnindex = 1 if shareFrom[0] == "p" else 0
-        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(txnindex), name=shareFrom, create=True)
-        calendar = yield home.childWithName("calendar")
-        shareeView = yield calendar.inviteUIDToShare(shareTo, _BIND_MODE_READ, "summary")
-        yield self.commitTransaction(txnindex)
-
-        # Accept
-        if accept:
-            inviteUID = shareeView.shareUID()
-            txnindex = 1 if shareTo[0] == "p" else 0
-            shareeHome = yield self.homeUnderTest(txn=self.theTransactionUnderTest(txnindex), name=shareTo)
-            shareeView = yield shareeHome.acceptShare(inviteUID)
-            sharedName = shareeView.name()
-            yield self.commitTransaction(txnindex)
-        else:
-            sharedName = None
-
-        returnValue(sharedName)
-
-
-    @inlineCallbacks
-    def test_shared_collections_reconcile(self):
-        """
-        Test that L{sharedByCollectionsReconcile} and L{sharedToCollectionsReconcile} copy over the full set of shared calendars.
-        """
-
-        # Create home
-        yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        yield self.notificationCollectionUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        yield self.commitTransaction(0)
-
-        # Shared by migrating user
-        shared_name_02 = yield self._createShare("user01", "user02")
-        shared_name_03 = yield self._createShare("user01", "puser03")
-
-        # Shared to migrating user
-        shared_name_04 = yield self._createShare("user04", "user01")
-        shared_name_05 = yield self._createShare("puser05", "user01")
-
-        # Sync from remote side
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.loadRecord()
-        yield syncer.sync()
-        yield syncer.disableRemoteHome()
-        changes = yield syncer.sharedByCollectionsReconcile()
-        self.assertEqual(changes, 2)
-        changes = yield syncer.sharedToCollectionsReconcile()
-        self.assertEqual(changes, 2)
-
-        # Local calendar exists with shares
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        calendar1 = yield home1.childWithName("calendar")
-        invites1 = yield calendar1.sharingInvites()
-        self.assertEqual(len(invites1), 2)
-        self.assertEqual(set([invite.uid for invite in invites1]), set((shared_name_02, shared_name_03,)))
-        yield self.commitTransaction(1)
-
-        # Remote sharee can access it
-        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user02")
-        calendar0 = yield home0.childWithName(shared_name_02)
-        self.assertTrue(calendar0 is not None)
-
-        # Local sharee can access it
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="puser03")
-        calendar1 = yield home1.childWithName(shared_name_03)
-        self.assertTrue(calendar1 is not None)
-
-        # Local shared calendars exist
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        calendar1 = yield home1.childWithName(shared_name_04)
-        self.assertTrue(calendar1 is not None)
-        calendar1 = yield home1.childWithName(shared_name_05)
-        self.assertTrue(calendar1 is not None)
-        yield self.commitTransaction(1)
-
-        # Sharers see migrated user as sharee
-        externalHome0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_EXTERNAL)
-        calendar0 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(0), home="user04", name="calendar")
-        invites = yield calendar0.allInvitations()
-        self.assertEqual(len(invites), 1)
-        self.assertEqual(invites[0].shareeUID, "user01")
-        self.assertEqual(invites[0].shareeHomeID, externalHome0.id())
-        yield self.commitTransaction(0)
-
-        shareeHome1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        calendar1 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(1), home="puser05", name="calendar")
-        invites = yield calendar1.allInvitations()
-        self.assertEqual(len(invites), 1)
-        self.assertEqual(invites[0].shareeUID, "user01")
-        self.assertEqual(invites[0].shareeHomeID, shareeHome1.id())
-        yield self.commitTransaction(1)
-
-
-    @inlineCallbacks
-    def test_group_shared_collections_reconcile(self):
-        """
-        Test that L{sharedByCollectionsReconcile} copies over group sharees and caches the associated groups.
-        """
-
-        # Create home
-        yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        yield self.notificationCollectionUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        yield self.commitTransaction(0)
-
-        # Shared by migrating user
-        yield self._createShare("user01", "group02", accept=False)
-
-        # Sync from remote side
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.loadRecord()
-        yield syncer.sync()
-        yield syncer.disableRemoteHome()
-        changes = yield syncer.sharedByCollectionsReconcile()
-        self.assertEqual(changes, 3)
-        changes = yield syncer.sharedToCollectionsReconcile()
-        self.assertEqual(changes, 0)
-
-        # Local calendar exists with shares
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        calendar1 = yield home1.childWithName("calendar")
-        invites1 = yield calendar1.sharingInvites()
-        self.assertEqual(len(invites1), 3)
-        sharee = yield GroupShareeRecord.querysimple(self.theTransactionUnderTest(1), calendarID=calendar1.id())
-        self.assertEqual(len(sharee), 1)
-        group = yield GroupsRecord.querysimple(self.theTransactionUnderTest(1), groupID=sharee[0].groupID)
-        self.assertEqual(len(group), 1)
-        self.assertEqual(group[0].groupUID, "group02")
-        yield self.commitTransaction(1)
-
-
-
-class TestGroupAttendeeSync(MultiStoreConduitTest):
-    """
-    GroupAttendeeReconciliation tests
-    """
-
-    now = {"now1": DateTime.getToday().getYear() + 1}
-
-    groupdata1 = """BEGIN:VCALENDAR
-CALSCALE:GREGORIAN
-PRODID:-//Example Inc.//Example Calendar//EN
-VERSION:2.0
-BEGIN:VEVENT
-DTSTAMP:20051222T205953Z
-CREATED:20060101T150000Z
-DTSTART:{now1:04d}0101T100000Z
-DURATION:PT1H
-SUMMARY:event 1
-UID:event1@ninevah.local
-END:VEVENT
-END:VCALENDAR""".format(**now)
-
-    groupdata2 = """BEGIN:VCALENDAR
-CALSCALE:GREGORIAN
-PRODID:-//Example Inc.//Example Calendar//EN
-VERSION:2.0
-BEGIN:VEVENT
-DTSTAMP:20051222T205953Z
-CREATED:20060101T150000Z
-DTSTART:{now1:04d}0101T100000Z
-DURATION:PT1H
-SUMMARY:event 2
-UID:event2@ninevah.local
-ORGANIZER:mailto:user01@example.com
-ATTENDEE:mailto:user01@example.com
-ATTENDEE:mailto:group02@example.com
-END:VEVENT
-END:VCALENDAR""".format(**now)
-
-    groupdata3 = """BEGIN:VCALENDAR
-CALSCALE:GREGORIAN
-PRODID:-//Example Inc.//Example Calendar//EN
-VERSION:2.0
-BEGIN:VEVENT
-DTSTAMP:20051222T205953Z
-CREATED:20060101T150000Z
-DTSTART:{now1:04d}0101T100000Z
-DURATION:PT1H
-SUMMARY:event 3
-UID:event3@ninevah.local
-ORGANIZER:mailto:user01@example.com
-ATTENDEE:mailto:user01@example.com
-ATTENDEE:mailto:group04@example.com
-END:VEVENT
-END:VCALENDAR""".format(**now)
-
-    @inlineCallbacks
-    def setUp(self):
-        self.accounts = FilePath(__file__).sibling("accounts").child("groupAccounts.xml")
-        yield super(TestGroupAttendeeSync, self).setUp()
-        yield self.populate()
-
-
-    def configure(self):
-        super(TestGroupAttendeeSync, self).configure()
-        config.GroupAttendees.Enabled = True
-        config.GroupAttendees.ReconciliationDelaySeconds = 0
-        config.GroupAttendees.AutoUpdateSecondsFromNow = 0
-
-
-    @inlineCallbacks
-    def populate(self):
-        yield populateCalendarsFrom(self.requirements, self.theStoreUnderTest(0))
-
-    requirements = {
-        "user01" : None,
-        "user02" : None,
-        "user06" : None,
-        "user07" : None,
-        "user08" : None,
-        "user09" : None,
-        "user10" : None,
-    }
-
-    @inlineCallbacks
-    def test_group_attendees(self):
-        """
-        Test that L{groupAttendeeReconcile} links groups to the associated calendar object.
-        """
-
-        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        calendar0 = yield home0.childWithName("calendar")
-        yield calendar0.createCalendarObjectWithName("1.ics", Component.fromString(self.groupdata1))
-        yield calendar0.createCalendarObjectWithName("2.ics", Component.fromString(self.groupdata2))
-        yield calendar0.createCalendarObjectWithName("3.ics", Component.fromString(self.groupdata3))
-        yield self.commitTransaction(0)
-
-        yield JobItem.waitEmpty(self.theStoreUnderTest(0).newTransaction, reactor, 60.0)
-
-        # Trigger sync
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.sync()
-
-        # Link groups
-        len_links = yield syncer.groupAttendeeReconcile()
-        self.assertEqual(len_links, 2)
-
-        # Local calendar exists
-        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        calendar1 = yield home1.childWithName("calendar")
-        self.assertTrue(calendar1 is not None)
-        children = yield calendar1.objectResources()
-        self.assertEqual(set([child.name() for child in children]), set(("1.ics", "2.ics", "3.ics",)))
-
-        object2 = yield calendar1.objectResourceWithName("2.ics")
-        record = (yield object2.groupEventLinks()).values()[0]
-        group02 = yield self.theTransactionUnderTest(1).groupByUID(u"group02")
-        self.assertEqual(record.groupID, group02.groupID)
-        self.assertEqual(record.membershipHash, group02.membershipHash)
-
-        object3 = yield calendar1.objectResourceWithName("3.ics")
-        record = (yield object3.groupEventLinks()).values()[0]
-        group04 = yield self.theTransactionUnderTest(1).groupByUID(u"group04")
-        self.assertEqual(record.groupID, group04.groupID)
-        self.assertEqual(record.membershipHash, group04.membershipHash)

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_home_sync.py (from rev 14551, CalendarServer/trunk/txdav/common/datastore/podding/migration/test/test_home_sync.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_home_sync.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_home_sync.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,1307 @@
+##
+# Copyright (c) 2005-2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from pycalendar.datetime import DateTime
+from twext.enterprise.dal.syntax import Select
+from twext.enterprise.jobqueue import JobItem
+from twisted.internet import reactor
+from twisted.internet.defer import inlineCallbacks, returnValue
+from twisted.python.filepath import FilePath
+from twistedcaldav.config import config
+from twistedcaldav.ical import Component, normalize_iCalStr
+from txdav.caldav.datastore.sql import ManagedAttachment
+from txdav.caldav.datastore.sql_directory import GroupShareeRecord
+from txdav.common.datastore.podding.migration.home_sync import CrossPodHomeSync
+from txdav.common.datastore.podding.migration.sync_metadata import CalendarMigrationRecord, \
+    AttachmentMigrationRecord
+from txdav.common.datastore.podding.test.util import MultiStoreConduitTest
+from txdav.common.datastore.sql_directory import DelegateRecord, \
+    ExternalDelegateGroupsRecord, DelegateGroupsRecord, GroupsRecord
+from txdav.common.datastore.sql_notification import NotificationCollection
+from txdav.common.datastore.sql_tables import schema, _HOME_STATUS_EXTERNAL, \
+    _BIND_MODE_READ, _HOME_STATUS_MIGRATING, _HOME_STATUS_NORMAL, \
+    _HOME_STATUS_DISABLED
+from txdav.common.datastore.test.util import populateCalendarsFrom
+from txdav.who.delegates import Delegates
+from txweb2.http_headers import MimeType
+from txweb2.stream import MemoryStream
+from uuid import uuid4
+import json
+
+
+class TestCrossPodHomeSync(MultiStoreConduitTest):
+    """
+    Test that L{CrossPodHomeSync} works.
+    """
+
+    nowYear = {"now": DateTime.getToday().getYear()}
+
+    caldata1 = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid1
+DTSTART:{now:04d}0102T140000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+RRULE:FREQ=WEEKLY
+SUMMARY:instance
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**nowYear)
+
+    caldata1_changed = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid1
+DTSTART:{now:04d}0102T150000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+RRULE:FREQ=WEEKLY
+SUMMARY:instance changed
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**nowYear)
+
+    caldata2 = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid2
+DTSTART:{now:04d}0102T160000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+RRULE:FREQ=WEEKLY
+SUMMARY:instance
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**nowYear)
+
+    caldata3 = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid3
+DTSTART:{now:04d}0102T160000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+RRULE:FREQ=WEEKLY
+SUMMARY:instance
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**nowYear)
+
+    caldata4 = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid4
+DTSTART:{now:04d}0102T180000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+RRULE:FREQ=DAILY
+SUMMARY:instance
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**nowYear)
+
+
+    @inlineCallbacks
+    def test_remote_home(self):
+        """
+        Test that a remote home can be accessed.
+        """
+
+        home01 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        self.assertTrue(home01 is not None)
+        yield self.commitTransaction(0)
+
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.loadRecord()
+        home = yield syncer._remoteHome(self.theTransactionUnderTest(1))
+        self.assertTrue(home is not None)
+        self.assertEqual(home.id(), home01.id())
+        yield self.commitTransaction(1)
+
+
+    @inlineCallbacks
+    def test_prepare_home(self):
+        """
+        Test that L{prepareCalendarHome} creates a home.
+        """
+
+        # No home present
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        self.assertTrue(home is None)
+        yield self.commitTransaction(1)
+
+        yield syncer.prepareCalendarHome()
+
+        # Home is present
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        self.assertTrue(home is not None)
+        children = yield home.listChildren()
+        self.assertEqual(len(children), 0)
+        yield self.commitTransaction(1)
+
+
+    @inlineCallbacks
+    def test_prepare_home_external_txn(self):
+        """
+        Test that L{prepareCalendarHome} creates a home when an existing transaction is passed in.
+        """
+
+        # No home present
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        self.assertTrue(home is None)
+        yield self.commitTransaction(1)
+
+        yield syncer.prepareCalendarHome(txn=self.theTransactionUnderTest(1))
+        yield self.commitTransaction(1)
+
+        # Home is present
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        self.assertTrue(home is not None)
+        children = yield home.listChildren()
+        self.assertEqual(len(children), 0)
+        yield self.commitTransaction(1)
+
+
+    @inlineCallbacks
+    def test_home_metadata(self):
+        """
+        Test that L{syncCalendarHomeMetaData} syncs home metadata correctly.
+        """
+
+        alarm_event_timed = """BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:alarm_event_timed
+TRIGGER:-PT10M
+END:VALARM
+"""
+        alarm_event_allday = """BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:alarm_event_allday
+TRIGGER:-PT10M
+END:VALARM
+"""
+        alarm_todo_timed = """BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:alarm_todo_timed
+TRIGGER:-PT10M
+END:VALARM
+"""
+        alarm_todo_allday = """BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:alarm_todo_allday
+TRIGGER:-PT10M
+END:VALARM
+"""
+        availability = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//Example Inc.//Example Calendar//EN
+BEGIN:VAVAILABILITY
+UID:20061005T133225Z-00001-availability@example.com
+DTSTART:20060101T000000Z
+DTEND:20060108T000000Z
+DTSTAMP:20061005T133225Z
+ORGANIZER:mailto:bernard@example.com
+BEGIN:AVAILABLE
+UID:20061005T133225Z-00001-A-availability@example.com
+DTSTART:20060102T090000Z
+DTEND:20060102T120000Z
+DTSTAMP:20061005T133225Z
+RRULE:FREQ=WEEKLY;BYDAY=MO,TU,WE,TH,FR
+SUMMARY:Weekdays from 9:00 to 12:00
+END:AVAILABLE
+END:VAVAILABILITY
+END:VCALENDAR
+"""
+
+        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        events0 = yield home0.createChildWithName("events")
+        yield home0.setDefaultCalendar(events0, "VEVENT")
+        yield home0.setDefaultAlarm(alarm_event_timed, True, True)
+        yield home0.setDefaultAlarm(alarm_event_allday, True, False)
+        yield home0.setDefaultAlarm(alarm_todo_timed, False, True)
+        yield home0.setDefaultAlarm(alarm_todo_allday, False, False)
+        yield home0.setAvailability(Component.fromString(availability))
+        yield self.commitTransaction(0)
+
+        # Trigger sync
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.sync()
+
+        # Home is present with correct metadata
+        home1 = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        self.assertTrue(home1 is not None)
+        calendar1 = yield home1.childWithName("calendar")
+        events1 = yield home1.childWithName("events")
+        tasks1 = yield home1.childWithName("tasks")
+        self.assertFalse(home1.isDefaultCalendar(calendar1))
+        self.assertTrue(home1.isDefaultCalendar(events1))
+        self.assertTrue(home1.isDefaultCalendar(tasks1))
+        self.assertEqual(home1.getDefaultAlarm(True, True), alarm_event_timed)
+        self.assertEqual(home1.getDefaultAlarm(True, False), alarm_event_allday)
+        self.assertEqual(home1.getDefaultAlarm(False, True), alarm_todo_timed)
+        self.assertEqual(home1.getDefaultAlarm(False, False), alarm_todo_allday)
+        self.assertEqual(normalize_iCalStr(home1.getAvailability()), normalize_iCalStr(availability))
+        yield self.commitTransaction(1)
+
+        # Make some changes
+        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        calendar0 = yield home0.childWithName("calendar")
+        yield home0.setDefaultCalendar(calendar0, "VEVENT")
+        yield home0.setDefaultAlarm(None, True, True)
+        yield home0.setDefaultAlarm(None, False, True)
+        yield self.commitTransaction(0)
+
+        # Trigger sync again
+        yield syncer.sync()
+
+        # Home is present with correct metadata
+        home1 = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        self.assertTrue(home1 is not None)
+        calendar1 = yield home1.childWithName("calendar")
+        events1 = yield home1.childWithName("events")
+        tasks1 = yield home1.childWithName("tasks")
+        self.assertTrue(home1.isDefaultCalendar(calendar1))
+        self.assertFalse(home1.isDefaultCalendar(events1))
+        self.assertTrue(home1.isDefaultCalendar(tasks1))
+        self.assertEqual(home1.getDefaultAlarm(True, True), None)
+        self.assertEqual(home1.getDefaultAlarm(True, False), alarm_event_allday)
+        self.assertEqual(home1.getDefaultAlarm(False, True), None)
+        self.assertEqual(home1.getDefaultAlarm(False, False), alarm_todo_allday)
+        self.assertEqual(normalize_iCalStr(home1.getAvailability()), normalize_iCalStr(availability))
+        yield self.commitTransaction(1)
+
+
+    @inlineCallbacks
+    def test_get_calendar_sync_list(self):
+        """
+        Test that L{getCalendarSyncList} returns the correct results.
+        """
+
+        yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        yield self.commitTransaction(0)
+        home01 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01")
+        self.assertTrue(home01 is not None)
+        calendars01 = yield home01.loadChildren()
+        results01 = {}
+        for calendar in calendars01:
+            if calendar.owned():
+                sync_token = yield calendar.syncToken()
+                results01[calendar.id()] = CalendarMigrationRecord.make(
+                    calendarHomeResourceID=home01.id(),
+                    remoteResourceID=calendar.id(),
+                    localResourceID=0,
+                    lastSyncToken=sync_token,
+                )
+
+        yield self.commitTransaction(0)
+
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.loadRecord()
+        results = yield syncer.getCalendarSyncList()
+        self.assertEqual(results, results01)
+
+
+    @inlineCallbacks
+    def test_sync_calendar_initial_empty(self):
+        """
+        Test that L{syncCalendar} syncs an initially non-existent local calendar with
+        an empty remote calendar.
+        """
+
+        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        calendar0 = yield home0.childWithName("calendar")
+        remote_id = calendar0.id()
+        remote_sync_token = yield calendar0.syncToken()
+        yield self.commitTransaction(0)
+
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.loadRecord()
+        yield syncer.prepareCalendarHome()
+
+        # No local calendar exists yet
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        children = yield home1.listChildren()
+        self.assertEqual(len(children), 0)
+        yield self.commitTransaction(1)
+
+        # Trigger sync of the one calendar
+        local_sync_state = {}
+        remote_sync_state = {remote_id: CalendarMigrationRecord.make(
+            calendarHomeResourceID=home0.id(),
+            remoteResourceID=remote_id,
+            localResourceID=0,
+            lastSyncToken=remote_sync_token,
+        )}
+        yield syncer.syncCalendar(
+            remote_id,
+            local_sync_state,
+            remote_sync_state,
+        )
+        self.assertEqual(len(local_sync_state), 1)
+        self.assertEqual(local_sync_state[remote_id].lastSyncToken, remote_sync_state[remote_id].lastSyncToken)
+
+        # Local calendar exists
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        calendar1 = yield home1.childWithName("calendar")
+        self.assertTrue(calendar1 is not None)
+        yield self.commitTransaction(1)
+
+
+    @inlineCallbacks
+    def test_sync_calendar_initial_with_data(self):
+        """
+        Test that L{syncCalendar} syncs an initially non-existent local calendar with
+        a remote calendar containing data. Also check that a change to one event is
+        then synced on a subsequent pass.
+        """
+
+        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        calendar0 = yield home0.childWithName("calendar")
+        o1 = yield calendar0.createCalendarObjectWithName("1.ics", Component.fromString(self.caldata1))
+        o2 = yield calendar0.createCalendarObjectWithName("2.ics", Component.fromString(self.caldata2))
+        o3 = yield calendar0.createCalendarObjectWithName("3.ics", Component.fromString(self.caldata3))
+        remote_id = calendar0.id()
+        mapping0 = dict([(o.name(), o.id()) for o in (o1, o2, o3)])
+        yield self.commitTransaction(0)
+
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.loadRecord()
+        yield syncer.prepareCalendarHome()
+
+        # No local calendar exists yet
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        calendar1 = yield home1.childWithName("calendar")
+        self.assertTrue(calendar1 is None)
+        yield self.commitTransaction(1)
+
+        # Trigger sync of the one calendar
+        local_sync_state = {}
+        remote_sync_state = yield syncer.getCalendarSyncList()
+        yield syncer.syncCalendar(
+            remote_id,
+            local_sync_state,
+            remote_sync_state,
+        )
+        self.assertEqual(len(local_sync_state), 1)
+        self.assertEqual(local_sync_state[remote_id].lastSyncToken, remote_sync_state[remote_id].lastSyncToken)
+
+        @inlineCallbacks
+        def _checkCalendarObjectMigrationState(home, mapping1):
+            com = schema.CALENDAR_OBJECT_MIGRATION
+            mappings = yield Select(
+                columns=[com.REMOTE_RESOURCE_ID, com.LOCAL_RESOURCE_ID],
+                From=com,
+                Where=(com.CALENDAR_HOME_RESOURCE_ID == home.id())
+            ).on(self.theTransactionUnderTest(1))
+            expected_mappings = dict([(mapping0[name], mapping1[name]) for name in mapping0.keys()])
+            self.assertEqual(dict(mappings), expected_mappings)
+
+
+        # Local calendar exists
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        calendar1 = yield home1.childWithName("calendar")
+        self.assertTrue(calendar1 is not None)
+        children = yield calendar1.objectResources()
+        self.assertEqual(set([child.name() for child in children]), set(("1.ics", "2.ics", "3.ics",)))
+        mapping1 = dict([(o.name(), o.id()) for o in children])
+        yield _checkCalendarObjectMigrationState(home1, mapping1)
+        yield self.commitTransaction(1)
+
+        # Change one resource
+        object0 = yield self.calendarObjectUnderTest(
+            txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics"
+        )
+        yield object0.setComponent(Component.fromString(self.caldata1_changed))
+        yield self.commitTransaction(0)
+
+        remote_sync_state = yield syncer.getCalendarSyncList()
+        yield syncer.syncCalendar(
+            remote_id,
+            local_sync_state,
+            remote_sync_state,
+        )
+
+        object1 = yield self.calendarObjectUnderTest(
+            txn=self.theTransactionUnderTest(1), home="user01", status=_HOME_STATUS_MIGRATING, calendar_name="calendar", name="1.ics"
+        )
+        caldata = yield object1.component()
+        self.assertEqual(normalize_iCalStr(caldata), normalize_iCalStr(self.caldata1_changed))
+        yield self.commitTransaction(1)
+
+        # Remove one resource
+        object0 = yield self.calendarObjectUnderTest(
+            txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="2.ics"
+        )
+        yield object0.remove()
+        del mapping0["2.ics"]
+        yield self.commitTransaction(0)
+
+        remote_sync_state = yield syncer.getCalendarSyncList()
+        yield syncer.syncCalendar(
+            remote_id,
+            local_sync_state,
+            remote_sync_state,
+        )
+
+        calendar1 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(1), home="user01", status=_HOME_STATUS_MIGRATING, name="calendar")
+        children = yield calendar1.objectResources()
+        self.assertEqual(set([child.name() for child in children]), set(("1.ics", "3.ics",)))
+        mapping1 = dict([(o.name(), o.id()) for o in children])
+        yield _checkCalendarObjectMigrationState(home1, mapping1)
+        yield self.commitTransaction(1)
+
+        # Add one resource
+        calendar0 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(0), home="user01", name="calendar")
+        o4 = yield calendar0.createCalendarObjectWithName("4.ics", Component.fromString(self.caldata4))
+        mapping0[o4.name()] = o4.id()
+        yield self.commitTransaction(0)
+
+        remote_sync_state = yield syncer.getCalendarSyncList()
+        yield syncer.syncCalendar(
+            remote_id,
+            local_sync_state,
+            remote_sync_state,
+        )
+
+        calendar1 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(1), home="user01", status=_HOME_STATUS_MIGRATING, name="calendar")
+        children = yield calendar1.objectResources()
+        self.assertEqual(set([child.name() for child in children]), set(("1.ics", "3.ics", "4.ics")))
+        mapping1 = dict([(o.name(), o.id()) for o in children])
+        yield _checkCalendarObjectMigrationState(home1, mapping1)
+        yield self.commitTransaction(1)
+
+
+    @inlineCallbacks
+    def test_sync_calendars_add_remove(self):
+        """
+        Test that L{syncCalendarList} syncs the full set of calendars in the remote
+        home, including the addition and removal of calendars.
+        """
+
+        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        children0 = yield home0.loadChildren()
+        details0 = dict([(child.id(), child.name()) for child in children0])
+        yield self.commitTransaction(0)
+
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.loadRecord()
+        yield syncer.prepareCalendarHome()
+
+        # No local calendar exists yet
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        children1 = yield home1.loadChildren()
+        self.assertEqual(len(children1), 0)
+        yield self.commitTransaction(1)
+
+        # Trigger sync
+        yield syncer.syncCalendarList()
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        children1 = yield home1.loadChildren()
+        details1 = dict([(child.id(), child.name()) for child in children1])
+        self.assertEqual(set(details1.values()), set(details0.values()))
+        yield self.commitTransaction(1)
+
+        # Add a calendar
+        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        newcalendar0 = yield home0.createCalendarWithName("new-calendar")
+        details0[newcalendar0.id()] = newcalendar0.name()
+        yield self.commitTransaction(0)
+
+        # Trigger sync
+        yield syncer.syncCalendarList()
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        children1 = yield home1.loadChildren()
+        details1 = dict([(child.id(), child.name()) for child in children1])
+        self.assertTrue("new-calendar" in details1.values())
+        self.assertEqual(set(details1.values()), set(details0.values()))
+        yield self.commitTransaction(1)
+
+        # Remove a calendar
+        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        calendar0 = yield home0.childWithName("new-calendar")
+        del details0[calendar0.id()]
+        yield calendar0.remove()
+        yield self.commitTransaction(0)
+
+        # Trigger sync
+        yield syncer.syncCalendarList()
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        children1 = yield home1.loadChildren()
+        details1 = dict([(child.id(), child.name()) for child in children1])
+        self.assertTrue("new-calendar" not in details1.values())
+        self.assertEqual(set(details1.values()), set(details0.values()))
+        yield self.commitTransaction(1)
+
+
+    @inlineCallbacks
+    def test_sync_attachments_add_remove(self):
+        """
+        Test that L{syncAttachments} syncs attachment data, then an update to the data,
+        and finally a removal of the data.
+        """
+
+
+        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        calendar0 = yield home0.childWithName("calendar")
+        yield calendar0.createCalendarObjectWithName("1.ics", Component.fromString(self.caldata1))
+        yield calendar0.createCalendarObjectWithName("2.ics", Component.fromString(self.caldata2))
+        yield calendar0.createCalendarObjectWithName("3.ics", Component.fromString(self.caldata3))
+        remote_id = calendar0.id()
+        mapping0 = dict()
+        yield self.commitTransaction(0)
+
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.loadRecord()
+        yield syncer.prepareCalendarHome()
+
+        # Trigger sync of the one calendar
+        local_sync_state = {}
+        remote_sync_state = yield syncer.getCalendarSyncList()
+        yield syncer.syncCalendar(
+            remote_id,
+            local_sync_state,
+            remote_sync_state,
+        )
+        self.assertEqual(len(local_sync_state), 1)
+        self.assertEqual(local_sync_state[remote_id].lastSyncToken, remote_sync_state[remote_id].lastSyncToken)
+
+        @inlineCallbacks
+        def _mapLocalIDToRemote(remote_id):
+            records = yield AttachmentMigrationRecord.all(self.theTransactionUnderTest(1))
+            yield self.commitTransaction(1)
+            for record in records:
+                if record.remoteResourceID == remote_id:
+                    returnValue(record.localResourceID)
+            else:
+                returnValue(None)
+
+        # Sync attachments
+        changed, removed = yield syncer.syncAttachments()
+        self.assertEqual(changed, set())
+        self.assertEqual(removed, set())
+
+        @inlineCallbacks
+        def _checkAttachmentObjectMigrationState(home, mapping1):
+            am = schema.ATTACHMENT_MIGRATION
+            mappings = yield Select(
+                columns=[am.REMOTE_RESOURCE_ID, am.LOCAL_RESOURCE_ID],
+                From=am,
+                Where=(am.CALENDAR_HOME_RESOURCE_ID == home.id())
+            ).on(self.theTransactionUnderTest(1))
+            expected_mappings = dict([(mapping0[name], mapping1[name]) for name in mapping0.keys()])
+            self.assertEqual(dict(mappings), expected_mappings)
+
+
+        # Local calendar exists
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        calendar1 = yield home1.childWithName("calendar")
+        self.assertTrue(calendar1 is not None)
+        children = yield calendar1.objectResources()
+        self.assertEqual(set([child.name() for child in children]), set(("1.ics", "2.ics", "3.ics",)))
+
+        attachments = yield home1.getAllAttachments()
+        mapping1 = dict([(o.md5(), o.id()) for o in attachments])
+        yield _checkAttachmentObjectMigrationState(home1, mapping1)
+        yield self.commitTransaction(1)
+
+        # Add one attachment
+        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics")
+        attachment, _ignore_location = yield object1.addAttachment(None, MimeType.fromString("text/plain"), "test.txt", MemoryStream("Here is some text #1."))
+        id0_1 = attachment.id()
+        md50_1 = attachment.md5()
+        managedid0_1 = attachment.managedID()
+        mapping0[md50_1] = id0_1
+        yield self.commitTransaction(0)
+
+        # Sync attachments
+        changed, removed = yield syncer.syncAttachments()
+        self.assertEqual(changed, set(((yield _mapLocalIDToRemote(id0_1)),)))
+        self.assertEqual(removed, set())
+
+        # Validate changes
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        attachments = yield home1.getAllAttachments()
+        mapping1 = dict([(o.md5(), o.id()) for o in attachments])
+        yield _checkAttachmentObjectMigrationState(home1, mapping1)
+
+        # Add another attachment
+        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="2.ics")
+        attachment, _ignore_location = yield object1.addAttachment(None, MimeType.fromString("text/plain"), "test2.txt", MemoryStream("Here is some text #2."))
+        id0_2 = attachment.id()
+        md50_2 = attachment.md5()
+        mapping0[md50_2] = id0_2
+        yield self.commitTransaction(0)
+
+        # Sync attachments
+        changed, removed = yield syncer.syncAttachments()
+        self.assertEqual(changed, set(((yield _mapLocalIDToRemote(id0_2)),)))
+        self.assertEqual(removed, set())
+
+        # Validate changes
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        attachments = yield home1.getAllAttachments()
+        mapping1 = dict([(o.md5(), o.id()) for o in attachments])
+        yield _checkAttachmentObjectMigrationState(home1, mapping1)
+
+        # Change original attachment (this is actually a remove and a create all in one)
+        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics")
+        attachment, _ignore_location = yield object1.updateAttachment(managedid0_1, MimeType.fromString("text/plain"), "test.txt", MemoryStream("Here is some text #1 - changed."))
+        del mapping0[md50_1]
+        id0_1_changed = attachment.id()
+        md50_1_changed = attachment.md5()
+        managedid0_1_changed = attachment.managedID()
+        mapping0[md50_1_changed] = id0_1_changed
+        yield self.commitTransaction(0)
+
+        # Sync attachments
+        changed, removed = yield syncer.syncAttachments()
+        self.assertEqual(changed, set(((yield _mapLocalIDToRemote(id0_1_changed)),)))
+        self.assertEqual(removed, set((id0_1,)))
+
+        # Validate changes
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        attachments = yield home1.getAllAttachments()
+        mapping1 = dict([(o.md5(), o.id()) for o in attachments])
+        yield _checkAttachmentObjectMigrationState(home1, mapping1)
+
+        # Add original to a different resource
+        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics")
+        component = yield object1.componentForUser()
+        attach = component.mainComponent().getProperty("ATTACH")
+
+        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="3.ics")
+        component = yield object1.componentForUser()
+        attach = component.mainComponent().addProperty(attach)
+        yield object1.setComponent(component)
+        yield self.commitTransaction(0)
+
+        # Sync attachments
+        changed, removed = yield syncer.syncAttachments()
+        self.assertEqual(changed, set())
+        self.assertEqual(removed, set())
+
+        # Validate changes
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        attachments = yield home1.getAllAttachments()
+        mapping1 = dict([(o.md5(), o.id()) for o in attachments])
+        yield _checkAttachmentObjectMigrationState(home1, mapping1)
+
+        # Change original attachment in original resource (this creates a new one and does not remove the old)
+        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics")
+        attachment, _ignore_location = yield object1.updateAttachment(managedid0_1_changed, MimeType.fromString("text/plain"), "test.txt", MemoryStream("Here is some text #1 - changed again."))
+        id0_1_changed_again = attachment.id()
+        md50_1_changed_again = attachment.md5()
+        mapping0[md50_1_changed_again] = id0_1_changed_again
+        yield self.commitTransaction(0)
+
+        # Sync attachments
+        changed, removed = yield syncer.syncAttachments()
+        self.assertEqual(changed, set(((yield _mapLocalIDToRemote(id0_1_changed_again)),)))
+        self.assertEqual(removed, set())
+
+        # Validate changes
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        attachments = yield home1.getAllAttachments()
+        mapping1 = dict([(o.md5(), o.id()) for o in attachments])
+        yield _checkAttachmentObjectMigrationState(home1, mapping1)
+
+
+    @inlineCallbacks
+    def test_link_attachments(self):
+        """
+        Test that L{linkAttachments} links attachment data to the associated calendar object.
+        """
+
+        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        yield self.notificationCollectionUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        calendar0 = yield home0.childWithName("calendar")
+        object0_1 = yield calendar0.createCalendarObjectWithName("1.ics", Component.fromString(self.caldata1))
+        object0_2 = yield calendar0.createCalendarObjectWithName("2.ics", Component.fromString(self.caldata2))
+        yield calendar0.createCalendarObjectWithName("3.ics", Component.fromString(self.caldata3))
+        remote_id = calendar0.id()
+
+        attachment, _ignore_location = yield object0_1.addAttachment(None, MimeType.fromString("text/plain"), "test.txt", MemoryStream("Here is some text #1."))
+        id0_1 = attachment.id()
+        md50_1 = attachment.md5()
+        managedid0_1 = attachment.managedID()
+        pathID0_1 = ManagedAttachment.lastSegmentOfUriPath(managedid0_1, attachment.name())
+
+        attachment, _ignore_location = yield object0_2.addAttachment(None, MimeType.fromString("text/plain"), "test2.txt", MemoryStream("Here is some text #2."))
+        id0_2 = attachment.id()
+        md50_2 = attachment.md5()
+        managedid0_2 = attachment.managedID()
+        pathID0_2 = ManagedAttachment.lastSegmentOfUriPath(managedid0_2, attachment.name())
+
+        yield self.commitTransaction(0)
+
+        # Add original to a different resource
+        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics")
+        component = yield object1.componentForUser()
+        attach = component.mainComponent().getProperty("ATTACH")
+
+        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="3.ics")
+        component = yield object1.componentForUser()
+        attach = component.mainComponent().addProperty(attach)
+        yield object1.setComponent(component)
+        yield self.commitTransaction(0)
+
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.loadRecord()
+        yield syncer.prepareCalendarHome()
+
+        # Trigger sync of the one calendar
+        local_sync_state = {}
+        remote_sync_state = yield syncer.getCalendarSyncList()
+        yield syncer.syncCalendar(
+            remote_id,
+            local_sync_state,
+            remote_sync_state,
+        )
+        self.assertEqual(len(local_sync_state), 1)
+        self.assertEqual(local_sync_state[remote_id].lastSyncToken, remote_sync_state[remote_id].lastSyncToken)
+
+        # Sync attachments
+        changed, removed = yield syncer.syncAttachments()
+
+        @inlineCallbacks
+        def _mapLocalIDToRemote(remote_id):
+            records = yield AttachmentMigrationRecord.all(self.theTransactionUnderTest(1))
+            yield self.commitTransaction(1)
+            for record in records:
+                if record.remoteResourceID == remote_id:
+                    returnValue(record.localResourceID)
+            else:
+                returnValue(None)
+
+        self.assertEqual(changed, set(((yield _mapLocalIDToRemote(id0_1)), (yield _mapLocalIDToRemote(id0_2)),)))
+        self.assertEqual(removed, set())
+
+        # Link attachments (after home is disabled)
+        yield syncer.disableRemoteHome()
+        len_links = yield syncer.linkAttachments()
+        self.assertEqual(len_links, 3)
+
+        # Local calendar exists
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        calendar1 = yield home1.childWithName("calendar")
+        self.assertTrue(calendar1 is not None)
+        children = yield calendar1.objectResources()
+        self.assertEqual(set([child.name() for child in children]), set(("1.ics", "2.ics", "3.ics",)))
+
+        # Make sure calendar object is associated with attachment
+        object1 = yield calendar1.objectResourceWithName("1.ics")
+        attachments = yield object1.managedAttachmentList()
+        self.assertEqual(attachments, [pathID0_1, ])
+
+        attachment = yield object1.attachmentWithManagedID(managedid0_1)
+        self.assertTrue(attachment is not None)
+        self.assertEqual(attachment.md5(), md50_1)
+
+        # Make sure calendar object is associated with attachment
+        object1 = yield calendar1.objectResourceWithName("2.ics")
+        attachments = yield object1.managedAttachmentList()
+        self.assertEqual(attachments, [pathID0_2, ])
+
+        attachment = yield object1.attachmentWithManagedID(managedid0_2)
+        self.assertTrue(attachment is not None)
+        self.assertEqual(attachment.md5(), md50_2)
+
+        # Make sure calendar object is associated with attachment
+        object1 = yield calendar1.objectResourceWithName("3.ics")
+        attachments = yield object1.managedAttachmentList()
+        self.assertEqual(attachments, [pathID0_1, ])
+
+        attachment = yield object1.attachmentWithManagedID(managedid0_1)
+        self.assertTrue(attachment is not None)
+        self.assertEqual(attachment.md5(), md50_1)
+
+
+    @inlineCallbacks
+    def test_delegate_reconcile(self):
+        """
+        Test that L{delegateReconcile} copies over the full set of delegates and caches associated groups.
+        """
+
+        # Create remote home
+        yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        yield self.notificationCollectionUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        yield self.commitTransaction(0)
+
+        # Add some delegates
+        txn = self.theTransactionUnderTest(0)
+        record01 = yield txn.directoryService().recordWithUID(u"user01")
+        record02 = yield txn.directoryService().recordWithUID(u"user02")
+        record03 = yield txn.directoryService().recordWithUID(u"user03")
+
+        group01 = yield txn.directoryService().recordWithUID(u"__top_group_1__")
+        group02 = yield txn.directoryService().recordWithUID(u"right_coast")
+
+        # Add user02 and user03 as individual delegates
+        yield Delegates.addDelegate(txn, record01, record02, True)
+        yield Delegates.addDelegate(txn, record01, record03, False)
+
+        # Add group delegates
+        yield Delegates.addDelegate(txn, record01, group01, True)
+        yield Delegates.addDelegate(txn, record01, group02, False)
+
+        # Add external delegates
+        yield txn.assignExternalDelegates(u"user01", None, None, u"external1", u"external2")
+
+        yield self.commitTransaction(0)
+
+
+        # Initially no local delegates
+        txn = self.theTransactionUnderTest(1)
+        delegates = yield txn.dumpIndividualDelegatesLocal(u"user01")
+        self.assertEqual(len(delegates), 0)
+        delegates = yield txn.dumpGroupDelegatesLocal(u"user04")
+        self.assertEqual(len(delegates), 0)
+        externals = yield txn.dumpExternalDelegatesLocal(u"user01")
+        self.assertEqual(len(externals), 0)
+        yield self.commitTransaction(1)
+
+        # Sync from remote side
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.loadRecord()
+        yield syncer.disableRemoteHome()
+        yield syncer.delegateReconcile()
+
+        # Now have local delegates
+        txn = self.theTransactionUnderTest(1)
+
+        delegates = yield txn.dumpIndividualDelegatesLocal(u"user01")
+        self.assertEqual(
+            set(delegates),
+            set((
+                DelegateRecord.make(delegator="user01", delegate="user02", readWrite=1),
+                DelegateRecord.make(delegator="user01", delegate="user03", readWrite=0),
+            )),
+        )
+
+        delegateGroups = yield txn.dumpGroupDelegatesLocal(u"user01")
+        group_top = yield txn.groupByUID(u"__top_group_1__")
+        group_right = yield txn.groupByUID(u"right_coast")
+        self.assertEqual(
+            set([item[0] for item in delegateGroups]),
+            set((
+                DelegateGroupsRecord.make(delegator="user01", groupID=group_top.groupID, readWrite=1, isExternal=False),
+                DelegateGroupsRecord.make(delegator="user01", groupID=group_right.groupID, readWrite=0, isExternal=False),
+            )),
+        )
+
+        externals = yield txn.dumpExternalDelegatesLocal(u"user01")
+        self.assertEqual(
+            set(externals),
+            set((
+                ExternalDelegateGroupsRecord.make(
+                    delegator="user01",
+                    groupUIDRead="external1",
+                    groupUIDWrite="external2",
+                ),
+            )),
+        )
+
+        yield self.commitTransaction(1)
+
+
+    @inlineCallbacks
+    def test_notifications_reconcile(self):
+        """
+        Test that L{notificationsReconcile} copies over the full set of notification objects.
+        """
+
+        # Create remote home - and add some fake notifications
+        yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        notifications = yield self.theTransactionUnderTest(0).notificationsWithUID("user01", create=True)
+        uid1 = str(uuid4())
+        obj1 = yield notifications.writeNotificationObject(uid1, "type1", "data1")
+        id1 = obj1.id()
+        uid2 = str(uuid4())
+        obj2 = yield notifications.writeNotificationObject(uid2, "type2", "data2")
+        id2 = obj2.id()
+        yield self.commitTransaction(0)
+
+        # Sync from remote side
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.loadRecord()
+        yield syncer.prepareCalendarHome()
+        yield syncer.disableRemoteHome()
+        changes = yield syncer.notificationsReconcile()
+        self.assertEqual(changes, 2)
+
+        # Now have local notifications
+        notifications = yield NotificationCollection.notificationsWithUID(
+            self.theTransactionUnderTest(1),
+            "user01",
+            status=_HOME_STATUS_MIGRATING,
+        )
+        results = yield notifications.notificationObjects()
+        self.assertEqual(len(results), 2)
+        for result in results:
+            for test_uid, test_id, test_type, test_data in ((uid1, id1, "type1", "data1",), (uid2, id2, "type2", "data2",),):
+                if result.uid() == test_uid:
+                    self.assertNotEqual(result.id(), test_id)
+                    self.assertEqual(json.loads(result.notificationType()), test_type)
+                    data = yield result.notificationData()
+                    self.assertEqual(json.loads(data), test_data)
+                    break
+            else:
+                self.fail("Notification uid {} not found".format(result.uid()))
+        yield self.commitTransaction(1)
+
+
+    @inlineCallbacks
+    def test_disable_remote_home(self):
+        """
+        Test that L{disableRemoteHome} changes the remote home status to disabled so that it
+        can no longer be accessed as a normal home.
+        """
+
+        # Create remote home and notification collection
+        yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        yield self.theTransactionUnderTest(0).notificationsWithUID("user01", create=True)
+        yield self.commitTransaction(0)
+
+        # Sync from remote side
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.loadRecord()
+        yield syncer.prepareCalendarHome()
+        yield syncer.disableRemoteHome()
+
+        # It is disabled
+        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01")
+        self.assertTrue(home is None)
+        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_NORMAL)
+        self.assertTrue(home is None)
+        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_DISABLED)
+        self.assertTrue(home is not None)
+        yield self.commitTransaction(0)
+
+
+
+class TestSharingSync(MultiStoreConduitTest):
+    """
+    Test that L{CrossPodHomeSync} sharing sync works.
+    """
+
+    @inlineCallbacks
+    def setUp(self):
+        self.accounts = FilePath(__file__).sibling("accounts").child("groupAccounts.xml")
+        self.augments = FilePath(__file__).sibling("accounts").child("augments.xml")
+        yield super(TestSharingSync, self).setUp()
+        yield self.populate()
+
+
+    def configure(self):
+        super(TestSharingSync, self).configure()
+        config.Sharing.Enabled = True
+        config.Sharing.Calendars.Enabled = True
+        config.Sharing.Calendars.Groups.Enabled = True
+        config.Sharing.Calendars.Groups.ReconciliationDelaySeconds = 0
+
+
+    @inlineCallbacks
+    def populate(self):
+        yield populateCalendarsFrom(self.requirements, self.theStoreUnderTest(0))
+
+    requirements = {
+        "user01" : None,
+        "user02" : None,
+        "user06" : None,
+        "user07" : None,
+        "user08" : None,
+        "user09" : None,
+        "user10" : None,
+    }
+
+
+    @inlineCallbacks
+    def _createShare(self, shareFrom, shareTo, accept=True):
+        # Invite
+        txnindex = 1 if shareFrom[0] == "p" else 0
+        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(txnindex), name=shareFrom, create=True)
+        calendar = yield home.childWithName("calendar")
+        shareeView = yield calendar.inviteUIDToShare(shareTo, _BIND_MODE_READ, "summary")
+        yield self.commitTransaction(txnindex)
+
+        # Accept
+        if accept:
+            inviteUID = shareeView.shareUID()
+            txnindex = 1 if shareTo[0] == "p" else 0
+            shareeHome = yield self.homeUnderTest(txn=self.theTransactionUnderTest(txnindex), name=shareTo)
+            shareeView = yield shareeHome.acceptShare(inviteUID)
+            sharedName = shareeView.name()
+            yield self.commitTransaction(txnindex)
+        else:
+            sharedName = None
+
+        returnValue(sharedName)
+
+
+    @inlineCallbacks
+    def test_shared_collections_reconcile(self):
+        """
+        Test that L{sharedByCollectionsReconcile} and L{sharedToCollectionsReconcile} copy over the full set of shared calendars.
+        """
+
+        # Create home
+        yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        yield self.notificationCollectionUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        yield self.commitTransaction(0)
+
+        # Shared by migrating user
+        shared_name_02 = yield self._createShare("user01", "user02")
+        shared_name_03 = yield self._createShare("user01", "puser03")
+
+        # Shared to migrating user
+        shared_name_04 = yield self._createShare("user04", "user01")
+        shared_name_05 = yield self._createShare("puser05", "user01")
+
+        # Sync from remote side
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.loadRecord()
+        yield syncer.sync()
+        yield syncer.disableRemoteHome()
+        changes = yield syncer.sharedByCollectionsReconcile()
+        self.assertEqual(changes, 2)
+        changes = yield syncer.sharedToCollectionsReconcile()
+        self.assertEqual(changes, 2)
+
+        # Local calendar exists with shares
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        calendar1 = yield home1.childWithName("calendar")
+        invites1 = yield calendar1.sharingInvites()
+        self.assertEqual(len(invites1), 2)
+        self.assertEqual(set([invite.uid for invite in invites1]), set((shared_name_02, shared_name_03,)))
+        yield self.commitTransaction(1)
+
+        # Remote sharee can access it
+        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user02")
+        calendar0 = yield home0.childWithName(shared_name_02)
+        self.assertTrue(calendar0 is not None)
+
+        # Local sharee can access it
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="puser03")
+        calendar1 = yield home1.childWithName(shared_name_03)
+        self.assertTrue(calendar1 is not None)
+
+        # Local shared calendars exist
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        calendar1 = yield home1.childWithName(shared_name_04)
+        self.assertTrue(calendar1 is not None)
+        calendar1 = yield home1.childWithName(shared_name_05)
+        self.assertTrue(calendar1 is not None)
+        yield self.commitTransaction(1)
+
+        # Sharers see migrated user as sharee
+        externalHome0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_EXTERNAL)
+        calendar0 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(0), home="user04", name="calendar")
+        invites = yield calendar0.allInvitations()
+        self.assertEqual(len(invites), 1)
+        self.assertEqual(invites[0].shareeUID, "user01")
+        self.assertEqual(invites[0].shareeHomeID, externalHome0.id())
+        yield self.commitTransaction(0)
+
+        shareeHome1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        calendar1 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(1), home="puser05", name="calendar")
+        invites = yield calendar1.allInvitations()
+        self.assertEqual(len(invites), 1)
+        self.assertEqual(invites[0].shareeUID, "user01")
+        self.assertEqual(invites[0].shareeHomeID, shareeHome1.id())
+        yield self.commitTransaction(1)
+
+
+    @inlineCallbacks
+    def test_group_shared_collections_reconcile(self):
+        """
+        Test that L{sharedByCollectionsReconcile} correctly reconciles a calendar shared by the migrating user to a group.
+        """
+
+        # Create home
+        yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        yield self.notificationCollectionUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        yield self.commitTransaction(0)
+
+        # Shared by migrating user
+        yield self._createShare("user01", "group02", accept=False)
+
+        # Sync from remote side
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.loadRecord()
+        yield syncer.sync()
+        yield syncer.disableRemoteHome()
+        changes = yield syncer.sharedByCollectionsReconcile()
+        self.assertEqual(changes, 3)
+        changes = yield syncer.sharedToCollectionsReconcile()
+        self.assertEqual(changes, 0)
+
+        # Local calendar exists with shares
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        calendar1 = yield home1.childWithName("calendar")
+        invites1 = yield calendar1.sharingInvites()
+        self.assertEqual(len(invites1), 3)
+        sharee = yield GroupShareeRecord.querysimple(self.theTransactionUnderTest(1), calendarID=calendar1.id())
+        self.assertEqual(len(sharee), 1)
+        group = yield GroupsRecord.querysimple(self.theTransactionUnderTest(1), groupID=sharee[0].groupID)
+        self.assertEqual(len(group), 1)
+        self.assertEqual(group[0].groupUID, "group02")
+        yield self.commitTransaction(1)
+
+
+
+class TestGroupAttendeeSync(MultiStoreConduitTest):
+    """
+    GroupAttendeeReconciliation tests
+    """
+
+    now = {"now1": DateTime.getToday().getYear() + 1}
+
+    groupdata1 = """BEGIN:VCALENDAR
+CALSCALE:GREGORIAN
+PRODID:-//Example Inc.//Example Calendar//EN
+VERSION:2.0
+BEGIN:VEVENT
+DTSTAMP:20051222T205953Z
+CREATED:20060101T150000Z
+DTSTART:{now1:04d}0101T100000Z
+DURATION:PT1H
+SUMMARY:event 1
+UID:event1 at ninevah.local
+END:VEVENT
+END:VCALENDAR""".format(**now)
+
+    groupdata2 = """BEGIN:VCALENDAR
+CALSCALE:GREGORIAN
+PRODID:-//Example Inc.//Example Calendar//EN
+VERSION:2.0
+BEGIN:VEVENT
+DTSTAMP:20051222T205953Z
+CREATED:20060101T150000Z
+DTSTART:{now1:04d}0101T100000Z
+DURATION:PT1H
+SUMMARY:event 2
+UID:event2 at ninevah.local
+ORGANIZER:mailto:user01 at example.com
+ATTENDEE:mailto:user01 at example.com
+ATTENDEE:mailto:group02 at example.com
+END:VEVENT
+END:VCALENDAR""".format(**now)
+
+    groupdata3 = """BEGIN:VCALENDAR
+CALSCALE:GREGORIAN
+PRODID:-//Example Inc.//Example Calendar//EN
+VERSION:2.0
+BEGIN:VEVENT
+DTSTAMP:20051222T205953Z
+CREATED:20060101T150000Z
+DTSTART:{now1:04d}0101T100000Z
+DURATION:PT1H
+SUMMARY:event 3
+UID:event3 at ninevah.local
+ORGANIZER:mailto:user01 at example.com
+ATTENDEE:mailto:user01 at example.com
+ATTENDEE:mailto:group04 at example.com
+END:VEVENT
+END:VCALENDAR""".format(**now)
+
+    @inlineCallbacks
+    def setUp(self):
+        self.accounts = FilePath(__file__).sibling("accounts").child("groupAccounts.xml")
+        yield super(TestGroupAttendeeSync, self).setUp()
+        yield self.populate()
+
+
+    def configure(self):
+        super(TestGroupAttendeeSync, self).configure()
+        config.GroupAttendees.Enabled = True
+        config.GroupAttendees.ReconciliationDelaySeconds = 0
+        config.GroupAttendees.AutoUpdateSecondsFromNow = 0
+
+
+    @inlineCallbacks
+    def populate(self):
+        yield populateCalendarsFrom(self.requirements, self.theStoreUnderTest(0))
+
+    requirements = {
+        "user01" : None,
+        "user02" : None,
+        "user06" : None,
+        "user07" : None,
+        "user08" : None,
+        "user09" : None,
+        "user10" : None,
+    }
+
+    @inlineCallbacks
+    def test_group_attendees(self):
+        """
+        Test that L{groupAttendeeReconcile} links groups to the associated calendar object.
+        """
+
+        home0 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        calendar0 = yield home0.childWithName("calendar")
+        yield calendar0.createCalendarObjectWithName("1.ics", Component.fromString(self.groupdata1))
+        yield calendar0.createCalendarObjectWithName("2.ics", Component.fromString(self.groupdata2))
+        yield calendar0.createCalendarObjectWithName("3.ics", Component.fromString(self.groupdata3))
+        yield self.commitTransaction(0)
+
+        yield JobItem.waitEmpty(self.theStoreUnderTest(0).newTransaction, reactor, 60.0)
+
+        # Trigger sync
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.sync()
+
+        # Link groups
+        len_links = yield syncer.groupAttendeeReconcile()
+        self.assertEqual(len_links, 2)
+
+        # Local calendar exists
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        calendar1 = yield home1.childWithName("calendar")
+        self.assertTrue(calendar1 is not None)
+        children = yield calendar1.objectResources()
+        self.assertEqual(set([child.name() for child in children]), set(("1.ics", "2.ics", "3.ics",)))
+
+        object2 = yield calendar1.objectResourceWithName("2.ics")
+        record = (yield object2.groupEventLinks()).values()[0]
+        group02 = yield self.theTransactionUnderTest(1).groupByUID(u"group02")
+        self.assertEqual(record.groupID, group02.groupID)
+        self.assertEqual(record.membershipHash, group02.membershipHash)
+
+        object3 = yield calendar1.objectResourceWithName("3.ics")
+        record = (yield object3.groupEventLinks()).values()[0]
+        group04 = yield self.theTransactionUnderTest(1).groupByUID(u"group04")
+        self.assertEqual(record.groupID, group04.groupID)
+        self.assertEqual(record.membershipHash, group04.membershipHash)

Deleted: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_migration.py
===================================================================
--- CalendarServer/trunk/txdav/common/datastore/podding/migration/test/test_migration.py	2015-03-10 15:32:00 UTC (rev 14551)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_migration.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -1,693 +0,0 @@
-##
-# Copyright (c) 2015 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-from pycalendar.datetime import DateTime
-from twisted.internet.defer import inlineCallbacks, returnValue
-from twisted.python.filepath import FilePath
-from twistedcaldav.config import config
-from twistedcaldav.ical import Component
-from txdav.common.datastore.podding.migration.home_sync import CrossPodHomeSync
-from txdav.common.datastore.podding.test.util import MultiStoreConduitTest
-from txdav.common.datastore.sql_tables import _BIND_MODE_READ, \
-    _HOME_STATUS_DISABLED, _HOME_STATUS_NORMAL, _HOME_STATUS_EXTERNAL, \
-    _HOME_STATUS_MIGRATING
-from txdav.common.datastore.test.util import populateCalendarsFrom
-from txdav.who.delegates import Delegates
-from txweb2.http_headers import MimeType
-from txweb2.stream import MemoryStream
-from txdav.caldav.datastore.scheduling.ischedule.delivery import IScheduleRequest
-from txdav.caldav.datastore.scheduling.ischedule.resource import IScheduleInboxResource
-from txweb2.dav.test.util import SimpleRequest
-from txdav.caldav.datastore.test.common import CaptureProtocol
-
-
-class TestCompleteMigrationCycle(MultiStoreConduitTest):
-    """
-    Test that a full migration cycle using L{CrossPodHomeSync} works.
-    """
-
-    def __init__(self, methodName='runTest'):
-        super(TestCompleteMigrationCycle, self).__init__(methodName)
-        self.stash = {}
-
-
-    @inlineCallbacks
-    def setUp(self):
-        @inlineCallbacks
-        def _fakeSubmitRequest(iself, ssl, host, port, request):
-            pod = (port - 8008) / 100
-            inbox = IScheduleInboxResource(self.site.resource, self.theStoreUnderTest(pod), podding=True)
-            response = yield inbox.http_POST(SimpleRequest(
-                self.site,
-                "POST",
-                "http://{host}:{port}/podding".format(host=host, port=port),
-                request.headers,
-                request.stream.mem,
-            ))
-            returnValue(response)
-
-
-        self.patch(IScheduleRequest, "_submitRequest", _fakeSubmitRequest)
-        self.accounts = FilePath(__file__).sibling("accounts").child("groupAccounts.xml")
-        self.augments = FilePath(__file__).sibling("accounts").child("augments.xml")
-        yield super(TestCompleteMigrationCycle, self).setUp()
-        yield self.populate()
-
-
-    def configure(self):
-        super(TestCompleteMigrationCycle, self).configure()
-        config.GroupAttendees.Enabled = True
-        config.GroupAttendees.ReconciliationDelaySeconds = 0
-        config.GroupAttendees.AutoUpdateSecondsFromNow = 0
-        config.AccountingCategories.migration = True
-        config.AccountingPrincipals = ["*"]
-
-
-    @inlineCallbacks
-    def populate(self):
-        yield populateCalendarsFrom(self.requirements0, self.theStoreUnderTest(0))
-        yield populateCalendarsFrom(self.requirements1, self.theStoreUnderTest(1))
-
-    requirements0 = {
-        "user01" : None,
-        "user02" : None,
-        "user03" : None,
-        "user04" : None,
-        "user05" : None,
-        "user06" : None,
-        "user07" : None,
-        "user08" : None,
-        "user09" : None,
-        "user10" : None,
-    }
-
-    requirements1 = {
-        "puser01" : None,
-        "puser02" : None,
-        "puser03" : None,
-        "puser04" : None,
-        "puser05" : None,
-        "puser06" : None,
-        "puser07" : None,
-        "puser08" : None,
-        "puser09" : None,
-        "puser10" : None,
-    }
-
-
-    @inlineCallbacks
-    def _createShare(self, shareFrom, shareTo, accept=True):
-        # Invite
-        txnindex = 1 if shareFrom[0] == "p" else 0
-        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(txnindex), name=shareFrom, create=True)
-        calendar = yield home.childWithName("calendar")
-        shareeView = yield calendar.inviteUIDToShare(shareTo, _BIND_MODE_READ, "summary")
-        yield self.commitTransaction(txnindex)
-
-        # Accept
-        if accept:
-            inviteUID = shareeView.shareUID()
-            txnindex = 1 if shareTo[0] == "p" else 0
-            shareeHome = yield self.homeUnderTest(txn=self.theTransactionUnderTest(txnindex), name=shareTo)
-            shareeView = yield shareeHome.acceptShare(inviteUID)
-            sharedName = shareeView.name()
-            yield self.commitTransaction(txnindex)
-        else:
-            sharedName = None
-
-        returnValue(sharedName)
-
-
-    def attachmentToString(self, attachment):
-        """
-        Convenience to convert an L{IAttachment} to a string.
-
-        @param attachment: an L{IAttachment} provider to convert into a string.
-
-        @return: a L{Deferred} that fires with the contents of the attachment.
-
-        @rtype: L{Deferred} firing C{bytes}
-        """
-        capture = CaptureProtocol()
-        attachment.retrieve(capture)
-        return capture.deferred
-
-
-    now = {
-        "now": DateTime.getToday().getYear(),
-        "now1": DateTime.getToday().getYear() + 1,
-    }
-
-    data01_1 = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid_data01_1
-DTSTART:{now1:04d}0102T140000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-RRULE:FREQ=WEEKLY
-SUMMARY:data01_1
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**now)
-
-    data01_1_changed = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid_data01_1
-DTSTART:{now1:04d}0102T140000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-RRULE:FREQ=WEEKLY
-SUMMARY:data01_1_changed
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**now)
-
-    data01_2 = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid_data01_2
-DTSTART:{now1:04d}0102T160000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-SUMMARY:data01_2
-ORGANIZER:mailto:user01 at example.com
-ATTENDEE:mailto:user01 at example.com
-ATTENDEE:mailto:user02 at example.com
-ATTENDEE:mailto:puser02 at example.com
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**now)
-
-    data01_3 = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid_data01_3
-DTSTART:{now1:04d}0102T180000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-SUMMARY:data01_3
-ORGANIZER:mailto:user01 at example.com
-ATTENDEE:mailto:user01 at example.com
-ATTENDEE:mailto:group02 at example.com
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**now)
-
-    data02_1 = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid_data02_1
-DTSTART:{now1:04d}0103T140000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-RRULE:FREQ=WEEKLY
-SUMMARY:data02_1
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**now)
-
-    data02_2 = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid_data02_2
-DTSTART:{now1:04d}0103T160000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-SUMMARY:data02_2
-ORGANIZER:mailto:user02 at example.com
-ATTENDEE:mailto:user02 at example.com
-ATTENDEE:mailto:user01 at example.com
-ATTENDEE:mailto:puser02 at example.com
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**now)
-
-    data02_3 = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid_data02_3
-DTSTART:{now1:04d}0103T180000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-SUMMARY:data02_3
-ORGANIZER:mailto:user02 at example.com
-ATTENDEE:mailto:user02 at example.com
-ATTENDEE:mailto:group01 at example.com
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**now)
-
-    datap02_1 = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid_datap02_1
-DTSTART:{now1:04d}0103T140000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-RRULE:FREQ=WEEKLY
-SUMMARY:datap02_1
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**now)
-
-    datap02_2 = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid_datap02_2
-DTSTART:{now1:04d}0103T160000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-SUMMARY:datap02_2
-ORGANIZER:mailto:puser02 at example.com
-ATTENDEE:mailto:puser02 at example.com
-ATTENDEE:mailto:user01 at example.com
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**now)
-
-    datap02_3 = """BEGIN:VCALENDAR
-VERSION:2.0
-CALSCALE:GREGORIAN
-PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-BEGIN:VEVENT
-UID:uid_datap02_3
-DTSTART:{now1:04d}0103T180000Z
-DURATION:PT1H
-CREATED:20060102T190000Z
-DTSTAMP:20051222T210507Z
-SUMMARY:datap02_3
-ORGANIZER:mailto:puser02 at example.com
-ATTENDEE:mailto:puser02 at example.com
-ATTENDEE:mailto:group01 at example.com
-END:VEVENT
-END:VCALENDAR
-""".replace("\n", "\r\n").format(**now)
-
-
-    @inlineCallbacks
-    def preCheck(self):
-        """
-        Checks prior to starting any tests
-        """
-
-        for i in range(self.numberOfStores):
-            txn = self.theTransactionUnderTest(i)
-            record = yield txn.directoryService().recordWithUID(u"user01")
-            self.assertEqual(record.serviceNodeUID, "A")
-            self.assertEqual(record.thisServer(), i == 0)
-            record = yield txn.directoryService().recordWithUID(u"user02")
-            self.assertEqual(record.serviceNodeUID, "A")
-            self.assertEqual(record.thisServer(), i == 0)
-            record = yield txn.directoryService().recordWithUID(u"puser02")
-            self.assertEqual(record.serviceNodeUID, "B")
-            self.assertEqual(record.thisServer(), i == 1)
-            yield self.commitTransaction(i)
-
-
-    @inlineCallbacks
-    def initialState(self):
-        """
-        Setup the server with an initial set of data
-
-        user01 - migrating user
-        user02 - has a calendar shared with user01
-        user03 - shared to by user01
-
-        puser01 - user on other pod
-        puser02 - has a calendar shared with user01
-        puser03 - shared to by user01
-        """
-
-        # Data for user01
-        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
-        calendar = yield home.childWithName("calendar")
-        yield calendar.createCalendarObjectWithName("01_1.ics", Component.fromString(self.data01_1))
-        yield calendar.createCalendarObjectWithName("01_2.ics", Component.fromString(self.data01_2))
-        obj3 = yield calendar.createCalendarObjectWithName("01_3.ics", Component.fromString(self.data01_3))
-        attachment, _ignore_location = yield obj3.addAttachment(None, MimeType.fromString("text/plain"), "test.txt", MemoryStream("Here is some text #1."))
-        self.stash["user01_attachment_id"] = attachment.id()
-        self.stash["user01_attachment_md5"] = attachment.md5()
-        self.stash["user01_attachment_mid"] = attachment.managedID()
-        yield self.commitTransaction(0)
-
-        # Data for user02
-        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user02", create=True)
-        calendar = yield home.childWithName("calendar")
-        yield calendar.createCalendarObjectWithName("02_1.ics", Component.fromString(self.data02_1))
-        yield calendar.createCalendarObjectWithName("02_2.ics", Component.fromString(self.data02_2))
-        yield calendar.createCalendarObjectWithName("02_3.ics", Component.fromString(self.data02_3))
-        yield self.commitTransaction(0)
-
-        # Data for puser02
-        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="puser02", create=True)
-        calendar = yield home.childWithName("calendar")
-        yield calendar.createCalendarObjectWithName("p02_1.ics", Component.fromString(self.datap02_1))
-        yield calendar.createCalendarObjectWithName("p02_2.ics", Component.fromString(self.datap02_2))
-        yield calendar.createCalendarObjectWithName("p02_3.ics", Component.fromString(self.datap02_3))
-        yield self.commitTransaction(1)
-
-        # Share calendars
-        self.stash["sharename_user01_to_user03"] = yield self._createShare("user01", "user03")
-        self.stash["sharename_user01_to_puser03"] = yield self._createShare("user01", "puser03")
-        self.stash["sharename_user02_to_user01"] = yield self._createShare("user02", "user01")
-        self.stash["sharename_puser02_to_user01"] = yield self._createShare("puser02", "user01")
-
-        # Add some delegates
-        txn = self.theTransactionUnderTest(0)
-        record01 = yield txn.directoryService().recordWithUID(u"user01")
-        record02 = yield txn.directoryService().recordWithUID(u"user02")
-        record03 = yield txn.directoryService().recordWithUID(u"user03")
-        precord01 = yield txn.directoryService().recordWithUID(u"puser01")
-
-        group02 = yield txn.directoryService().recordWithUID(u"group02")
-        group03 = yield txn.directoryService().recordWithUID(u"group03")
-
-        # Add user02 and user03 as individual delegates
-        yield Delegates.addDelegate(txn, record01, record02, True)
-        yield Delegates.addDelegate(txn, record01, record03, False)
-        yield Delegates.addDelegate(txn, record01, precord01, False)
-
-        # Add group delegates
-        yield Delegates.addDelegate(txn, record01, group02, True)
-        yield Delegates.addDelegate(txn, record01, group03, False)
-
-        # Add external delegates
-        yield txn.assignExternalDelegates(u"user01", None, None, u"external1", u"external2")
-
-        yield self.commitTransaction(0)
-
-        yield self.waitAllEmpty()
-
-
-    @inlineCallbacks
-    def secondState(self):
-        """
-        Setup the server with data changes appearing after the first sync
-        """
-        txn = self.theTransactionUnderTest(0)
-        obj = yield self.calendarObjectUnderTest(txn, name="01_1.ics", calendar_name="calendar", home="user01")
-        yield obj.setComponent(self.data01_1_changed)
-
-        obj = yield self.calendarObjectUnderTest(txn, name="02_2.ics", calendar_name="calendar", home="user02")
-        attachment, _ignore_location = yield obj.addAttachment(None, MimeType.fromString("text/plain"), "test_02.txt", MemoryStream("Here is some text #02."))
-        self.stash["user02_attachment_id"] = attachment.id()
-        self.stash["user02_attachment_md5"] = attachment.md5()
-        self.stash["user02_attachment_mid"] = attachment.managedID()
-
-        yield self.commitTransaction(0)
-
-        yield self.waitAllEmpty()
-
-
-    @inlineCallbacks
-    def finalState(self):
-        """
-        Setup the server with data changes appearing before the final sync
-        """
-        txn = self.theTransactionUnderTest(1)
-        obj = yield self.calendarObjectUnderTest(txn, name="p02_2.ics", calendar_name="calendar", home="puser02")
-        attachment, _ignore_location = yield obj.addAttachment(None, MimeType.fromString("text/plain"), "test_p02.txt", MemoryStream("Here is some text #p02."))
-        self.stash["puser02_attachment_id"] = attachment.id()
-        self.stash["puser02_attachment_mid"] = attachment.managedID()
-        self.stash["puser02_attachment_md5"] = attachment.md5()
-
-        yield self.commitTransaction(1)
-
-        yield self.waitAllEmpty()
-
-
-    @inlineCallbacks
-    def switchAccounts(self):
-        """
-        Switch the migrated user accounts to point to the new pod
-        """
-
-        for i in range(self.numberOfStores):
-            txn = self.theTransactionUnderTest(i)
-            record = yield txn.directoryService().recordWithUID(u"user01")
-            yield self.changeRecord(record, txn.directoryService().fieldName.serviceNodeUID, u"B", directory=txn.directoryService())
-            yield self.commitTransaction(i)
-
-        for i in range(self.numberOfStores):
-            txn = self.theTransactionUnderTest(i)
-            record = yield txn.directoryService().recordWithUID(u"user01")
-            self.assertEqual(record.serviceNodeUID, "B")
-            self.assertEqual(record.thisServer(), i == 1)
-            record = yield txn.directoryService().recordWithUID(u"user02")
-            self.assertEqual(record.serviceNodeUID, "A")
-            self.assertEqual(record.thisServer(), i == 0)
-            record = yield txn.directoryService().recordWithUID(u"puser02")
-            self.assertEqual(record.serviceNodeUID, "B")
-            self.assertEqual(record.thisServer(), i == 1)
-            yield self.commitTransaction(i)
-
-
-    @inlineCallbacks
-    def postCheck(self):
-        """
-        Checks after migration is done
-        """
-
-        # Check that the home has been moved
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user01")
-        self.assertTrue(home.external())
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_NORMAL)
-        self.assertTrue(home is None)
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_EXTERNAL)
-        self.assertTrue(home is not None)
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_DISABLED)
-        self.assertTrue(home is not None)
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_MIGRATING)
-        self.assertTrue(home is None)
-        yield self.commitTransaction(0)
-
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01")
-        self.assertTrue(home.normal())
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_NORMAL)
-        self.assertTrue(home is not None)
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_EXTERNAL)
-        self.assertTrue(home is None)
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_DISABLED)
-        self.assertTrue(home is not None)
-        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
-        self.assertTrue(home is None)
-        yield self.commitTransaction(1)
-
-        # Check that the notifications have been moved
-        notifications = yield self.notificationCollectionUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_NORMAL)
-        self.assertTrue(notifications is None)
-        notifications = yield self.notificationCollectionUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_EXTERNAL)
-        self.assertTrue(notifications is None)
-        notifications = yield self.notificationCollectionUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_DISABLED)
-        self.assertTrue(notifications is not None)
-        yield self.commitTransaction(0)
-
-        notifications = yield self.notificationCollectionUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_NORMAL)
-        self.assertTrue(notifications is not None)
-        notifications = yield self.notificationCollectionUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_EXTERNAL)
-        self.assertTrue(notifications is None)
-        notifications = yield self.notificationCollectionUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_DISABLED)
-        self.assertTrue(notifications is not None)
-        yield self.commitTransaction(1)
-
-        # New pod data
-        homes = {}
-        homes["user01"] = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01")
-        homes["user02"] = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user02")
-        self.assertTrue(homes["user02"].external())
-        homes["user03"] = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user03")
-        self.assertTrue(homes["user03"].external())
-        homes["puser01"] = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="puser01")
-        self.assertTrue(homes["puser01"].normal())
-        homes["puser02"] = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="puser02")
-        self.assertTrue(homes["puser02"].normal())
-        homes["puser03"] = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="puser03")
-        self.assertTrue(homes["puser03"].normal())
-
-        # Check calendar data on new pod
-        calendars = yield homes["user01"].loadChildren()
-        calnames = dict([(calendar.name(), calendar) for calendar in calendars])
-        self.assertEqual(
-            set(calnames.keys()),
-            set(("calendar", "tasks", "inbox", self.stash["sharename_user02_to_user01"], self.stash["sharename_puser02_to_user01"],))
-        )
-
-        # Check shared-by user01 on new pod
-        shared = calnames["calendar"]
-        invitations = yield shared.sharingInvites()
-        by_sharee = dict([(invitation.shareeUID, invitation) for invitation in invitations])
-        self.assertEqual(len(invitations), 2)
-        self.assertEqual(set(by_sharee.keys()), set(("user03", "puser03",)))
-        self.assertEqual(by_sharee["user03"].shareeHomeID, homes["user03"].id())
-        self.assertEqual(by_sharee["puser03"].shareeHomeID, homes["puser03"].id())
-
-        # Check shared-to user01 on new pod
-        shared = calnames[self.stash["sharename_user02_to_user01"]]
-        self.assertEqual(shared.ownerHome().uid(), "user02")
-        self.assertEqual(shared.ownerHome().id(), homes["user02"].id())
-
-        shared = calnames[self.stash["sharename_puser02_to_user01"]]
-        self.assertEqual(shared.ownerHome().uid(), "puser02")
-        self.assertEqual(shared.ownerHome().id(), homes["puser02"].id())
-
-        shared = yield homes["puser02"].calendarWithName("calendar")
-        invitations = yield shared.sharingInvites()
-        self.assertEqual(len(invitations), 1)
-        self.assertEqual(invitations[0].shareeHomeID, homes["user01"].id())
-
-        yield self.commitTransaction(1)
-
-        # Old pod data
-        homes = {}
-        homes["user01"] = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user01")
-        homes["user02"] = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user02")
-        self.assertTrue(homes["user02"].normal())
-        homes["user03"] = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user03")
-        self.assertTrue(homes["user03"].normal())
-        homes["puser01"] = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="puser01")
-        self.assertTrue(homes["puser01"] is None)
-        homes["puser02"] = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="puser02")
-        self.assertTrue(homes["puser02"].external())
-        homes["puser03"] = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="puser03")
-        self.assertTrue(homes["puser03"].external())
-
-        # Check shared-by user01 on old pod
-        shared = yield homes["user03"].calendarWithName(self.stash["sharename_user01_to_user03"])
-        self.assertEqual(shared.ownerHome().uid(), "user01")
-        self.assertEqual(shared.ownerHome().id(), homes["user01"].id())
-
-        # Check shared-to user01 on old pod
-        shared = yield homes["user02"].calendarWithName("calendar")
-        invitations = yield shared.sharingInvites()
-        self.assertEqual(len(invitations), 1)
-        self.assertEqual(invitations[0].shareeHomeID, homes["user01"].id())
-
-        yield self.commitTransaction(0)
-
-        # Delegates on each pod
-        for pod in range(self.numberOfStores):
-            txn = self.theTransactionUnderTest(pod)
-            records = {}
-            for ctr in range(10):
-                uid = u"user{:02d}".format(ctr + 1)
-                records[uid] = yield txn.directoryService().recordWithUID(uid)
-            for ctr in range(10):
-                uid = u"puser{:02d}".format(ctr + 1)
-                records[uid] = yield txn.directoryService().recordWithUID(uid)
-            for ctr in range(10):
-                uid = u"group{:02d}".format(ctr + 1)
-                records[uid] = yield txn.directoryService().recordWithUID(uid)
-
-            delegates = yield Delegates.delegatesOf(txn, records["user01"], True, False)
-            self.assertTrue(records["user02"] in delegates)
-            self.assertTrue(records["group02"] in delegates)
-            delegates = yield Delegates.delegatesOf(txn, records["user01"], True, True)
-            self.assertTrue(records["user02"] in delegates)
-            self.assertTrue(records["user06"] in delegates)
-            self.assertTrue(records["user07"] in delegates)
-            self.assertTrue(records["user08"] in delegates)
-
-            delegates = yield Delegates.delegatesOf(txn, records["user01"], False, False)
-            self.assertTrue(records["user03"] in delegates)
-            self.assertTrue(records["group03"] in delegates)
-            self.assertTrue(records["puser01"] in delegates)
-            delegates = yield Delegates.delegatesOf(txn, records["user01"], False, True)
-            self.assertTrue(records["user03"] in delegates)
-            self.assertTrue(records["user07"] in delegates)
-            self.assertTrue(records["user08"] in delegates)
-            self.assertTrue(records["user09"] in delegates)
-            self.assertTrue(records["puser01"] in delegates)
-
-        # Attachments
-        obj = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(1), name="01_3.ics", calendar_name="calendar", home="user01")
-        attachment = yield obj.attachmentWithManagedID(self.stash["user01_attachment_mid"])
-        self.assertTrue(attachment is not None)
-        self.assertEqual(attachment.md5(), self.stash["user01_attachment_md5"])
-        data = yield self.attachmentToString(attachment)
-        self.assertEqual(data, "Here is some text #1.")
-
-
-    @inlineCallbacks
-    def test_migration(self):
-        """
-        Full migration cycle.
-        """
-
-        yield self.preCheck()
-
-        # Step 1. Live full sync
-        yield self.initialState()
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.sync()
-
-        # Step 2. Live incremental sync
-        yield self.secondState()
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.sync()
-
-        # Step 3. Disable home after final changes
-        yield self.finalState()
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
-        yield syncer.disableRemoteHome()
-
-        # Step 4. Final incremental sync
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01", final=True)
-        yield syncer.sync()
-
-        # Step 5. Final reconcile sync
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01", final=True)
-        yield syncer.finalSync()
-
-        # Step 6. Enable new home
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01", final=True)
-        yield syncer.enableLocalHome()
-
-        # Step 7. Remove old home
-        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01", final=True)
-        yield syncer.removeRemoteHome()
-
-        yield self.switchAccounts()
-
-        yield self.postCheck()

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_migration.py (from rev 14551, CalendarServer/trunk/txdav/common/datastore/podding/migration/test/test_migration.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_migration.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/migration/test/test_migration.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,693 @@
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from pycalendar.datetime import DateTime
+from twisted.internet.defer import inlineCallbacks, returnValue
+from twisted.python.filepath import FilePath
+from twistedcaldav.config import config
+from twistedcaldav.ical import Component
+from txdav.common.datastore.podding.migration.home_sync import CrossPodHomeSync
+from txdav.common.datastore.podding.test.util import MultiStoreConduitTest
+from txdav.common.datastore.sql_tables import _BIND_MODE_READ, \
+    _HOME_STATUS_DISABLED, _HOME_STATUS_NORMAL, _HOME_STATUS_EXTERNAL, \
+    _HOME_STATUS_MIGRATING
+from txdav.common.datastore.test.util import populateCalendarsFrom
+from txdav.who.delegates import Delegates
+from txweb2.http_headers import MimeType
+from txweb2.stream import MemoryStream
+from txdav.caldav.datastore.scheduling.ischedule.delivery import IScheduleRequest
+from txdav.caldav.datastore.scheduling.ischedule.resource import IScheduleInboxResource
+from txweb2.dav.test.util import SimpleRequest
+from txdav.caldav.datastore.test.common import CaptureProtocol
+
+
+class TestCompleteMigrationCycle(MultiStoreConduitTest):
+    """
+    Test that a full migration cycle using L{CrossPodHomeSync} works.
+    """
+
+    def __init__(self, methodName='runTest'):
+        super(TestCompleteMigrationCycle, self).__init__(methodName)
+        self.stash = {}
+
+
+    @inlineCallbacks
+    def setUp(self):
+        @inlineCallbacks
+        def _fakeSubmitRequest(iself, ssl, host, port, request):
+            pod = (port - 8008) / 100
+            inbox = IScheduleInboxResource(self.site.resource, self.theStoreUnderTest(pod), podding=True)
+            response = yield inbox.http_POST(SimpleRequest(
+                self.site,
+                "POST",
+                "http://{host}:{port}/podding".format(host=host, port=port),
+                request.headers,
+                request.stream.mem,
+            ))
+            returnValue(response)
+
+
+        self.patch(IScheduleRequest, "_submitRequest", _fakeSubmitRequest)
+        self.accounts = FilePath(__file__).sibling("accounts").child("groupAccounts.xml")
+        self.augments = FilePath(__file__).sibling("accounts").child("augments.xml")
+        yield super(TestCompleteMigrationCycle, self).setUp()
+        yield self.populate()
+
+
+    def configure(self):
+        super(TestCompleteMigrationCycle, self).configure()
+        config.GroupAttendees.Enabled = True
+        config.GroupAttendees.ReconciliationDelaySeconds = 0
+        config.GroupAttendees.AutoUpdateSecondsFromNow = 0
+        config.AccountingCategories.migration = True
+        config.AccountingPrincipals = ["*"]
+
+
+    @inlineCallbacks
+    def populate(self):
+        yield populateCalendarsFrom(self.requirements0, self.theStoreUnderTest(0))
+        yield populateCalendarsFrom(self.requirements1, self.theStoreUnderTest(1))
+
+    requirements0 = {
+        "user01" : None,
+        "user02" : None,
+        "user03" : None,
+        "user04" : None,
+        "user05" : None,
+        "user06" : None,
+        "user07" : None,
+        "user08" : None,
+        "user09" : None,
+        "user10" : None,
+    }
+
+    requirements1 = {
+        "puser01" : None,
+        "puser02" : None,
+        "puser03" : None,
+        "puser04" : None,
+        "puser05" : None,
+        "puser06" : None,
+        "puser07" : None,
+        "puser08" : None,
+        "puser09" : None,
+        "puser10" : None,
+    }
+
+
+    @inlineCallbacks
+    def _createShare(self, shareFrom, shareTo, accept=True):
+        # Invite
+        txnindex = 1 if shareFrom[0] == "p" else 0
+        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(txnindex), name=shareFrom, create=True)
+        calendar = yield home.childWithName("calendar")
+        shareeView = yield calendar.inviteUIDToShare(shareTo, _BIND_MODE_READ, "summary")
+        yield self.commitTransaction(txnindex)
+
+        # Accept
+        if accept:
+            inviteUID = shareeView.shareUID()
+            txnindex = 1 if shareTo[0] == "p" else 0
+            shareeHome = yield self.homeUnderTest(txn=self.theTransactionUnderTest(txnindex), name=shareTo)
+            shareeView = yield shareeHome.acceptShare(inviteUID)
+            sharedName = shareeView.name()
+            yield self.commitTransaction(txnindex)
+        else:
+            sharedName = None
+
+        returnValue(sharedName)
+
+
+    def attachmentToString(self, attachment):
+        """
+        Convenience to convert an L{IAttachment} to a string.
+
+        @param attachment: an L{IAttachment} provider to convert into a string.
+
+        @return: a L{Deferred} that fires with the contents of the attachment.
+
+        @rtype: L{Deferred} firing C{bytes}
+        """
+        capture = CaptureProtocol()
+        attachment.retrieve(capture)
+        return capture.deferred
+
+
+    now = {
+        "now": DateTime.getToday().getYear(),
+        "now1": DateTime.getToday().getYear() + 1,
+    }
+
+    data01_1 = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid_data01_1
+DTSTART:{now1:04d}0102T140000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+RRULE:FREQ=WEEKLY
+SUMMARY:data01_1
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**now)
+
+    data01_1_changed = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid_data01_1
+DTSTART:{now1:04d}0102T140000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+RRULE:FREQ=WEEKLY
+SUMMARY:data01_1_changed
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**now)
+
+    data01_2 = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid_data01_2
+DTSTART:{now1:04d}0102T160000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+SUMMARY:data01_2
+ORGANIZER:mailto:user01 at example.com
+ATTENDEE:mailto:user01 at example.com
+ATTENDEE:mailto:user02 at example.com
+ATTENDEE:mailto:puser02 at example.com
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**now)
+
+    data01_3 = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid_data01_3
+DTSTART:{now1:04d}0102T180000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+SUMMARY:data01_3
+ORGANIZER:mailto:user01 at example.com
+ATTENDEE:mailto:user01 at example.com
+ATTENDEE:mailto:group02 at example.com
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**now)
+
+    data02_1 = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid_data02_1
+DTSTART:{now1:04d}0103T140000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+RRULE:FREQ=WEEKLY
+SUMMARY:data02_1
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**now)
+
+    data02_2 = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid_data02_2
+DTSTART:{now1:04d}0103T160000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+SUMMARY:data02_2
+ORGANIZER:mailto:user02 at example.com
+ATTENDEE:mailto:user02 at example.com
+ATTENDEE:mailto:user01 at example.com
+ATTENDEE:mailto:puser02 at example.com
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**now)
+
+    data02_3 = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid_data02_3
+DTSTART:{now1:04d}0103T180000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+SUMMARY:data02_3
+ORGANIZER:mailto:user02 at example.com
+ATTENDEE:mailto:user02 at example.com
+ATTENDEE:mailto:group01 at example.com
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**now)
+
+    datap02_1 = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid_datap02_1
+DTSTART:{now1:04d}0103T140000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+RRULE:FREQ=WEEKLY
+SUMMARY:datap02_1
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**now)
+
+    datap02_2 = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid_datap02_2
+DTSTART:{now1:04d}0103T160000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+SUMMARY:datap02_2
+ORGANIZER:mailto:puser02 at example.com
+ATTENDEE:mailto:puser02 at example.com
+ATTENDEE:mailto:user01 at example.com
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**now)
+
+    datap02_3 = """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:uid_datap02_3
+DTSTART:{now1:04d}0103T180000Z
+DURATION:PT1H
+CREATED:20060102T190000Z
+DTSTAMP:20051222T210507Z
+SUMMARY:datap02_3
+ORGANIZER:mailto:puser02 at example.com
+ATTENDEE:mailto:puser02 at example.com
+ATTENDEE:mailto:group01 at example.com
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n").format(**now)
+
+
+    @inlineCallbacks
+    def preCheck(self):
+        """
+        Checks prior to starting any tests
+        """
+
+        for i in range(self.numberOfStores):
+            txn = self.theTransactionUnderTest(i)
+            record = yield txn.directoryService().recordWithUID(u"user01")
+            self.assertEqual(record.serviceNodeUID, "A")
+            self.assertEqual(record.thisServer(), i == 0)
+            record = yield txn.directoryService().recordWithUID(u"user02")
+            self.assertEqual(record.serviceNodeUID, "A")
+            self.assertEqual(record.thisServer(), i == 0)
+            record = yield txn.directoryService().recordWithUID(u"puser02")
+            self.assertEqual(record.serviceNodeUID, "B")
+            self.assertEqual(record.thisServer(), i == 1)
+            yield self.commitTransaction(i)
+
+
+    @inlineCallbacks
+    def initialState(self):
+        """
+        Set up the server with an initial set of data
+
+        user01 - migrating user
+        user02 - has a calendar shared with user01
+        user03 - shared to by user01
+
+        puser01 - user on other pod
+        puser02 - has a calendar shared with user01
+        puser03 - shared to by user01
+        """
+
+        # Data for user01
+        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user01", create=True)
+        calendar = yield home.childWithName("calendar")
+        yield calendar.createCalendarObjectWithName("01_1.ics", Component.fromString(self.data01_1))
+        yield calendar.createCalendarObjectWithName("01_2.ics", Component.fromString(self.data01_2))
+        obj3 = yield calendar.createCalendarObjectWithName("01_3.ics", Component.fromString(self.data01_3))
+        attachment, _ignore_location = yield obj3.addAttachment(None, MimeType.fromString("text/plain"), "test.txt", MemoryStream("Here is some text #1."))
+        self.stash["user01_attachment_id"] = attachment.id()
+        self.stash["user01_attachment_md5"] = attachment.md5()
+        self.stash["user01_attachment_mid"] = attachment.managedID()
+        yield self.commitTransaction(0)
+
+        # Data for user02
+        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name="user02", create=True)
+        calendar = yield home.childWithName("calendar")
+        yield calendar.createCalendarObjectWithName("02_1.ics", Component.fromString(self.data02_1))
+        yield calendar.createCalendarObjectWithName("02_2.ics", Component.fromString(self.data02_2))
+        yield calendar.createCalendarObjectWithName("02_3.ics", Component.fromString(self.data02_3))
+        yield self.commitTransaction(0)
+
+        # Data for puser02
+        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="puser02", create=True)
+        calendar = yield home.childWithName("calendar")
+        yield calendar.createCalendarObjectWithName("p02_1.ics", Component.fromString(self.datap02_1))
+        yield calendar.createCalendarObjectWithName("p02_2.ics", Component.fromString(self.datap02_2))
+        yield calendar.createCalendarObjectWithName("p02_3.ics", Component.fromString(self.datap02_3))
+        yield self.commitTransaction(1)
+
+        # Share calendars
+        self.stash["sharename_user01_to_user03"] = yield self._createShare("user01", "user03")
+        self.stash["sharename_user01_to_puser03"] = yield self._createShare("user01", "puser03")
+        self.stash["sharename_user02_to_user01"] = yield self._createShare("user02", "user01")
+        self.stash["sharename_puser02_to_user01"] = yield self._createShare("puser02", "user01")
+
+        # Add some delegates
+        txn = self.theTransactionUnderTest(0)
+        record01 = yield txn.directoryService().recordWithUID(u"user01")
+        record02 = yield txn.directoryService().recordWithUID(u"user02")
+        record03 = yield txn.directoryService().recordWithUID(u"user03")
+        precord01 = yield txn.directoryService().recordWithUID(u"puser01")
+
+        group02 = yield txn.directoryService().recordWithUID(u"group02")
+        group03 = yield txn.directoryService().recordWithUID(u"group03")
+
+        # Add user02 and user03 as individual delegates
+        yield Delegates.addDelegate(txn, record01, record02, True)
+        yield Delegates.addDelegate(txn, record01, record03, False)
+        yield Delegates.addDelegate(txn, record01, precord01, False)
+
+        # Add group delegates
+        yield Delegates.addDelegate(txn, record01, group02, True)
+        yield Delegates.addDelegate(txn, record01, group03, False)
+
+        # Add external delegates
+        yield txn.assignExternalDelegates(u"user01", None, None, u"external1", u"external2")
+
+        yield self.commitTransaction(0)
+
+        yield self.waitAllEmpty()
+
+
+    @inlineCallbacks
+    def secondState(self):
+        """
+        Set up the server with data changes appearing after the first sync
+        """
+        txn = self.theTransactionUnderTest(0)
+        obj = yield self.calendarObjectUnderTest(txn, name="01_1.ics", calendar_name="calendar", home="user01")
+        yield obj.setComponent(self.data01_1_changed)
+
+        obj = yield self.calendarObjectUnderTest(txn, name="02_2.ics", calendar_name="calendar", home="user02")
+        attachment, _ignore_location = yield obj.addAttachment(None, MimeType.fromString("text/plain"), "test_02.txt", MemoryStream("Here is some text #02."))
+        self.stash["user02_attachment_id"] = attachment.id()
+        self.stash["user02_attachment_md5"] = attachment.md5()
+        self.stash["user02_attachment_mid"] = attachment.managedID()
+
+        yield self.commitTransaction(0)
+
+        yield self.waitAllEmpty()
+
+
+    @inlineCallbacks
+    def finalState(self):
+        """
+        Set up the server with data changes appearing before the final sync
+        """
+        txn = self.theTransactionUnderTest(1)
+        obj = yield self.calendarObjectUnderTest(txn, name="p02_2.ics", calendar_name="calendar", home="puser02")
+        attachment, _ignore_location = yield obj.addAttachment(None, MimeType.fromString("text/plain"), "test_p02.txt", MemoryStream("Here is some text #p02."))
+        self.stash["puser02_attachment_id"] = attachment.id()
+        self.stash["puser02_attachment_mid"] = attachment.managedID()
+        self.stash["puser02_attachment_md5"] = attachment.md5()
+
+        yield self.commitTransaction(1)
+
+        yield self.waitAllEmpty()
+
+
+    @inlineCallbacks
+    def switchAccounts(self):
+        """
+        Switch the migrated user accounts to point to the new pod
+        """
+
+        for i in range(self.numberOfStores):
+            txn = self.theTransactionUnderTest(i)
+            record = yield txn.directoryService().recordWithUID(u"user01")
+            yield self.changeRecord(record, txn.directoryService().fieldName.serviceNodeUID, u"B", directory=txn.directoryService())
+            yield self.commitTransaction(i)
+
+        for i in range(self.numberOfStores):
+            txn = self.theTransactionUnderTest(i)
+            record = yield txn.directoryService().recordWithUID(u"user01")
+            self.assertEqual(record.serviceNodeUID, "B")
+            self.assertEqual(record.thisServer(), i == 1)
+            record = yield txn.directoryService().recordWithUID(u"user02")
+            self.assertEqual(record.serviceNodeUID, "A")
+            self.assertEqual(record.thisServer(), i == 0)
+            record = yield txn.directoryService().recordWithUID(u"puser02")
+            self.assertEqual(record.serviceNodeUID, "B")
+            self.assertEqual(record.thisServer(), i == 1)
+            yield self.commitTransaction(i)
+
+
+    @inlineCallbacks
+    def postCheck(self):
+        """
+        Checks after migration is done
+        """
+
+        # Check that the home has been moved
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user01")
+        self.assertTrue(home.external())
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_NORMAL)
+        self.assertTrue(home is None)
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_EXTERNAL)
+        self.assertTrue(home is not None)
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_DISABLED)
+        self.assertTrue(home is not None)
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_MIGRATING)
+        self.assertTrue(home is None)
+        yield self.commitTransaction(0)
+
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01")
+        self.assertTrue(home.normal())
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_NORMAL)
+        self.assertTrue(home is not None)
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_EXTERNAL)
+        self.assertTrue(home is None)
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_DISABLED)
+        self.assertTrue(home is not None)
+        home = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_MIGRATING)
+        self.assertTrue(home is None)
+        yield self.commitTransaction(1)
+
+        # Check that the notifications have been moved
+        notifications = yield self.notificationCollectionUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_NORMAL)
+        self.assertTrue(notifications is None)
+        notifications = yield self.notificationCollectionUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_EXTERNAL)
+        self.assertTrue(notifications is None)
+        notifications = yield self.notificationCollectionUnderTest(self.theTransactionUnderTest(0), name="user01", status=_HOME_STATUS_DISABLED)
+        self.assertTrue(notifications is not None)
+        yield self.commitTransaction(0)
+
+        notifications = yield self.notificationCollectionUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_NORMAL)
+        self.assertTrue(notifications is not None)
+        notifications = yield self.notificationCollectionUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_EXTERNAL)
+        self.assertTrue(notifications is None)
+        notifications = yield self.notificationCollectionUnderTest(self.theTransactionUnderTest(1), name="user01", status=_HOME_STATUS_DISABLED)
+        self.assertTrue(notifications is not None)
+        yield self.commitTransaction(1)
+
+        # New pod data
+        homes = {}
+        homes["user01"] = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user01")
+        homes["user02"] = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user02")
+        self.assertTrue(homes["user02"].external())
+        homes["user03"] = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="user03")
+        self.assertTrue(homes["user03"].external())
+        homes["puser01"] = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="puser01")
+        self.assertTrue(homes["puser01"].normal())
+        homes["puser02"] = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="puser02")
+        self.assertTrue(homes["puser02"].normal())
+        homes["puser03"] = yield self.homeUnderTest(self.theTransactionUnderTest(1), name="puser03")
+        self.assertTrue(homes["puser03"].normal())
+
+        # Check calendar data on new pod
+        calendars = yield homes["user01"].loadChildren()
+        calnames = dict([(calendar.name(), calendar) for calendar in calendars])
+        self.assertEqual(
+            set(calnames.keys()),
+            set(("calendar", "tasks", "inbox", "trash", self.stash["sharename_user02_to_user01"], self.stash["sharename_puser02_to_user01"],))
+        )
+
+        # Check shared-by user01 on new pod
+        shared = calnames["calendar"]
+        invitations = yield shared.sharingInvites()
+        by_sharee = dict([(invitation.shareeUID, invitation) for invitation in invitations])
+        self.assertEqual(len(invitations), 2)
+        self.assertEqual(set(by_sharee.keys()), set(("user03", "puser03",)))
+        self.assertEqual(by_sharee["user03"].shareeHomeID, homes["user03"].id())
+        self.assertEqual(by_sharee["puser03"].shareeHomeID, homes["puser03"].id())
+
+        # Check shared-to user01 on new pod
+        shared = calnames[self.stash["sharename_user02_to_user01"]]
+        self.assertEqual(shared.ownerHome().uid(), "user02")
+        self.assertEqual(shared.ownerHome().id(), homes["user02"].id())
+
+        shared = calnames[self.stash["sharename_puser02_to_user01"]]
+        self.assertEqual(shared.ownerHome().uid(), "puser02")
+        self.assertEqual(shared.ownerHome().id(), homes["puser02"].id())
+
+        shared = yield homes["puser02"].calendarWithName("calendar")
+        invitations = yield shared.sharingInvites()
+        self.assertEqual(len(invitations), 1)
+        self.assertEqual(invitations[0].shareeHomeID, homes["user01"].id())
+
+        yield self.commitTransaction(1)
+
+        # Old pod data
+        homes = {}
+        homes["user01"] = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user01")
+        homes["user02"] = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user02")
+        self.assertTrue(homes["user02"].normal())
+        homes["user03"] = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="user03")
+        self.assertTrue(homes["user03"].normal())
+        homes["puser01"] = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="puser01")
+        self.assertTrue(homes["puser01"] is None)
+        homes["puser02"] = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="puser02")
+        self.assertTrue(homes["puser02"].external())
+        homes["puser03"] = yield self.homeUnderTest(self.theTransactionUnderTest(0), name="puser03")
+        self.assertTrue(homes["puser03"].external())
+
+        # Check shared-by user01 on old pod
+        shared = yield homes["user03"].calendarWithName(self.stash["sharename_user01_to_user03"])
+        self.assertEqual(shared.ownerHome().uid(), "user01")
+        self.assertEqual(shared.ownerHome().id(), homes["user01"].id())
+
+        # Check shared-to user01 on old pod
+        shared = yield homes["user02"].calendarWithName("calendar")
+        invitations = yield shared.sharingInvites()
+        self.assertEqual(len(invitations), 1)
+        self.assertEqual(invitations[0].shareeHomeID, homes["user01"].id())
+
+        yield self.commitTransaction(0)
+
+        # Delegates on each pod
+        for pod in range(self.numberOfStores):
+            txn = self.theTransactionUnderTest(pod)
+            records = {}
+            for ctr in range(10):
+                uid = u"user{:02d}".format(ctr + 1)
+                records[uid] = yield txn.directoryService().recordWithUID(uid)
+            for ctr in range(10):
+                uid = u"puser{:02d}".format(ctr + 1)
+                records[uid] = yield txn.directoryService().recordWithUID(uid)
+            for ctr in range(10):
+                uid = u"group{:02d}".format(ctr + 1)
+                records[uid] = yield txn.directoryService().recordWithUID(uid)
+
+            delegates = yield Delegates.delegatesOf(txn, records["user01"], True, False)
+            self.assertTrue(records["user02"] in delegates)
+            self.assertTrue(records["group02"] in delegates)
+            delegates = yield Delegates.delegatesOf(txn, records["user01"], True, True)
+            self.assertTrue(records["user02"] in delegates)
+            self.assertTrue(records["user06"] in delegates)
+            self.assertTrue(records["user07"] in delegates)
+            self.assertTrue(records["user08"] in delegates)
+
+            delegates = yield Delegates.delegatesOf(txn, records["user01"], False, False)
+            self.assertTrue(records["user03"] in delegates)
+            self.assertTrue(records["group03"] in delegates)
+            self.assertTrue(records["puser01"] in delegates)
+            delegates = yield Delegates.delegatesOf(txn, records["user01"], False, True)
+            self.assertTrue(records["user03"] in delegates)
+            self.assertTrue(records["user07"] in delegates)
+            self.assertTrue(records["user08"] in delegates)
+            self.assertTrue(records["user09"] in delegates)
+            self.assertTrue(records["puser01"] in delegates)
+
+        # Attachments
+        obj = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(1), name="01_3.ics", calendar_name="calendar", home="user01")
+        attachment = yield obj.attachmentWithManagedID(self.stash["user01_attachment_mid"])
+        self.assertTrue(attachment is not None)
+        self.assertEqual(attachment.md5(), self.stash["user01_attachment_md5"])
+        data = yield self.attachmentToString(attachment)
+        self.assertEqual(data, "Here is some text #1.")
+
+
+    @inlineCallbacks
+    def test_migration(self):
+        """
+        Full migration cycle.
+        """
+
+        yield self.preCheck()
+
+        # Step 1. Live full sync
+        yield self.initialState()
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.sync()
+
+        # Step 2. Live incremental sync
+        yield self.secondState()
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.sync()
+
+        # Step 3. Disable home after final changes
+        yield self.finalState()
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01")
+        yield syncer.disableRemoteHome()
+
+        # Step 4. Final incremental sync
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01", final=True)
+        yield syncer.sync()
+
+        # Step 5. Final reconcile sync
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01", final=True)
+        yield syncer.finalSync()
+
+        # Step 6. Enable new home
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01", final=True)
+        yield syncer.enableLocalHome()
+
+        # Step 7. Remove old home
+        syncer = CrossPodHomeSync(self.theStoreUnderTest(1), "user01", final=True)
+        yield syncer.removeRemoteHome()
+
+        yield self.switchAccounts()
+
+        yield self.postCheck()

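For reference, the complete cycle exercised by test_migration above reduces to the following sequence of CrossPodHomeSync calls. This is a condensed sketch only; the helper name migrateUser is illustrative rather than part of the API, and a fresh syncer instance is created for each step exactly as the test does.

    from twisted.internet.defer import inlineCallbacks
    from txdav.common.datastore.podding.migration.home_sync import CrossPodHomeSync

    @inlineCallbacks
    def migrateUser(newPodStore, uid):
        # Steps 1-2: live syncs while the old home is still in service.
        yield CrossPodHomeSync(newPodStore, uid).sync()
        yield CrossPodHomeSync(newPodStore, uid).sync()

        # Step 3: disable the home on the old pod so no further changes land there.
        yield CrossPodHomeSync(newPodStore, uid, final=True).disableRemoteHome()

        # Steps 4-5: final incremental sync, then reconcile sharing and delegates.
        yield CrossPodHomeSync(newPodStore, uid, final=True).sync()
        yield CrossPodHomeSync(newPodStore, uid, final=True).finalSync()

        # Steps 6-7: enable the migrated home on this pod, then remove the old one.
        yield CrossPodHomeSync(newPodStore, uid, final=True).enableLocalHome()
        yield CrossPodHomeSync(newPodStore, uid, final=True).removeRemoteHome()
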
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/request.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/request.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/request.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -23,7 +23,7 @@
 from txweb2.client.http import HTTPClientProtocol, ClientRequest
 from txweb2.dav.util import allDataFromStream
 from txweb2.http_headers import Headers, MimeType
-from txweb2.stream import MemoryStream
+from txweb2.stream import MemoryStream, readStream
 
 from twisted.internet.defer import inlineCallbacks, returnValue
 from twisted.internet.protocol import Factory
@@ -50,11 +50,12 @@
     case the JSON data is sent in an HTTP header.
     """
 
-    def __init__(self, server, data, stream=None, stream_type=None):
+    def __init__(self, server, data, stream=None, stream_type=None, writeStream=None):
         self.server = server
         self.data = json.dumps(data)
         self.stream = stream
         self.streamType = stream_type
+        self.writeStream = writeStream
 
 
     @inlineCallbacks
@@ -72,7 +73,28 @@
                 self.loggedResponse = yield self.logResponse(response)
                 emitAccounting("xPod", "", self.loggedRequest + "\n" + self.loggedResponse, "POST")
 
-            if response.code in (responsecode.OK, responsecode.BAD_REQUEST,):
+            if response.code == responsecode.OK:
+                if self.writeStream is None:
+                    data = (yield allDataFromStream(response.stream))
+                    data = json.loads(data)
+                else:
+                    yield readStream(response.stream, self.writeStream.write)
+                    content_type = response.headers.getHeader("content-type")
+                    if content_type is None:
+                        content_type = MimeType("application", "octet-stream")
+                    content_disposition = response.headers.getHeader("content-disposition")
+                    if content_disposition is None or "filename" not in content_disposition.params:
+                        filename = ""
+                    else:
+                        filename = content_disposition.params["filename"]
+                    self.writeStream.resetDetails(content_type, filename)
+                    yield self.writeStream.loseConnection()
+                    data = {
+                        "result": "ok",
+                        "content-type": content_type,
+                        "name": filename,
+                    }
+            elif response.code == responsecode.BAD_REQUEST:
                 data = (yield allDataFromStream(response.stream))
                 data = json.loads(data)
             else:

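The change to request.py above adds an optional writeStream: when it is supplied and the remote pod answers with an OK response, the body is streamed into it rather than decoded as JSON, and a small summary dict is returned in its place. The standalone sketch below condenses that new branch; streamResponseBody is an illustrative helper name, and the writeStream object is assumed to provide the write, resetDetails and loseConnection methods that the modified code relies on.

    from twisted.internet.defer import inlineCallbacks, returnValue
    from txweb2.http_headers import MimeType
    from txweb2.stream import readStream

    @inlineCallbacks
    def streamResponseBody(response, writeStream):
        # Copy the response body straight into the supplied stream.
        yield readStream(response.stream, writeStream.write)

        # Recover content type and filename from the response headers, falling
        # back to application/octet-stream and an empty name when absent.
        content_type = response.headers.getHeader("content-type")
        if content_type is None:
            content_type = MimeType("application", "octet-stream")
        disposition = response.headers.getHeader("content-disposition")
        if disposition is None or "filename" not in disposition.params:
            filename = ""
        else:
            filename = disposition.params["filename"]

        writeStream.resetDetails(content_type, filename)
        yield writeStream.loseConnection()
        returnValue({
            "result": "ok",
            "content-type": content_type,
            "name": filename,
        })
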
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/resource.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/resource.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/resource.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -18,9 +18,11 @@
 from txweb2.dav.noneprops import NonePropertyStore
 from txweb2.dav.util import allDataFromStream
 from txweb2.http import Response, HTTPError, StatusResponse, JSONResponse
-from txweb2.http_headers import MimeType
+from txweb2.http_headers import MimeType, MimeDisposition
+from txweb2.stream import ProducerStream
 
 from twisted.internet.defer import succeed, returnValue, inlineCallbacks
+from twisted.internet.protocol import Protocol
 
 from twistedcaldav.extensions import DAVResource, \
     DAVResourceWithoutChildrenMixin
@@ -154,19 +156,54 @@
             request.extendedLogItems = {}
         request.extendedLogItems["xpod"] = j["action"] if "action" in j else "unknown"
 
-        # Get the conduit to process the data
-        try:
-            result = yield self.store.conduit.processRequest(j)
-            code = responsecode.OK if result["result"] == "ok" else responsecode.BAD_REQUEST
-        except Exception as e:
-            # Send the exception over to the other side
-            result = {
-                "result": "exception",
-                "class": ".".join((e.__class__.__module__, e.__class__.__name__,)),
-                "request": str(e),
-            }
-            code = responsecode.BAD_REQUEST
+        # Look for a streaming action which needs special handling
+        if self.store.conduit.isStreamAction(j):
+            # Get the conduit to process the data stream
+            try:
 
+                stream = ProducerStream()
+                class StreamProtocol(Protocol):
+                    def connectionMade(self):
+                        stream.registerProducer(self.transport, False)
+                    def dataReceived(self, data):
+                        stream.write(data)
+                    def connectionLost(self, reason):
+                        stream.finish()
+
+                result = yield self.store.conduit.processRequestStream(j, StreamProtocol())
+
+                try:
+                    ct, name = result
+                except ValueError:
+                    code = responsecode.BAD_REQUEST
+                else:
+                    headers = {"content-type": MimeType.fromString(ct)}
+                    headers["content-disposition"] = MimeDisposition("attachment", params={"filename": name})
+                    returnValue(Response(responsecode.OK, headers, stream))
+
+            except Exception as e:
+                # Send the exception over to the other side
+                result = {
+                    "result": "exception",
+                    "class": ".".join((e.__class__.__module__, e.__class__.__name__,)),
+                    "details": str(e),
+                }
+                code = responsecode.BAD_REQUEST
+
+        else:
+            # Get the conduit to process the data
+            try:
+                result = yield self.store.conduit.processRequest(j)
+                code = responsecode.OK if result["result"] == "ok" else responsecode.BAD_REQUEST
+            except Exception as e:
+                # Send the exception over to the other side
+                result = {
+                    "result": "exception",
+                    "class": ".".join((e.__class__.__module__, e.__class__.__name__,)),
+                    "details": str(e),
+                }
+                code = responsecode.BAD_REQUEST
+
         response = JSONResponse(code, result)
         returnValue(response)
 

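The resource change above serves streaming conduit actions by feeding whatever the conduit writes to a Protocol into a ProducerStream, which then becomes the HTTP response body. A minimal standalone sketch of that bridge follows; streamingResponse and processStream are illustrative names, while the Protocol/ProducerStream wiring and the Response construction follow the code above.

    from twisted.internet.protocol import Protocol
    from txweb2 import responsecode
    from txweb2.http import Response
    from txweb2.http_headers import MimeType, MimeDisposition
    from txweb2.stream import ProducerStream

    def streamingResponse(processStream, contentType, filename):
        # Data written to the protocol is pushed into the stream; losing the
        # connection finishes the stream and so ends the HTTP body.
        stream = ProducerStream()

        class StreamProtocol(Protocol):
            def connectionMade(self):
                stream.registerProducer(self.transport, False)

            def dataReceived(self, data):
                stream.write(data)

            def connectionLost(self, reason):
                stream.finish()

        # The caller (e.g. the conduit) writes the payload via the protocol.
        processStream(StreamProtocol())

        headers = {
            "content-type": MimeType.fromString(contentType),
            "content-disposition": MimeDisposition("attachment", params={"filename": filename}),
        }
        return Response(responsecode.OK, headers, stream)
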
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/sharing_invites.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/sharing_invites.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/sharing_invites.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -26,43 +26,55 @@
     """
 
     @inlineCallbacks
-    def send_shareinvite(self, txn, homeType, ownerUID, ownerID, ownerName, shareeUID, shareUID, bindMode, summary, copy_properties, supported_components):
+    def send_shareinvite(
+        self, txn, homeType, ownerUID, ownerName, shareeUID, shareUID,
+        bindMode, bindUID, summary, copy_properties, supported_components
+    ):
         """
         Send a sharing invite cross-pod message.
 
         @param homeType: Type of home being shared.
         @type homeType: C{int}
+
         @param ownerUID: UID of the sharer.
         @type ownerUID: C{str}
-        @param ownerID: resource ID of the sharer calendar
-        @type ownerID: C{int}
+
         @param ownerName: owner's name of the sharer calendar
         @type ownerName: C{str}
+
         @param shareeUID: UID of the sharee
         @type shareeUID: C{str}
+
         @param shareUID: Resource/invite ID for sharee
         @type shareUID: C{str}
+
         @param bindMode: bind mode for the share
         @type bindMode: C{str}
+        @param bindUID: bind UID of the sharer calendar
+        @type bindUID: C{str}
         @param summary: sharing message
         @type summary: C{str}
+
         @param copy_properties: C{str} name/value for properties to be copied
         @type copy_properties: C{dict}
+
         @param supported_components: supported components, may be C{None}
         @type supported_components: C{str}
         """
 
-        _ignore_sender, recipient = yield self.validRequest(ownerUID, shareeUID)
+        _ignore_sender, recipient = yield self.validRequest(
+            ownerUID, shareeUID
+        )
 
         request = {
             "action": "shareinvite",
             "type": homeType,
             "owner": ownerUID,
-            "owner_id": ownerID,
             "owner_name": ownerName,
             "sharee": shareeUID,
             "share_id": shareUID,
             "mode": bindMode,
+            "bind_uid": bindUID,
             "summary": summary,
             "properties": copy_properties,
         }
@@ -75,24 +87,27 @@
     @inlineCallbacks
     def recv_shareinvite(self, txn, request):
         """
-        Process a sharing invite cross-pod request. Request arguments as per L{send_shareinvite}.
+        Process a sharing invite cross-pod request.
+        Request arguments as per L{send_shareinvite}.
 
         @param request: request arguments
         @type request: C{dict}
         """
 
         # Sharee home on this pod must exist (create if needed)
-        shareeHome = yield txn.homeWithUID(request["type"], request["sharee"], create=True)
+        shareeHome = yield txn.homeWithUID(
+            request["type"], request["sharee"], create=True
+        )
         if shareeHome is None or shareeHome.external():
             raise FailedCrossPodRequestError("Invalid sharee UID specified")
 
         # Create a share
         yield shareeHome.processExternalInvite(
             request["owner"],
-            request["owner_id"],
             request["owner_name"],
             request["share_id"],
             request["mode"],
+            request["bind_uid"],
             request["summary"],
             request["properties"],
             supported_components=request.get("supported-components")
@@ -100,29 +115,37 @@
 
 
     @inlineCallbacks
-    def send_shareuninvite(self, txn, homeType, ownerUID, ownerID, shareeUID, shareUID):
+    def send_shareuninvite(
+        self, txn, homeType, ownerUID,
+        bindUID, shareeUID, shareUID
+    ):
         """
         Send a sharing uninvite cross-pod message.
 
         @param homeType: Type of home being shared.
         @type homeType: C{int}
+
         @param ownerUID: UID of the sharer.
         @type ownerUID: C{str}
-        @param ownerID: resource ID of the sharer calendar
-        @type ownerID: C{int}
+        @param bindUID: bind UID of the sharer calendar
+        @type bindUID: C{str}
+
         @param shareeUID: UID of the sharee
         @type shareeUID: C{str}
+
         @param shareUID: Resource/invite ID for sharee
         @type shareUID: C{str}
         """
 
-        _ignore_sender, recipient = yield self.validRequest(ownerUID, shareeUID)
+        _ignore_sender, recipient = yield self.validRequest(
+            ownerUID, shareeUID
+        )
 
         request = {
             "action": "shareuninvite",
             "type": homeType,
             "owner": ownerUID,
-            "owner_id": ownerID,
+            "bind_uid": bindUID,
             "sharee": shareeUID,
             "share_id": shareUID,
         }
@@ -133,7 +156,8 @@
     @inlineCallbacks
     def recv_shareuninvite(self, txn, request):
         """
-        Process a sharing uninvite cross-pod request. Request arguments as per L{send_shareuninvite}.
+        Process a sharing uninvite cross-pod request.
+        Request arguments as per L{send_shareuninvite}.
 
         @param request: request arguments
         @type request: C{dict}
@@ -147,13 +171,16 @@
         # Remove a share
         yield shareeHome.processExternalUninvite(
             request["owner"],
-            request["owner_id"],
+            request["bind_uid"],
             request["share_id"],
         )
 
 
     @inlineCallbacks
-    def send_sharereply(self, txn, homeType, ownerUID, shareeUID, shareUID, bindStatus, summary=None):
+    def send_sharereply(
+        self, txn, homeType, ownerUID,
+        shareeUID, shareUID, bindStatus, summary=None
+    ):
         """
         Send a sharing reply cross-pod message.
 
@@ -171,7 +198,9 @@
         @type summary: C{str}
         """
 
-        _ignore_sender, recipient = yield self.validRequest(shareeUID, ownerUID)
+        _ignore_sender, recipient = yield self.validRequest(
+            shareeUID, ownerUID
+        )
 
         request = {
             "action": "sharereply",
@@ -190,7 +219,8 @@
     @inlineCallbacks
     def recv_sharereply(self, txn, request):
         """
-        Process a sharing reply cross-pod request. Request arguments as per L{send_sharereply}.
+        Process a sharing reply cross-pod request.
+        Request arguments as per L{send_sharereply}.
 
         @param request: request arguments
         @type request: C{dict}

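With the sharing_invites changes above, the cross-pod sharing messages identify the shared collection by its bind UID rather than by the old integer resource ID. For illustration, the revised shareinvite payload now has the following shape; every value below is an example, only the key names come from the code above.

    from txdav.common.datastore.sql_tables import _BIND_MODE_READ

    shareinvite_request = {
        "action": "shareinvite",
        "type": 0,                       # home type constant (illustrative value)
        "owner": "user01",               # sharer UID
        "owner_name": "calendar",        # sharer's name for the collection
        "sharee": "puser01",             # sharee UID, possibly on another pod
        "share_id": "invite-uid-0001",   # resource/invite ID for the sharee
        "mode": _BIND_MODE_READ,         # bind mode for the share
        "bind_uid": "bind-uid-0001",     # replaces the old integer "owner_id"
        "summary": "shared calendar",
        "properties": {},                # properties to copy to the sharee
    }
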
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/store_api.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/store_api.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/store_api.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -16,8 +16,9 @@
 
 from twisted.internet.defer import inlineCallbacks, returnValue
 
-from txdav.common.datastore.podding.base import FailedCrossPodRequestError
 from txdav.caldav.datastore.scheduling.freebusy import generateFreeBusyInfo
+from txdav.common.datastore.podding.util import UtilityConduitMixin
+from txdav.common.datastore.sql_tables import _HOME_STATUS_DISABLED
 
 from twistedcaldav.caldavxml import TimeRange
 
@@ -27,126 +28,21 @@
     Defines common cross-pod API for generic access to remote resources.
     """
 
-    #
-    # Utility methods to map from store objects to/from JSON
-    #
-
     @inlineCallbacks
-    def _getRequestForStoreObject(self, action, storeObject, classMethod):
+    def send_home_resource_id(self, txn, recipient, migrating=False):
         """
-        Create the JSON data needed to identify the remote resource by type and ids, along with any parent resources.
-
-        @param action: the conduit action name
-        @type action: L{str}
-        @param storeObject: the store object that is being operated on
-        @type storeObject: L{object}
-        @param classMethod: indicates whether the method being called is a classmethod
-        @type classMethod: L{bool}
-
-        @return: the transaction in use, the JSON dict to send in the request,
-            the server where the request should be sent
-        @rtype: L{tuple} of (L{CommonStoreTransaction}, L{dict}, L{str})
-        """
-
-        from txdav.common.datastore.sql import CommonObjectResource, CommonHomeChild, CommonHome
-        result = {
-            "action": action,
-        }
-
-        # Extract the relevant store objects
-        txn = storeObject._txn
-        owner_home = None
-        viewer_home = None
-        home_child = None
-        object_resource = None
-        if isinstance(storeObject, CommonObjectResource):
-            owner_home = storeObject.ownerHome()
-            viewer_home = storeObject.viewerHome()
-            home_child = storeObject.parentCollection()
-            object_resource = storeObject
-        elif isinstance(storeObject, CommonHomeChild):
-            owner_home = storeObject.ownerHome()
-            viewer_home = storeObject.viewerHome()
-            home_child = storeObject
-            result["classMethod"] = classMethod
-        elif isinstance(storeObject, CommonHome):
-            owner_home = storeObject
-            viewer_home = storeObject
-            txn = storeObject._txn
-            result["classMethod"] = classMethod
-
-        # Add store object identities to JSON request
-        result["homeType"] = viewer_home._homeType
-        result["homeUID"] = viewer_home.uid()
-        if home_child:
-            if home_child.owned():
-                result["homeChildID"] = home_child.id()
-            else:
-                result["homeChildSharedID"] = home_child.name()
-        if object_resource:
-            result["objectResourceID"] = object_resource.id()
-
-        # Note that the owner_home is always the ownerHome() because in the sharing case
-        # a viewer is accessing the owner's data on another pod.
-        recipient = yield self.store.directoryService().recordWithUID(owner_home.uid())
-
-        returnValue((txn, result, recipient.server(),))
-
-
-    @inlineCallbacks
-    def _getStoreObjectForRequest(self, txn, request):
-        """
-        Resolve the supplied JSON data to get a store object to operate on.
-        """
-
-        returnObject = txn
-        classObject = None
-
-        if "homeUID" in request:
-            home = yield txn.homeWithUID(request["homeType"], request["homeUID"])
-            if home is None:
-                raise FailedCrossPodRequestError("Invalid owner UID specified")
-            home._internalRequest = False
-            returnObject = home
-            if request.get("classMethod", False):
-                classObject = home._childClass
-
-        if "homeChildID" in request:
-            homeChild = yield home.childWithID(request["homeChildID"])
-            if homeChild is None:
-                raise FailedCrossPodRequestError("Invalid home child specified")
-            returnObject = homeChild
-            if request.get("classMethod", False):
-                classObject = homeChild._objectResourceClass
-        elif "homeChildSharedID" in request:
-            homeChild = yield home.childWithName(request["homeChildSharedID"])
-            if homeChild is None:
-                raise FailedCrossPodRequestError("Invalid home child specified")
-            returnObject = homeChild
-            if request.get("classMethod", False):
-                classObject = homeChild._objectResourceClass
-
-        if "objectResourceID" in request:
-            objectResource = yield homeChild.objectResourceWithID(request["objectResourceID"])
-            if objectResource is None:
-                raise FailedCrossPodRequestError("Invalid object resource specified")
-            returnObject = objectResource
-
-        returnValue((returnObject, classObject,))
-
-
-    @inlineCallbacks
-    def send_home_resource_id(self, txn, recipient):
-        """
         Look up the remote resourceID matching the specified directory uid.
 
         @param ownerUID: directory record for user whose home is needed
         @type ownerUID: L{DirectoryRecord}
+        @param migrating: if L{True} then also return a disabled home
+        @type migrating: L{bool}
         """
 
         request = {
             "action": "home-resource_id",
             "ownerUID": recipient.uid,
+            "migrating": migrating,
         }
 
         response = yield self.sendRequest(txn, recipient, request)
@@ -163,6 +59,8 @@
         """
 
         home = yield txn.calendarHomeWithUID(request["ownerUID"])
+        if home is None and request["migrating"]:
+            home = yield txn.calendarHomeWithUID(request["ownerUID"], status=_HOME_STATUS_DISABLED)
         returnValue(home.id() if home is not None else None)
 
 
@@ -236,133 +134,63 @@
         })
 
 
-    #
-    # We can simplify code generation for simple calls by dynamically generating the appropriate class methods.
-    #
-
-    @inlineCallbacks
-    def _simple_object_send(self, actionName, storeObject, classMethod=False, transform=None, args=None, kwargs=None):
+    @staticmethod
+    def _to_serialize_pair_list(value):
         """
-        A simple send operation that returns a value.
-
-        @param actionName: name of the action.
-        @type actionName: C{str}
-        @param shareeView: sharee resource being operated on.
-        @type shareeView: L{CommonHomeChildExternal}
-        @param objectResource: the resource being operated on, or C{None} for classmethod.
-        @type objectResource: L{CommonObjectResourceExternal}
-        @param transform: a function used to convert the JSON response into return values.
-        @type transform: C{callable}
-        @param args: list of optional arguments.
-        @type args: C{list}
-        @param kwargs: optional keyword arguments.
-        @type kwargs: C{dict}
+        Convert the value to the external (JSON-based) representation.
         """
+        return [[a.serialize(), b.serialize(), ] for a, b in value]
 
-        txn, request, server = yield self._getRequestForStoreObject(actionName, storeObject, classMethod)
-        if args is not None:
-            request["arguments"] = args
-        if kwargs is not None:
-            request["keywords"] = kwargs
-        response = yield self.sendRequestToServer(txn, server, request)
-        returnValue(transform(response) if transform is not None else response)
 
-
-    @inlineCallbacks
-    def _simple_object_recv(self, txn, actionName, request, method, transform=None):
-        """
-        A simple recv operation that returns a value. We also look for an optional set of arguments/keywords
-        and include those only if present.
-
-        @param actionName: name of the action.
-        @type actionName: C{str}
-        @param request: request arguments
-        @type request: C{dict}
-        @param method: name of the method to execute on the shared resource to get the result.
-        @type method: C{str}
-        @param transform: method to call on returned JSON value to convert it to something useful.
-        @type transform: C{callable}
-        """
-
-        storeObject, classObject = yield self._getStoreObjectForRequest(txn, request)
-        if classObject is not None:
-            value = yield getattr(classObject, method)(storeObject, *request.get("arguments", ()), **request.get("keywords", {}))
-        else:
-            value = yield getattr(storeObject, method)(*request.get("arguments", ()), **request.get("keywords", {}))
-
-        returnValue(transform(value) if transform is not None else value)
-
-
-    #
-    # Factory methods for binding actions to the conduit class
-    #
-    @classmethod
-    def _make_simple_action(cls, action, method, classMethod=False, transform_recv_result=None, transform_send_result=None):
-        setattr(
-            cls,
-            "send_{}".format(action),
-            lambda self, storeObject, *args, **kwargs:
-                self._simple_object_send(action, storeObject, classMethod=classMethod, transform=transform_send_result, args=args, kwargs=kwargs)
-        )
-        setattr(
-            cls,
-            "recv_{}".format(action),
-            lambda self, txn, message:
-                self._simple_object_recv(txn, action, message, method, transform=transform_recv_result)
-        )
-
-
-    #
-    # Transforms for returned data
-    #
     @staticmethod
-    def _to_externalize(value):
+    def _to_serialize_dict_value(value):
         """
         Convert the value to the external (JSON-based) representation.
         """
-        return value.externalize() if value is not None else None
+        return dict([(k, v.serialize(),) for k, v in value.items()])
 
 
     @staticmethod
-    def _to_externalize_list(value):
+    def _to_serialize_dict_list_serialized_value(value):
         """
         Convert the value to the external (JSON-based) representation.
         """
-        return [v.externalize() for v in value]
+        return dict([(k, UtilityConduitMixin._to_serialize_list(v),) for k, v in value.items()])
 
-
-    @staticmethod
-    def _to_string(value):
-        return str(value)
-
-
-    @staticmethod
-    def _to_tuple(value):
-        return tuple(value)
-
 # These are the actions on store objects we need to expose via the conduit api
 
 # Calls on L{CommonHome} objects
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "home_metadata", "serialize")
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "home_set_status", "setStatus")
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "home_get_all_group_attendees", "getAllGroupAttendees", transform_recv_result=StoreAPIConduitMixin._to_serialize_pair_list)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "home_shared_to_records", "sharedToBindRecords", transform_recv_result=StoreAPIConduitMixin._to_serialize_dict_list_serialized_value)
 
 # Calls on L{CommonHomeChild} objects
-StoreAPIConduitMixin._make_simple_action("homechild_listobjects", "listObjects", classMethod=True)
-StoreAPIConduitMixin._make_simple_action("homechild_loadallobjects", "loadAllObjects", classMethod=True, transform_recv_result=StoreAPIConduitMixin._to_externalize_list)
-StoreAPIConduitMixin._make_simple_action("homechild_objectwith", "objectWith", classMethod=True, transform_recv_result=StoreAPIConduitMixin._to_externalize)
-StoreAPIConduitMixin._make_simple_action("homechild_movehere", "moveObjectResourceHere")
-StoreAPIConduitMixin._make_simple_action("homechild_moveaway", "moveObjectResourceAway")
-StoreAPIConduitMixin._make_simple_action("homechild_synctoken", "syncToken")
-StoreAPIConduitMixin._make_simple_action("homechild_resourcenamessincerevision", "resourceNamesSinceRevision", transform_send_result=StoreAPIConduitMixin._to_tuple)
-StoreAPIConduitMixin._make_simple_action("homechild_search", "search")
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "homechild_listobjects", "listObjects", classMethod=True)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "homechild_loadallobjects", "loadAllObjects", classMethod=True, transform_recv_result=UtilityConduitMixin._to_serialize_list)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "homechild_objectwith", "objectWith", classMethod=True, transform_recv_result=UtilityConduitMixin._to_serialize)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "homechild_movehere", "moveObjectResourceHere")
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "homechild_moveaway", "moveObjectResourceAway")
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "homechild_synctokenrevision", "syncTokenRevision")
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "homechild_resourcenamessincerevision", "resourceNamesSinceRevision", transform_send_result=UtilityConduitMixin._to_tuple)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "homechild_search", "search")
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "homechild_sharing_records", "sharingBindRecords", transform_recv_result=StoreAPIConduitMixin._to_serialize_dict_value)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "homechild_migrate_sharing_records", "migrateBindRecords")
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "homechild_group_sharees", "groupSharees", transform_recv_result=StoreAPIConduitMixin._to_serialize_dict_list_serialized_value)
 
 # Calls on L{CommonObjectResource} objects
-StoreAPIConduitMixin._make_simple_action("objectresource_loadallobjects", "loadAllObjects", classMethod=True, transform_recv_result=StoreAPIConduitMixin._to_externalize_list)
-StoreAPIConduitMixin._make_simple_action("objectresource_loadallobjectswithnames", "loadAllObjectsWithNames", classMethod=True, transform_recv_result=StoreAPIConduitMixin._to_externalize_list)
-StoreAPIConduitMixin._make_simple_action("objectresource_listobjects", "listObjects", classMethod=True)
-StoreAPIConduitMixin._make_simple_action("objectresource_countobjects", "countObjects", classMethod=True)
-StoreAPIConduitMixin._make_simple_action("objectresource_objectwith", "objectWith", classMethod=True, transform_recv_result=StoreAPIConduitMixin._to_externalize)
-StoreAPIConduitMixin._make_simple_action("objectresource_resourcenameforuid", "resourceNameForUID", classMethod=True)
-StoreAPIConduitMixin._make_simple_action("objectresource_resourceuidforname", "resourceUIDForName", classMethod=True)
-StoreAPIConduitMixin._make_simple_action("objectresource_create", "create", classMethod=True, transform_recv_result=StoreAPIConduitMixin._to_externalize)
-StoreAPIConduitMixin._make_simple_action("objectresource_setcomponent", "setComponent")
-StoreAPIConduitMixin._make_simple_action("objectresource_component", "component", transform_recv_result=StoreAPIConduitMixin._to_string)
-StoreAPIConduitMixin._make_simple_action("objectresource_remove", "remove")
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "objectresource_loadallobjects", "loadAllObjects", classMethod=True, transform_recv_result=UtilityConduitMixin._to_serialize_list)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "objectresource_loadallobjectswithnames", "loadAllObjectsWithNames", classMethod=True, transform_recv_result=UtilityConduitMixin._to_serialize_list)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "objectresource_listobjects", "listObjects", classMethod=True)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "objectresource_countobjects", "countObjects", classMethod=True)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "objectresource_objectwith", "objectWith", classMethod=True, transform_recv_result=UtilityConduitMixin._to_serialize)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "objectresource_resourcenameforuid", "resourceNameForUID", classMethod=True)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "objectresource_resourceuidforname", "resourceUIDForName", classMethod=True)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "objectresource_create", "create", classMethod=True, transform_recv_result=UtilityConduitMixin._to_serialize)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "objectresource_setcomponent", "setComponent")
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "objectresource_component", "component", transform_recv_result=UtilityConduitMixin._to_string)
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "objectresource_remove", "remove")
+
+# Calls on L{NotificationCollection} objects
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "notification_set_status", "setStatus")
+UtilityConduitMixin._make_simple_action(StoreAPIConduitMixin, "notification_all_records", "notificationObjectRecords", transform_recv_result=UtilityConduitMixin._to_serialize_list)

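The conduit action bindings above now go through UtilityConduitMixin._make_simple_action, which takes the target class as its first argument. That relocated factory is not part of this changeset; the sketch below condenses the in-class version removed above and assumes the moved implementation is equivalent apart from the explicit class argument.

    def make_simple_action(cls, action, method, classMethod=False,
                           transform_recv_result=None, transform_send_result=None):
        # Bind a send_<action> on cls that forwards the call to the remote pod...
        setattr(
            cls,
            "send_{}".format(action),
            lambda self, storeObject, *args, **kwargs: self._simple_object_send(
                action, storeObject, classMethod=classMethod,
                transform=transform_send_result, args=args, kwargs=kwargs
            )
        )
        # ...and a recv_<action> that runs the named method on the local object.
        setattr(
            cls,
            "recv_{}".format(action),
            lambda self, txn, message: self._simple_object_recv(
                txn, action, message, method, transform=transform_recv_result
            )
        )

    # Used in the same way as the bindings above, for example:
    # make_simple_action(StoreAPIConduitMixin, "objectresource_remove", "remove")
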
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/test/test_conduit.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/test/test_conduit.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/test/test_conduit.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -32,7 +32,7 @@
 from txdav.caldav.datastore.query.filter import Filter
 from txdav.caldav.datastore.scheduling.freebusy import generateFreeBusyInfo
 from txdav.caldav.datastore.scheduling.ischedule.localservers import ServersDB, Server
-from txdav.caldav.datastore.sql import ManagedAttachment
+from txdav.caldav.datastore.sql import ManagedAttachment, AttachmentLink
 from txdav.caldav.datastore.test.common import CaptureProtocol
 from txdav.common.datastore.podding.conduit import PoddingConduit, \
     FailedCrossPodRequestError
@@ -362,11 +362,11 @@
         yield self.createShare("user01", "puser01")
 
         calendar1 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(0), home="user01", name="calendar")
-        token1_1 = yield calendar1.syncToken()
+        token1_1 = yield calendar1.syncTokenRevision()
         yield self.commitTransaction(0)
 
         shared = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(1), home="puser01", name="shared-calendar")
-        token2_1 = yield shared.syncToken()
+        token2_1 = yield shared.syncTokenRevision()
         yield self.commitTransaction(1)
 
         self.assertEqual(token1_1, token2_1)
@@ -376,11 +376,11 @@
         yield self.commitTransaction(0)
 
         calendar1 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(0), home="user01", name="calendar")
-        token1_2 = yield calendar1.syncToken()
+        token1_2 = yield calendar1.syncTokenRevision()
         yield self.commitTransaction(0)
 
         shared = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(1), home="puser01", name="shared-calendar")
-        token2_2 = yield shared.syncToken()
+        token2_2 = yield shared.syncTokenRevision()
         yield self.commitTransaction(1)
 
         self.assertNotEqual(token1_1, token1_2)
@@ -394,11 +394,11 @@
         yield self.commitTransaction(0)
 
         calendar1 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(0), home="user01", name="calendar")
-        token1_3 = yield calendar1.syncToken()
+        token1_3 = yield calendar1.syncTokenRevision()
         yield self.commitTransaction(0)
 
         shared = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(1), home="puser01", name="shared-calendar")
-        token2_3 = yield shared.syncToken()
+        token2_3 = yield shared.syncTokenRevision()
         yield self.commitTransaction(1)
 
         self.assertNotEqual(token1_1, token1_3)
@@ -1056,3 +1056,83 @@
         attachment = yield ManagedAttachment.load(self.theTransactionUnderTest(0), resourceID, managedID)
         self.assertTrue(attachment is None)
         yield self.commitTransaction(0)
+
+
+    @inlineCallbacks
+    def test_get_all_attachments(self):
+        """
+        Test that action=get-all-attachments works.
+        """
+
+        yield self.createShare("user01", "puser01")
+
+        calendar1 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(0), home="user01", name="calendar")
+        yield calendar1.createCalendarObjectWithName("1.ics", Component.fromString(self.caldata1))
+        yield self.commitTransaction(0)
+
+        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics")
+        yield object1.addAttachment(None, MimeType.fromString("text/plain"), "test.txt", MemoryStream("Here is some text."))
+        yield self.commitTransaction(0)
+
+        shared_object = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(1), home="puser01", calendar_name="shared-calendar", name="1.ics")
+        attachments = yield shared_object.ownerHome().getAllAttachments()
+        self.assertEqual(len(attachments), 1)
+        self.assertTrue(isinstance(attachments[0], ManagedAttachment))
+        self.assertEqual(attachments[0].contentType(), MimeType.fromString("text/plain"))
+        self.assertEqual(attachments[0].name(), "test.txt")
+        yield self.commitTransaction(1)
+
+
+    @inlineCallbacks
+    def test_get_attachment_data(self):
+        """
+        Test that action=get-attachment-data works.
+        """
+
+        yield self.createShare("user01", "puser01")
+
+        calendar1 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(0), home="user01", name="calendar")
+        yield calendar1.createCalendarObjectWithName("1.ics", Component.fromString(self.caldata1))
+        yield self.commitTransaction(0)
+
+        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics")
+        attachment, _ignore_location = yield object1.addAttachment(None, MimeType.fromString("text/plain"), "test.txt", MemoryStream("Here is some text."))
+        remote_id = attachment.id()
+        yield self.commitTransaction(0)
+
+        home1 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name="puser01")
+        shared_object = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(1), home="puser01", calendar_name="shared-calendar", name="1.ics")
+        attachment = yield ManagedAttachment._create(self.theTransactionUnderTest(1), None, home1.id())
+        attachment._contentType = MimeType.fromString("text/plain")
+        attachment._name = "test.txt"
+        yield shared_object.ownerHome().readAttachmentData(remote_id, attachment)
+        yield self.commitTransaction(1)
+
+
+    @inlineCallbacks
+    def test_get_attachment_links(self):
+        """
+        Test that action=get-attachment-links works.
+        """
+
+        yield self.createShare("user01", "puser01")
+
+        calendar1 = yield self.calendarUnderTest(txn=self.theTransactionUnderTest(0), home="user01", name="calendar")
+        cobj1 = yield calendar1.createCalendarObjectWithName("1.ics", Component.fromString(self.caldata1))
+        calobjID = cobj1.id()
+        yield self.commitTransaction(0)
+
+        object1 = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(0), home="user01", calendar_name="calendar", name="1.ics")
+        attachment, _ignore_location = yield object1.addAttachment(None, MimeType.fromString("text/plain"), "test.txt", MemoryStream("Here is some text."))
+        attID = attachment.id()
+        managedID = attachment.managedID()
+        yield self.commitTransaction(0)
+
+        shared_object = yield self.calendarObjectUnderTest(txn=self.theTransactionUnderTest(1), home="puser01", calendar_name="shared-calendar", name="1.ics")
+        links = yield shared_object.ownerHome().getAttachmentLinks()
+        self.assertEqual(len(links), 1)
+        self.assertTrue(isinstance(links[0], AttachmentLink))
+        self.assertEqual(links[0]._attachmentID, attID)
+        self.assertEqual(links[0]._managedID, managedID)
+        self.assertEqual(links[0]._calendarObjectID, calobjID)
+        yield self.commitTransaction(1)
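
The three tests above fetch attachment results across the conduit from the sharee pod; the returned store objects presumably travel as JSON-friendly serialized forms and are rebuilt locally (compare the _to_serialize/_to_serialize_list transforms added in podding/util.py below). A rough, self-contained sketch of that kind of round trip, with purely illustrative class names:

    class ToyAttachment(object):
        def __init__(self, name, content_type):
            self.name = name
            self.content_type = content_type

        def serialize(self):
            # Reduce the object to plain data for the JSON response.
            return {"name": self.name, "contentType": self.content_type}

        @classmethod
        def deserialize(cls, data):
            # Rebuild a local stand-in object on the calling pod.
            return cls(data["name"], data["contentType"])


    def to_serialize_list(values):
        return [v.serialize() for v in values]


    wire = to_serialize_list([ToyAttachment("test.txt", "text/plain")])
    local = [ToyAttachment.deserialize(item) for item in wire]
    print(local[0].name, local[0].content_type)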

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/test/test_store_api.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/test/test_store_api.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/test/test_store_api.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -104,8 +104,8 @@
 
         from txdav.caldav.datastore.sql_external import CalendarHomeExternal
         recipient = yield txn.store().directoryService().recordWithUID(uid)
-        resourceID = yield txn.store().conduit.send_home_resource_id(self, recipient)
-        home = CalendarHomeExternal(txn, recipient.uid, resourceID) if resourceID is not None else None
+        resourceID = yield txn.store().conduit.send_home_resource_id(txn, recipient)
+        home = CalendarHomeExternal.makeSyntheticExternalHome(txn, recipient.uid, resourceID) if resourceID is not None else None
         if home:
             home._childClass = home._childClass._externalClass
         returnValue(home)

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/test/util.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/test/util.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/test/util.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -14,25 +14,33 @@
 # limitations under the License.
 ##
 
+from twisted.internet import reactor
 from twisted.internet.defer import inlineCallbacks, returnValue
+from twisted.internet.protocol import Protocol
 
 from txdav.caldav.datastore.scheduling.ischedule.localservers import (
     Server, ServersDB
 )
 from txdav.common.datastore.podding.conduit import PoddingConduit
+from txdav.common.datastore.podding.request import ConduitRequest
 from txdav.common.datastore.sql_tables import _BIND_MODE_WRITE
 from txdav.common.datastore.test.util import (
     CommonCommonTests, SQLStoreBuilder, buildTestDirectory
 )
 
 import txweb2.dav.test.util
+from txweb2 import responsecode
+from txweb2.http import Response, JSONResponse
+from txweb2.http_headers import MimeDisposition, MimeType
+from txweb2.stream import ProducerStream
 
 from twext.enterprise.ienterprise import AlreadyFinishedError
+from twext.enterprise.jobqueue import JobItem
 
 import json
 
 
-class FakeConduitRequest(object):
+class FakeConduitRequest(ConduitRequest):
     """
     A conduit request that sends messages internally rather than using HTTP
     """
@@ -42,11 +50,12 @@
     @classmethod
     def addServerStore(cls, server, store):
         """
-        Add a store mapped to a server. These mappings are used to "deliver" conduit
-        requests to the appropriate store.
+        Add a store mapped to a server. These mappings are used to "deliver"
+        conduit requests to the appropriate store.
 
     @param server: the server
     @type server: L{Server}
+
         @param store: the store
         @type store: L{ICommonDataStore}
         """
@@ -54,28 +63,17 @@
         cls.storeMap[server.details()] = store
 
 
-    def __init__(self, server, data, stream=None, stream_type=None):
-
+    def __init__(
+        self, server, data, stream=None, stream_type=None, writeStream=None
+    ):
         self.server = server
         self.data = json.dumps(data)
         self.stream = stream
         self.streamType = stream_type
+        self.writeStream = writeStream
 
 
     @inlineCallbacks
-    def doRequest(self, txn):
-
-        # Generate an HTTP client request
-        try:
-            response = (yield self._processRequest())
-            response = json.loads(response)
-        except Exception as e:
-            raise ValueError("Failed cross-pod request: {}".format(e))
-
-        returnValue(response)
-
-
-    @inlineCallbacks
     def _processRequest(self):
         """
         Process the request by sending it to the relevant server.
@@ -90,19 +88,53 @@
             j["stream"] = self.stream
             j["streamType"] = self.streamType
         try:
-            result = yield store.conduit.processRequest(j)
+            if store.conduit.isStreamAction(j):
+                stream = ProducerStream()
+
+                class StreamProtocol(Protocol):
+                    def connectionMade(self):
+                        stream.registerProducer(self.transport, False)
+
+                    def dataReceived(self, data):
+                        stream.write(data)
+
+                    def connectionLost(self, reason):
+                        stream.finish()
+
+                result = yield store.conduit.processRequestStream(
+                    j, StreamProtocol()
+                )
+
+                try:
+                    ct, name = result
+                except ValueError:
+                    code = responsecode.BAD_REQUEST
+                else:
+                    headers = {"content-type": MimeType.fromString(ct)}
+                    headers["content-disposition"] = MimeDisposition(
+                        "attachment", params={"filename": name}
+                    )
+                    returnValue(Response(responsecode.OK, headers, stream))
+            else:
+                result = yield store.conduit.processRequest(j)
+                code = responsecode.OK
         except Exception as e:
             # Send the exception over to the other side
             result = {
                 "result": "exception",
-                "class": ".".join((e.__class__.__module__, e.__class__.__name__,)),
-                "request": str(e),
+                "class": ".".join((
+                    e.__class__.__module__,
+                    e.__class__.__name__,
+                )),
+                "details": str(e),
             }
-        result = json.dumps(result)
-        returnValue(result)
+            code = responsecode.BAD_REQUEST
 
+        response = JSONResponse(code, result)
+        returnValue(response)
 
 
+
 class MultiStoreConduitTest(CommonCommonTests, txweb2.dav.test.util.TestCase):
 
     numberOfStores = 2
@@ -110,11 +142,15 @@
     theStoreBuilders = []
     theStores = []
     activeTransactions = []
+    accounts = None
+    augments = None
 
     def __init__(self, methodName='runTest'):
         txweb2.dav.test.util.TestCase.__init__(self, methodName)
         while len(self.theStoreBuilders) < self.numberOfStores:
-            self.theStoreBuilders.append(SQLStoreBuilder(count=len(self.theStoreBuilders)))
+            self.theStoreBuilders.append(
+                SQLStoreBuilder(count=len(self.theStoreBuilders))
+            )
         self.theStores = [None] * self.numberOfStores
         self.activeTransactions = [None] * self.numberOfStores
 
@@ -129,26 +165,38 @@
             for j in range(self.numberOfStores):
                 letter = chr(ord("A") + j)
                 port = 8008 + 100 * j
-                server = Server(letter, "http://127.0.0.1:{}".format(port), letter, j == i)
+                server = Server(
+                    letter, "http://127.0.0.1:{}".format(port), letter, j == i
+                )
                 serversDB.addServer(server)
 
             if i == 0:
                 yield self.buildStoreAndDirectory(
                     serversDB=serversDB,
-                    storeBuilder=self.theStoreBuilders[i]
+                    storeBuilder=self.theStoreBuilders[i],
+                    accounts=self.accounts,
+                    augments=self.augments,
                 )
                 self.theStores[i] = self.store
             else:
-                self.theStores[i] = yield self.buildStore(self.theStoreBuilders[i])
+                self.theStores[i] = yield self.buildStore(
+                    self.theStoreBuilders[i]
+                )
                 directory = buildTestDirectory(
-                    self.theStores[i], self.mktemp(), serversDB=serversDB
+                    self.theStores[i],
+                    self.mktemp(),
+                    serversDB=serversDB,
+                    accounts=self.accounts,
+                    augments=self.augments,
                 )
                 self.theStores[i].setDirectoryService(directory)
 
             self.theStores[i].queryCacher = None     # Cannot use query caching
             self.theStores[i].conduit = self.makeConduit(self.theStores[i])
 
-            FakeConduitRequest.addServerStore(serversDB.getServerById(chr(ord("A") + i)), self.theStores[i])
+            FakeConduitRequest.addServerStore(
+                serversDB.getServerById(chr(ord("A") + i)), self.theStores[i]
+            )
 
 
     def configure(self):
@@ -199,6 +247,14 @@
         self.activeTransactions[count] = None
 
 
+    @inlineCallbacks
+    def waitAllEmpty(self):
+        for i in range(self.numberOfStores):
+            yield JobItem.waitEmpty(
+                self.theStoreUnderTest(i).newTransaction, reactor, 60.0
+            )
+
+
     def makeConduit(self, store):
         conduit = PoddingConduit(store)
         conduit.conduitRequestClass = FakeConduitRequest
@@ -206,15 +262,23 @@
 
 
     @inlineCallbacks
-    def createShare(self, ownerGUID="user01", shareeGUID="puser02", name="calendar"):
+    def createShare(
+        self, ownerGUID="user01", shareeGUID="puser02", name="calendar"
+    ):
 
-        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name=ownerGUID, create=True)
+        home = yield self.homeUnderTest(
+            txn=self.theTransactionUnderTest(0), name=ownerGUID, create=True
+        )
         calendar = yield home.calendarWithName(name)
-        yield calendar.inviteUIDToShare(shareeGUID, _BIND_MODE_WRITE, "shared", shareName="shared-calendar")
+        yield calendar.inviteUIDToShare(
+            shareeGUID, _BIND_MODE_WRITE, "shared", shareName="shared-calendar"
+        )
         yield self.commitTransaction(0)
 
         # ACK: home2 is None
-        home2 = yield self.homeUnderTest(txn=self.theTransactionUnderTest(1), name=shareeGUID)
+        home2 = yield self.homeUnderTest(
+            txn=self.theTransactionUnderTest(1), name=shareeGUID
+        )
         yield home2.acceptShare("shared-calendar")
         yield self.commitTransaction(1)
 
@@ -222,9 +286,13 @@
 
 
     @inlineCallbacks
-    def removeShare(self, ownerGUID="user01", shareeGUID="puser02", name="calendar"):
+    def removeShare(
+        self, ownerGUID="user01", shareeGUID="puser02", name="calendar"
+    ):
 
-        home = yield self.homeUnderTest(txn=self.theTransactionUnderTest(0), name=ownerGUID)
+        home = yield self.homeUnderTest(
+            txn=self.theTransactionUnderTest(0), name=ownerGUID
+        )
         calendar = yield home.calendarWithName(name)
         yield calendar.uninviteUIDFromShare(shareeGUID)
         yield self.commitTransaction(0)
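
FakeConduitRequest (above) now subclasses ConduitRequest but still delivers requests in-process: each test store registers itself against a server identity, and a request is routed to the matching store's conduit rather than over HTTP. A stripped-down sketch of that delivery pattern, with invented names only:

    import json


    class FakeRequest(object):
        storeMap = {}   # server id -> handler standing in for a store's conduit

        @classmethod
        def addServerStore(cls, server_id, handler):
            cls.storeMap[server_id] = handler

        def __init__(self, server_id, data):
            self.server_id = server_id
            self.data = json.dumps(data)    # mimic the JSON body of a real request

        def doRequest(self):
            handler = self.storeMap[self.server_id]
            result = handler(json.loads(self.data))
            # Round-trip the result through JSON, as the HTTP path would.
            return json.loads(json.dumps(result))


    FakeRequest.addServerStore("B", lambda j: {"result": "ok", "echo": j["action"]})
    print(FakeRequest("B", {"action": "ping"}).doRequest())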

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/util.py (from rev 14551, CalendarServer/trunk/txdav/common/datastore/podding/util.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/util.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/podding/util.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,265 @@
+##
+# Copyright (c) 2013-2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from twisted.internet.defer import inlineCallbacks, returnValue
+
+from txdav.common.datastore.podding.base import FailedCrossPodRequestError
+from txdav.common.datastore.sql_notification import NotificationCollection, \
+    NotificationObject
+
+
+class UtilityConduitMixin(object):
+    """
+    Defines utility methods for cross-pod API and mix-ins.
+    """
+
+    #
+    # Utility methods to map from store objects to/from JSON
+    #
+
+    @inlineCallbacks
+    def _getRequestForStoreObject(self, action, storeObject, classMethod):
+        """
+        Create the JSON data needed to identify the remote resource by type and ids, along with any parent resources.
+
+        @param action: the conduit action name
+        @type action: L{str}
+        @param storeObject: the store object that is being operated on
+        @type storeObject: L{object}
+        @param classMethod: indicates whether the method being called is a classmethod
+        @type classMethod: L{bool}
+
+        @return: the transaction in use, the JSON dict to send in the request,
+            the server where the request should be sent
+        @rtype: L{tuple} of (L{CommonStoreTransaction}, L{dict}, L{Server})
+        """
+
+        from txdav.common.datastore.sql import CommonObjectResource, CommonHomeChild, CommonHome
+        result = {
+            "action": action,
+        }
+
+        # Extract the relevant store objects
+        txn = storeObject._txn
+        owner_home = None
+        viewer_home = None
+        home_child = None
+        object_resource = None
+        notification = None
+        if isinstance(storeObject, CommonObjectResource):
+            owner_home = storeObject.ownerHome()
+            viewer_home = storeObject.viewerHome()
+            home_child = storeObject.parentCollection()
+            object_resource = storeObject
+        elif isinstance(storeObject, CommonHomeChild):
+            owner_home = storeObject.ownerHome()
+            viewer_home = storeObject.viewerHome()
+            home_child = storeObject
+            result["classMethod"] = classMethod
+        elif isinstance(storeObject, CommonHome):
+            owner_home = storeObject
+            viewer_home = storeObject
+            txn = storeObject._txn
+            result["classMethod"] = classMethod
+        elif isinstance(storeObject, NotificationCollection):
+            notification = storeObject
+            txn = storeObject._txn
+            result["classMethod"] = classMethod
+
+        # Add store object identities to JSON request
+        if viewer_home:
+            result["homeType"] = viewer_home._homeType
+            result["homeUID"] = viewer_home.uid()
+            if getattr(viewer_home, "_migratingHome", False):
+                result["allowDisabledHome"] = True
+            if home_child:
+                if home_child.owned():
+                    result["homeChildID"] = home_child.id()
+                else:
+                    result["homeChildSharedID"] = home_child.name()
+            if object_resource:
+                result["objectResourceID"] = object_resource.id()
+
+            # Note that the owner_home is always the ownerHome() because in the sharing case
+            # a viewer is accessing the owner's data on another pod.
+            recipient = yield self.store.directoryService().recordWithUID(owner_home.uid())
+
+        elif notification:
+            result["notificationUID"] = notification.uid()
+            if getattr(notification, "_migratingHome", False):
+                result["allowDisabledHome"] = True
+            recipient = yield self.store.directoryService().recordWithUID(notification.uid())
+
+        returnValue((txn, result, recipient.server(),))
+
+
+    @inlineCallbacks
+    def _getStoreObjectForRequest(self, txn, request):
+        """
+        Resolve the supplied JSON data to get a store object to operate on.
+        """
+
+        returnObject = txn
+        classObject = None
+
+        if "allowDisabledHome" in request:
+            txn._allowDisabled = True
+
+        if "homeUID" in request:
+            home = yield txn.homeWithUID(request["homeType"], request["homeUID"])
+            if home is None:
+                raise FailedCrossPodRequestError("Invalid owner UID specified")
+            home._internalRequest = False
+            returnObject = home
+            if request.get("classMethod", False):
+                classObject = home._childClass
+
+        if "homeChildID" in request:
+            homeChild = yield home.childWithID(request["homeChildID"])
+            if homeChild is None:
+                raise FailedCrossPodRequestError("Invalid home child specified")
+            returnObject = homeChild
+            if request.get("classMethod", False):
+                classObject = homeChild._objectResourceClass
+        elif "homeChildSharedID" in request:
+            homeChild = yield home.childWithName(request["homeChildSharedID"])
+            if homeChild is None:
+                raise FailedCrossPodRequestError("Invalid home child specified")
+            returnObject = homeChild
+            if request.get("classMethod", False):
+                classObject = homeChild._objectResourceClass
+
+        if "objectResourceID" in request:
+            objectResource = yield homeChild.objectResourceWithID(request["objectResourceID"])
+            if objectResource is None:
+                raise FailedCrossPodRequestError("Invalid object resource specified")
+            returnObject = objectResource
+
+        if "notificationUID" in request:
+            notification = yield txn.notificationsWithUID(request["notificationUID"])
+            if notification is None:
+                raise FailedCrossPodRequestError("Invalid notification UID specified")
+            notification._internalRequest = False
+            returnObject = notification
+            if request.get("classMethod", False):
+                classObject = NotificationObject
+
+        returnValue((returnObject, classObject,))
+
+
+    #
+    # We can simplify code generation for simple calls by dynamically generating the appropriate class methods.
+    #
+
+    @inlineCallbacks
+    def _simple_object_send(self, actionName, storeObject, classMethod=False, transform=None, args=None, kwargs=None):
+        """
+        A simple send operation that returns a value.
+
+        @param actionName: name of the action.
+        @type actionName: C{str}
+        @param storeObject: the store object being operated on.
+        @type storeObject: L{object}
+        @param classMethod: indicates whether the method being called is a classmethod.
+        @type classMethod: L{bool}
+        @param transform: a function used to convert the JSON response into return values.
+        @type transform: C{callable}
+        @param args: list of optional arguments.
+        @type args: C{list}
+        @param kwargs: optional keyword arguments.
+        @type kwargs: C{dict}
+        """
+
+        txn, request, server = yield self._getRequestForStoreObject(actionName, storeObject, classMethod)
+        if args is not None:
+            request["arguments"] = args
+        if kwargs is not None:
+            request["keywords"] = kwargs
+        response = yield self.sendRequestToServer(txn, server, request)
+        returnValue(transform(response) if transform is not None else response)
+
+
+    @inlineCallbacks
+    def _simple_object_recv(self, txn, actionName, request, method, transform=None):
+        """
+        A simple recv operation that returns a value. We also look for an optional set of arguments/keywords
+        and include those only if present.
+
+        @param actionName: name of the action.
+        @type actionName: C{str}
+        @param request: request arguments
+        @type request: C{dict}
+        @param method: name of the method to execute on the shared resource to get the result.
+        @type method: C{str}
+        @param transform: method to call on returned JSON value to convert it to something useful.
+        @type transform: C{callable}
+        """
+
+        storeObject, classObject = yield self._getStoreObjectForRequest(txn, request)
+        if classObject is not None:
+            value = yield getattr(classObject, method)(storeObject, *request.get("arguments", ()), **request.get("keywords", {}))
+        else:
+            value = yield getattr(storeObject, method)(*request.get("arguments", ()), **request.get("keywords", {}))
+
+        returnValue(transform(value) if transform is not None else value)
+
+
+    #
+    # Factory methods for binding actions to the conduit class
+    #
+    @staticmethod
+    def _make_simple_action(bindcls, action, method, classMethod=False, transform_recv_result=None, transform_send_result=None):
+        setattr(
+            bindcls,
+            "send_{}".format(action),
+            lambda self, storeObject, *args, **kwargs:
+                self._simple_object_send(action, storeObject, classMethod=classMethod, transform=transform_send_result, args=args, kwargs=kwargs)
+        )
+        setattr(
+            bindcls,
+            "recv_{}".format(action),
+            lambda self, txn, message:
+                self._simple_object_recv(txn, action, message, method, transform=transform_recv_result)
+        )
+
+
+    #
+    # Transforms for returned data
+    #
+    @staticmethod
+    def _to_serialize(value):
+        """
+        Convert the value to the external (JSON-based) representation.
+        """
+        return value.serialize() if value is not None else None
+
+
+    @staticmethod
+    def _to_serialize_list(value):
+        """
+        Convert the value to the external (JSON-based) representation.
+        """
+        return [v.serialize() for v in value]
+
+
+    @staticmethod
+    def _to_string(value):
+        return str(value)
+
+
+    @staticmethod
+    def _to_tuple(value):
+        return tuple(value)
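
The recv side of these simple actions resolves a store object from the request's identity keys and then dispatches on a method name, passing along any optional "arguments"/"keywords" carried in the JSON message. A minimal sketch of that dispatch step follows; in the real code the method name is fixed by _make_simple_action rather than carried in the message, and the toy classes here are not CalendarServer code.

    class ToyCalendarObject(object):
        def component(self):
            return "BEGIN:VCALENDAR..."

        def setComponent(self, text, options=None):
            return {"set": text, "options": options}


    def recv_simple(store_object, request):
        # Mirrors _simple_object_recv: call the named method with whatever
        # optional arguments/keywords the message carried.
        method = request["method"]
        args = request.get("arguments", ())
        kwargs = request.get("keywords", {})
        return getattr(store_object, method)(*args, **kwargs)


    obj = ToyCalendarObject()
    print(recv_simple(obj, {"method": "component"}))
    print(recv_simple(obj, {
        "method": "setComponent",
        "arguments": ["BEGIN:VCALENDAR..."],
        "keywords": {"options": {"schedule": False}},
    }))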

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -29,88 +29,72 @@
 
 from pycalendar.datetime import DateTime
 
+from twext.enterprise.dal.parseschema import splitSQLString
 from twext.enterprise.dal.syntax import (
     Delete, utcNowSQL, Union, Insert, Len, Max, Parameter, SavepointAction,
-    Select, Update, ColumnSyntax, TableSyntax, Upper, Count, ALL_COLUMNS, Sum,
+    Select, Update, Count, ALL_COLUMNS, Sum,
     DatabaseLock, DatabaseUnlock)
 from twext.enterprise.ienterprise import AlreadyFinishedError
 from twext.enterprise.jobqueue import LocalQueuer
 from twext.enterprise.util import parseSQLTimestamp
-from twext.internet.decorate import memoizedKey, Memoizable
+from twext.internet.decorate import Memoizable
 from twext.python.clsprop import classproperty
 from twext.python.log import Logger
-from txweb2.http_headers import MimeType
 
 from twisted.application.service import Service
 from twisted.internet import reactor
 from twisted.internet.defer import inlineCallbacks, returnValue, succeed
-from twisted.python import hashlib
 from twisted.python.failure import Failure
 from twisted.python.modules import getModule
 from twisted.python.util import FancyEqMixin
 
 from twistedcaldav.config import config
-from twistedcaldav.dateops import datetimeMktime, pyCalendarTodatetime
+from twistedcaldav.dateops import datetimeMktime, pyCalendarToSQLTimestamp
 
 from txdav.base.datastore.util import QueryCacher
-from txdav.base.datastore.util import normalizeUUIDOrNot
-from txdav.base.propertystore.base import PropertyName
 from txdav.base.propertystore.none import PropertyStore as NonePropertyStore
 from txdav.base.propertystore.sql import PropertyStore
 from txdav.caldav.icalendarstore import ICalendarTransaction, ICalendarStore
 from txdav.carddav.iaddressbookstore import IAddressBookTransaction
 from txdav.common.datastore.common import HomeChildBase
 from txdav.common.datastore.podding.conduit import PoddingConduit
-from txdav.common.datastore.sql_tables import _BIND_MODE_DIRECT, \
-    _BIND_MODE_INDIRECT, _BIND_MODE_OWN, _BIND_STATUS_ACCEPTED, \
-    _BIND_STATUS_DECLINED, _BIND_STATUS_DELETED, _BIND_STATUS_INVALID, \
-    _BIND_STATUS_INVITED, _HOME_STATUS_EXTERNAL, _HOME_STATUS_NORMAL, \
-    _HOME_STATUS_PURGING, schema, splitSQLString
+from txdav.common.datastore.sql_apn import APNSubscriptionsMixin
+from txdav.common.datastore.sql_directory import DelegatesAPIMixin, \
+    GroupsAPIMixin, GroupCacherAPIMixin
+from txdav.common.datastore.sql_imip import imipAPIMixin
+from txdav.common.datastore.sql_notification import NotificationCollection
+from txdav.common.datastore.sql_tables import _BIND_MODE_OWN, _BIND_STATUS_ACCEPTED, \
+    _HOME_STATUS_EXTERNAL, _HOME_STATUS_NORMAL, \
+    _HOME_STATUS_PURGING, schema, _HOME_STATUS_MIGRATING, \
+    _HOME_STATUS_DISABLED, _CHILD_TYPE_NORMAL
+from txdav.common.datastore.sql_util import _SharedSyncLogic
+from txdav.common.datastore.sql_sharing import SharingHomeMixIn, SharingMixIn
 from txdav.common.icommondatastore import ConcurrentModification, \
-    RecordNotAllowedError, ExternalShareFailed, ShareNotAllowed, \
-    IndexedSearchException, NotFoundError
+    RecordNotAllowedError, ShareNotAllowed, \
+    IndexedSearchException, EADDRESSBOOKTYPE, ECALENDARTYPE
 from txdav.common.icommondatastore import HomeChildNameNotAllowedError, \
     HomeChildNameAlreadyExistsError, NoSuchHomeChildError, \
     ObjectResourceNameNotAllowedError, ObjectResourceNameAlreadyExistsError, \
-    NoSuchObjectResourceError, AllRetriesFailed, InvalidSubscriptionValues, \
-    InvalidIMIPTokenValues, TooManyObjectResourcesError, \
-    SyncTokenValidException, AlreadyInTrashError
+    NoSuchObjectResourceError, AllRetriesFailed, \
+    TooManyObjectResourcesError, SyncTokenValidException, AlreadyInTrashError
 from txdav.common.idirectoryservice import IStoreDirectoryService, \
     DirectoryRecordNotFoundError
-from txdav.common.inotifications import INotificationCollection, \
-    INotificationObject
 from txdav.idav import ChangeCategory
-from txdav.who.delegates import Delegates
-from txdav.xml import element
 
-from uuid import uuid4, UUID
-
 from zope.interface import implements, directlyProvides
 
-from collections import namedtuple, defaultdict
+from collections import defaultdict
 import datetime
 import inspect
 import itertools
-import json
 import sys
 import time
+from uuid import uuid4
 
 current_sql_schema = getModule(__name__).filePath.sibling("sql_schema").child("current.sql").getContent()
 
 log = Logger()
 
-ECALENDARTYPE = 0
-EADDRESSBOOKTYPE = 1
-ENOTIFICATIONTYPE = 2
-
-# Labels used to identify the class of resource being modified, so that
-# notification systems can target the correct application
-NotifierPrefixes = {
-    ECALENDARTYPE: "CalDAV",
-    EADDRESSBOOKTYPE: "CardDAV",
-}
-
-
 class CommonDataStore(Service, object):
     """
     Shared logic for SQL-based data stores, between calendar and addressbook
@@ -565,7 +549,10 @@
 
 
 
-class CommonStoreTransaction(object):
+class CommonStoreTransaction(
+    GroupsAPIMixin, GroupCacherAPIMixin, DelegatesAPIMixin,
+    imipAPIMixin, APNSubscriptionsMixin,
+):
     """
     Transaction implementation for SQL database.
     """
@@ -585,14 +572,26 @@
 
         self._store = store
         self._queuer = self._store.queuer
-        self._calendarHomes = {}
-        self._addressbookHomes = {}
-        self._notificationHomes = {}
+        self._cachedHomes = {
+            ECALENDARTYPE: {
+                "byUID": defaultdict(dict),
+                "byID": defaultdict(dict),
+            },
+            EADDRESSBOOKTYPE: {
+                "byUID": defaultdict(dict),
+                "byID": defaultdict(dict),
+            },
+        }
+        self._notificationHomes = {
+            "byUID": defaultdict(dict),
+            "byID": defaultdict(dict),
+        }
         self._notifierFactories = notifierFactories
         self._notifiedAlready = set()
         self._bumpedRevisionAlready = set()
         self._label = label
         self._migrating = migrating
+        self._allowDisabled = False
         self._primaryHomeType = None
         self._disableCache = disableCache or not store.queryCachingEnabled()
         if disableCache:
@@ -695,14 +694,11 @@
         ).on(self)
 
 
-    def _determineMemo(self, storeType, uid, create=False, authzUID=None):
+    def _determineMemo(self, storeType, lookupMode, status):
         """
         Determine the memo dictionary to use for homeWithUID.
         """
-        if storeType == ECALENDARTYPE:
-            return self._calendarHomes
-        else:
-            return self._addressbookHomes
+        return self._cachedHomes[storeType][lookupMode][status]
 
 
     @inlineCallbacks
@@ -717,11 +713,11 @@
             yield self.homeWithUID(storeType, uid, create=False)
 
         # Return the memoized list directly
-        returnValue([kv[1] for kv in sorted(self._determineMemo(storeType, None).items(), key=lambda x: x[0])])
+        returnValue([kv[1] for kv in sorted(self._determineMemo(storeType, "byUID", _HOME_STATUS_NORMAL).items(), key=lambda x: x[0])])
 
 
-    @memoizedKey("uid", _determineMemo)
-    def homeWithUID(self, storeType, uid, create=False, authzUID=None):
+    @inlineCallbacks
+    def homeWithUID(self, storeType, uid, status=None, create=False, authzUID=None):
         """
         We need to distinguish between various different users "looking" at a home and its
         child resources because we have per-user properties that depend on which user is "looking".
@@ -733,15 +729,21 @@
         if storeType not in (ECALENDARTYPE, EADDRESSBOOKTYPE):
             raise RuntimeError("Unknown home type.")
 
-        return self._homeClass[storeType].homeWithUID(self, uid, create, authzUID)
+        result = self._determineMemo(storeType, "byUID", status).get(uid)
+        if result is None:
+            result = yield self._homeClass[storeType].homeWithUID(self, uid, status, create, authzUID)
+            if result:
+                self._determineMemo(storeType, "byUID", status)[uid] = result
+                self._determineMemo(storeType, "byID", None)[result.id()] = result
+        returnValue(result)
 
 
-    def calendarHomeWithUID(self, uid, create=False, authzUID=None):
-        return self.homeWithUID(ECALENDARTYPE, uid, create=create, authzUID=authzUID)
+    def calendarHomeWithUID(self, uid, status=None, create=False, authzUID=None):
+        return self.homeWithUID(ECALENDARTYPE, uid, status=status, create=create, authzUID=authzUID)
 
 
-    def addressbookHomeWithUID(self, uid, create=False, authzUID=None):
-        return self.homeWithUID(EADDRESSBOOKTYPE, uid, create=create, authzUID=authzUID)
+    def addressbookHomeWithUID(self, uid, status=None, create=False, authzUID=None):
+        return self.homeWithUID(EADDRESSBOOKTYPE, uid, status=status, create=create, authzUID=authzUID)
 
 
     @inlineCallbacks
@@ -749,12 +751,15 @@
         """
         Load a calendar or addressbook home by its integer resource ID.
         """
-        uid = (yield self._homeClass[storeType].homeUIDWithResourceID(self, rid))
-        if uid:
-            # Always get the owner's view of the home = i.e., authzUID=uid
-            result = (yield self.homeWithUID(storeType, uid, authzUID=uid))
-        else:
-            result = None
+        if storeType not in (ECALENDARTYPE, EADDRESSBOOKTYPE):
+            raise RuntimeError("Unknown home type.")
+
+        result = self._determineMemo(storeType, "byID", None).get(rid)
+        if result is None:
+            result = yield self._homeClass[storeType].homeWithResourceID(self, rid)
+            if result:
+                self._determineMemo(storeType, "byID", None)[rid] = result
+                self._determineMemo(storeType, "byUID", result.status())[result.uid()] = result
         returnValue(result)
 
 
@@ -766,1303 +771,36 @@
         return self.homeWithResourceID(EADDRESSBOOKTYPE, rid)
 
 
-    @memoizedKey("uid", "_notificationHomes")
-    def notificationsWithUID(self, uid, create=True):
+    @inlineCallbacks
+    def notificationsWithUID(self, uid, status=None, create=False):
         """
         Implement notificationsWithUID.
         """
-        return NotificationCollection.notificationsWithUID(self, uid, create)
 
+        result = self._notificationHomes["byUID"][status].get(uid)
+        if result is None:
+            result = yield NotificationCollection.notificationsWithUID(self, uid, status=status, create=create)
+            if result:
+                self._notificationHomes["byUID"][status][uid] = result
+                self._notificationHomes["byID"][None][result.id()] = result
+        returnValue(result)
 
-    @memoizedKey("rid", "_notificationHomes")
+
+    @inlineCallbacks
     def notificationsWithResourceID(self, rid):
         """
         Implement notificationsWithResourceID.
         """
-        return NotificationCollection.notificationsWithResourceID(self, rid)
 
+        result = self._notificationHomes["byID"][None].get(rid)
+        if result is None:
+            result = yield NotificationCollection.notificationsWithResourceID(self, rid)
+            if result:
+                self._notificationHomes["byID"][None][rid] = result
+                self._notificationHomes["byUID"][result.status()][result.uid()] = result
+        returnValue(result)
 
-    @classproperty
-    def _insertAPNSubscriptionQuery(cls):
-        apn = schema.APN_SUBSCRIPTIONS
-        return Insert({
-            apn.TOKEN: Parameter("token"),
-            apn.RESOURCE_KEY: Parameter("resourceKey"),
-            apn.MODIFIED: Parameter("modified"),
-            apn.SUBSCRIBER_GUID: Parameter("subscriber"),
-            apn.USER_AGENT: Parameter("userAgent"),
-            apn.IP_ADDR: Parameter("ipAddr")
-        })
 
-
-    @classproperty
-    def _updateAPNSubscriptionQuery(cls):
-        apn = schema.APN_SUBSCRIPTIONS
-        return Update(
-            {
-                apn.MODIFIED: Parameter("modified"),
-                apn.SUBSCRIBER_GUID: Parameter("subscriber"),
-                apn.USER_AGENT: Parameter("userAgent"),
-                apn.IP_ADDR: Parameter("ipAddr")
-            },
-            Where=(apn.TOKEN == Parameter("token")).And(
-                apn.RESOURCE_KEY == Parameter("resourceKey"))
-        )
-
-
-    @classproperty
-    def _selectAPNSubscriptionQuery(cls):
-        apn = schema.APN_SUBSCRIPTIONS
-        return Select(
-            [apn.MODIFIED, apn.SUBSCRIBER_GUID],
-            From=apn,
-            Where=(apn.TOKEN == Parameter("token")).And(
-                apn.RESOURCE_KEY == Parameter("resourceKey")
-            )
-        )
-
-
-    @inlineCallbacks
-    def addAPNSubscription(
-        self, token, key, timestamp, subscriber,
-        userAgent, ipAddr
-    ):
-        if not (token and key and timestamp and subscriber):
-            raise InvalidSubscriptionValues()
-
-        # Cap these values at 255 characters
-        userAgent = userAgent[:255]
-        ipAddr = ipAddr[:255]
-
-        row = yield self._selectAPNSubscriptionQuery.on(
-            self,
-            token=token, resourceKey=key
-        )
-        if not row:  # Subscription does not yet exist
-            try:
-                yield self._insertAPNSubscriptionQuery.on(
-                    self,
-                    token=token, resourceKey=key, modified=timestamp,
-                    subscriber=subscriber, userAgent=userAgent,
-                    ipAddr=ipAddr)
-            except Exception:
-                # Subscription may have been added by someone else, which is fine
-                pass
-
-        else:  # Subscription exists, so update with new timestamp and subscriber
-            try:
-                yield self._updateAPNSubscriptionQuery.on(
-                    self,
-                    token=token, resourceKey=key, modified=timestamp,
-                    subscriber=subscriber, userAgent=userAgent,
-                    ipAddr=ipAddr)
-            except Exception:
-                # Subscription may have been added by someone else, which is fine
-                pass
-
-
-    @classproperty
-    def _removeAPNSubscriptionQuery(cls):
-        apn = schema.APN_SUBSCRIPTIONS
-        return Delete(From=apn,
-                      Where=(apn.TOKEN == Parameter("token")).And(
-                          apn.RESOURCE_KEY == Parameter("resourceKey")))
-
-
-    def removeAPNSubscription(self, token, key):
-        return self._removeAPNSubscriptionQuery.on(
-            self,
-            token=token, resourceKey=key)
-
-
-    @classproperty
-    def _purgeOldAPNSubscriptionQuery(cls):
-        apn = schema.APN_SUBSCRIPTIONS
-        return Delete(From=apn,
-                      Where=(apn.MODIFIED < Parameter("olderThan")))
-
-
-    def purgeOldAPNSubscriptions(self, olderThan):
-        return self._purgeOldAPNSubscriptionQuery.on(
-            self,
-            olderThan=olderThan)
-
-
-    @classproperty
-    def _apnSubscriptionsByTokenQuery(cls):
-        apn = schema.APN_SUBSCRIPTIONS
-        return Select([apn.RESOURCE_KEY, apn.MODIFIED, apn.SUBSCRIBER_GUID],
-                      From=apn, Where=apn.TOKEN == Parameter("token"))
-
-
-    def apnSubscriptionsByToken(self, token):
-        return self._apnSubscriptionsByTokenQuery.on(self, token=token)
-
-
-    @classproperty
-    def _apnSubscriptionsByKeyQuery(cls):
-        apn = schema.APN_SUBSCRIPTIONS
-        return Select([apn.TOKEN, apn.SUBSCRIBER_GUID],
-                      From=apn, Where=apn.RESOURCE_KEY == Parameter("resourceKey"))
-
-
-    def apnSubscriptionsByKey(self, key):
-        return self._apnSubscriptionsByKeyQuery.on(self, resourceKey=key)
-
-
-    @classproperty
-    def _apnSubscriptionsBySubscriberQuery(cls):
-        apn = schema.APN_SUBSCRIPTIONS
-        return Select([apn.TOKEN, apn.RESOURCE_KEY, apn.MODIFIED, apn.USER_AGENT, apn.IP_ADDR],
-                      From=apn, Where=apn.SUBSCRIBER_GUID == Parameter("subscriberGUID"))
-
-
-    def apnSubscriptionsBySubscriber(self, guid):
-        return self._apnSubscriptionsBySubscriberQuery.on(self, subscriberGUID=guid)
-
-
-    # Create IMIP token
-
-    @classproperty
-    def _insertIMIPTokenQuery(cls):
-        imip = schema.IMIP_TOKENS
-        return Insert({
-            imip.TOKEN: Parameter("token"),
-            imip.ORGANIZER: Parameter("organizer"),
-            imip.ATTENDEE: Parameter("attendee"),
-            imip.ICALUID: Parameter("icaluid"),
-        })
-
-
-    @inlineCallbacks
-    def imipCreateToken(self, organizer, attendee, icaluid, token=None):
-        if not (organizer and attendee and icaluid):
-            raise InvalidIMIPTokenValues()
-
-        if token is None:
-            token = str(uuid4())
-
-        try:
-            yield self._insertIMIPTokenQuery.on(
-                self,
-                token=token, organizer=organizer, attendee=attendee,
-                icaluid=icaluid)
-        except Exception:
-            # TODO: is it okay if someone else created the same row just now?
-            pass
-        returnValue(token)
-
-    # Lookup IMIP organizer+attendee+icaluid for token
-
-
-    @classproperty
-    def _selectIMIPTokenByTokenQuery(cls):
-        imip = schema.IMIP_TOKENS
-        return Select([imip.ORGANIZER, imip.ATTENDEE, imip.ICALUID], From=imip,
-                      Where=(imip.TOKEN == Parameter("token")))
-
-
-    def imipLookupByToken(self, token):
-        return self._selectIMIPTokenByTokenQuery.on(self, token=token)
-
-    # Lookup IMIP token for organizer+attendee+icaluid
-
-
-    @classproperty
-    def _selectIMIPTokenQuery(cls):
-        imip = schema.IMIP_TOKENS
-        return Select(
-            [imip.TOKEN],
-            From=imip,
-            Where=(imip.ORGANIZER == Parameter("organizer")).And(
-                imip.ATTENDEE == Parameter("attendee")).And(
-                imip.ICALUID == Parameter("icaluid"))
-        )
-
-
-    @classproperty
-    def _updateIMIPTokenQuery(cls):
-        imip = schema.IMIP_TOKENS
-        return Update(
-            {imip.ACCESSED: utcNowSQL, },
-            Where=(imip.ORGANIZER == Parameter("organizer")).And(
-                imip.ATTENDEE == Parameter("attendee")).And(
-                    imip.ICALUID == Parameter("icaluid"))
-        )
-
-
-    @inlineCallbacks
-    def imipGetToken(self, organizer, attendee, icaluid):
-        row = (yield self._selectIMIPTokenQuery.on(
-            self, organizer=organizer,
-            attendee=attendee, icaluid=icaluid))
-        if row:
-            token = row[0][0]
-            # update the timestamp
-            yield self._updateIMIPTokenQuery.on(
-                self, organizer=organizer,
-                attendee=attendee, icaluid=icaluid)
-        else:
-            token = None
-        returnValue(token)
-
-
-    # Remove IMIP token
-    @classproperty
-    def _removeIMIPTokenQuery(cls):
-        imip = schema.IMIP_TOKENS
-        return Delete(From=imip,
-                      Where=(imip.TOKEN == Parameter("token")))
-
-
-    def imipRemoveToken(self, token):
-        return self._removeIMIPTokenQuery.on(self, token=token)
-
-
-    # Purge old IMIP tokens
-    @classproperty
-    def _purgeOldIMIPTokensQuery(cls):
-        imip = schema.IMIP_TOKENS
-        return Delete(From=imip,
-                      Where=(imip.ACCESSED < Parameter("olderThan")))
-
-
-    def purgeOldIMIPTokens(self, olderThan):
-        """
-        @type olderThan: datetime
-        """
-        return self._purgeOldIMIPTokensQuery.on(self, olderThan=olderThan)
-
-    # End of IMIP
-
-
-    # Groups
-
-    @classproperty
-    def _addGroupQuery(cls):
-        gr = schema.GROUPS
-        return Insert(
-            {
-                gr.NAME: Parameter("name"),
-                gr.GROUP_UID: Parameter("groupUID"),
-                gr.MEMBERSHIP_HASH: Parameter("membershipHash")
-            },
-            Return=gr.GROUP_ID
-        )
-
-
-    @classproperty
-    def _updateGroupQuery(cls):
-        gr = schema.GROUPS
-        return Update(
-            {
-                gr.MEMBERSHIP_HASH: Parameter("membershipHash"),
-                gr.NAME: Parameter("name"),
-                gr.MODIFIED: Parameter("timestamp"),
-                gr.EXTANT: Parameter("extant"),
-            },
-            Where=(gr.GROUP_UID == Parameter("groupUID"))
-        )
-
-
-    @classproperty
-    def _groupByUID(cls):
-        gr = schema.GROUPS
-        return Select(
-            [gr.GROUP_ID, gr.NAME, gr.MEMBERSHIP_HASH, gr.MODIFIED, gr.EXTANT],
-            From=gr,
-            Where=(gr.GROUP_UID == Parameter("groupUID"))
-        )
-
-
-    @classproperty
-    def _groupByID(cls):
-        gr = schema.GROUPS
-        return Select(
-            [gr.GROUP_UID, gr.NAME, gr.MEMBERSHIP_HASH, gr.EXTANT],
-            From=gr,
-            Where=(gr.GROUP_ID == Parameter("groupID"))
-        )
-
-
-    @classproperty
-    def _deleteGroup(cls):
-        gr = schema.GROUPS
-        return Delete(
-            From=gr,
-            Where=(gr.GROUP_ID == Parameter("groupID"))
-        )
-
-
-    @inlineCallbacks
-    def addGroup(self, groupUID, name, membershipHash):
-        """
-        @type groupUID: C{unicode}
-        @type name: C{unicode}
-        @type membershipHash: C{str}
-        """
-        record = yield self.directoryService().recordWithUID(groupUID)
-        if record is None:
-            returnValue(None)
-
-        groupID = (yield self._addGroupQuery.on(
-            self,
-            name=name.encode("utf-8"),
-            groupUID=groupUID.encode("utf-8"),
-            membershipHash=membershipHash
-        ))[0][0]
-
-        yield self.refreshGroup(
-            groupUID, record, groupID, name.encode("utf-8"), membershipHash, True
-        )
-        returnValue(groupID)
-
-
-    def updateGroup(self, groupUID, name, membershipHash, extant=True):
-        """
-        @type groupUID: C{unicode}
-        @type name: C{unicode}
-        @type membershipHash: C{str}
-        @type extant: C{boolean}
-        """
-        timestamp = datetime.datetime.utcnow()
-        return self._updateGroupQuery.on(
-            self,
-            name=name.encode("utf-8"),
-            groupUID=groupUID.encode("utf-8"),
-            timestamp=timestamp,
-            membershipHash=membershipHash,
-            extant=(1 if extant else 0)
-        )
-
-
-    @inlineCallbacks
-    def groupByUID(self, groupUID, create=True):
-        """
-        Return or create a record for the group UID.
-
-        @type groupUID: C{unicode}
-
-        @return: Deferred firing with tuple of group ID C{str}, group name
-            C{unicode}, membership hash C{str}, modified timestamp, and
-            extant C{boolean}
-        """
-        results = (
-            yield self._groupByUID.on(
-                self, groupUID=groupUID.encode("utf-8")
-            )
-        )
-        if results:
-            returnValue((
-                results[0][0],  # group id
-                results[0][1].decode("utf-8"),  # name
-                results[0][2],  # membership hash
-                results[0][3],  # modified timestamp
-                bool(results[0][4]),  # extant
-            ))
-        elif create:
-            savepoint = SavepointAction("groupByUID")
-            yield savepoint.acquire(self)
-            try:
-                groupID = yield self.addGroup(groupUID, u"", "")
-                if groupID is None:
-                    # The record does not actually exist within the directory
-                    yield savepoint.release(self)
-                    returnValue((None, None, None, None, None))
-
-            except Exception:
-                yield savepoint.rollback(self)
-                results = (
-                    yield self._groupByUID.on(
-                        self, groupUID=groupUID.encode("utf-8")
-                    )
-                )
-                if results:
-                    returnValue((
-                        results[0][0],  # group id
-                        results[0][1].decode("utf-8"),  # name
-                        results[0][2],  # membership hash
-                        results[0][3],  # modified timestamp
-                        bool(results[0][4]),  # extant
-                    ))
-                else:
-                    returnValue((None, None, None, None, None))
-            else:
-                yield savepoint.release(self)
-                results = (
-                    yield self._groupByUID.on(
-                        self, groupUID=groupUID.encode("utf-8")
-                    )
-                )
-                if results:
-                    returnValue((
-                        results[0][0],  # group id
-                        results[0][1].decode("utf-8"),  # name
-                        results[0][2],  # membership hash
-                        results[0][3],  # modified timestamp
-                        bool(results[0][4]),  # extant
-                    ))
-                else:
-                    returnValue((None, None, None, None, None))
-        else:
-            returnValue((None, None, None, None, None))
-
-
-    @inlineCallbacks
-    def groupByID(self, groupID):
-        """
-        Given a group ID, return the group UID, or raise NotFoundError
-
-        @type groupID: C{str}
-        @return: Deferred firing with a tuple of group UID C{unicode},
-            group name C{unicode}, membership hash C{str}, and extant C{boolean}
-        """
-        try:
-            results = (yield self._groupByID.on(self, groupID=groupID))[0]
-            if results:
-                results = (
-                    results[0].decode("utf-8"),
-                    results[1].decode("utf-8"),
-                    results[2],
-                    bool(results[3])
-                )
-            returnValue(results)
-        except IndexError:
-            raise NotFoundError
-
-
-    def deleteGroup(self, groupID):
-        return self._deleteGroup.on(self, groupID=groupID)
-
-    # End of Groups
-
-
-    # Group Members
-
-    @classproperty
-    def _addMemberToGroupQuery(cls):
-        gm = schema.GROUP_MEMBERSHIP
-        return Insert(
-            {
-                gm.GROUP_ID: Parameter("groupID"),
-                gm.MEMBER_UID: Parameter("memberUID")
-            }
-        )
-
-
-    @classproperty
-    def _removeMemberFromGroupQuery(cls):
-        gm = schema.GROUP_MEMBERSHIP
-        return Delete(
-            From=gm,
-            Where=(
-                gm.GROUP_ID == Parameter("groupID")
-            ).And(
-                gm.MEMBER_UID == Parameter("memberUID")
-            )
-        )
-
-
-    @classproperty
-    def _selectGroupMembersQuery(cls):
-        gm = schema.GROUP_MEMBERSHIP
-        return Select(
-            [gm.MEMBER_UID],
-            From=gm,
-            Where=(
-                gm.GROUP_ID == Parameter("groupID")
-            )
-        )
-
-
-    @classproperty
-    def _selectGroupsForQuery(cls):
-        gr = schema.GROUPS
-        gm = schema.GROUP_MEMBERSHIP
-
-        return Select(
-            [gr.GROUP_UID],
-            From=gr,
-            Where=(
-                gr.GROUP_ID.In(
-                    Select(
-                        [gm.GROUP_ID],
-                        From=gm,
-                        Where=(
-                            gm.MEMBER_UID == Parameter("uid")
-                        )
-                    )
-                )
-            )
-        )
-
-
-    def addMemberToGroup(self, memberUID, groupID):
-        return self._addMemberToGroupQuery.on(
-            self, groupID=groupID, memberUID=memberUID.encode("utf-8")
-        )
-
-
-    def removeMemberFromGroup(self, memberUID, groupID):
-        return self._removeMemberFromGroupQuery.on(
-            self, groupID=groupID, memberUID=memberUID.encode("utf-8")
-        )
-
-
-    @inlineCallbacks
-    def groupMemberUIDs(self, groupID):
-        """
-        Returns the cached set of UIDs for members of the given groupID.
-        Sub-groups are not returned in the results but their members are,
-        because the group membership has already been expanded/flattened
-        before storing in the db.
-
-        @param groupID: the group ID
-        @type groupID: C{int}
-        @return: the set of member UIDs
-        @rtype: a Deferred which fires with a set() of C{str} UIDs
-        """
-        members = set()
-        results = (yield self._selectGroupMembersQuery.on(self, groupID=groupID))
-        for row in results:
-            members.add(row[0].decode("utf-8"))
-        returnValue(members)
-
-
-    @inlineCallbacks
-    def refreshGroup(self, groupUID, record, groupID, cachedName, cachedMembershipHash, cachedExtant):
-        """
-        @param groupUID: the group UID
-        @type groupUID: C{unicode}
-        @param record: the directory record
-        @type record: C{iDirectoryRecord}
-        @param groupID: group resource id
-        @type groupID: C{int}
-        @param cachedName: group name in the database
-        @type cachedName: C{unicode}
-        @param cachedMembershipHash: membership hash in the database
-        @type cachedMembershipHash: C{str}
-        @param cachedExtant: extant flag from the database
-        @type cachedExtant: C{bool}
-
-        @return: Deferred firing with a tuple of membershipChanged C{boolean},
-            the set of added member UIDs, and the set of removed member UIDs
-
-        """
-        if record is not None:
-            memberUIDs = yield record.expandedMemberUIDs()
-            name = record.displayName
-            extant = True
-        else:
-            memberUIDs = frozenset()
-            name = cachedName
-            extant = False
-
-        membershipHashContent = hashlib.md5()
-        for memberUID in sorted(memberUIDs):
-            membershipHashContent.update(str(memberUID))
-        membershipHash = membershipHashContent.hexdigest()
-
-        if cachedMembershipHash != membershipHash:
-            membershipChanged = True
-            log.debug(
-                "Group '{group}' changed", group=name
-            )
-        else:
-            membershipChanged = False
-
-        if membershipChanged or extant != cachedExtant:
-            # also updates group mod date
-            yield self.updateGroup(
-                groupUID, name, membershipHash, extant=extant
-            )
-
-        if membershipChanged:
-            addedUIDs, removedUIDs = yield self.synchronizeMembers(groupID, set(memberUIDs))
-        else:
-            addedUIDs = removedUIDs = None
-
-        returnValue((membershipChanged, addedUIDs, removedUIDs,))
-
-
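The change detection in refreshGroup() above reduces the expanded membership to an md5 digest of the sorted member UIDs, so only a real membership difference (not a reordering) produces a new hash. A minimal standalone sketch of that idea (the helper name is illustrative, not part of the store API):

    import hashlib

    def membership_hash(member_uids):
        # Mirror refreshGroup(): hash the sorted member UIDs so that the
        # digest depends only on which members are present.
        digest = hashlib.md5()
        for uid in sorted(member_uids):
            digest.update(str(uid))
        return digest.hexdigest()

    assert membership_hash([u"uid-2", u"uid-1"]) == membership_hash([u"uid-1", u"uid-2"])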
-    @inlineCallbacks
-    def synchronizeMembers(self, groupID, newMemberUIDs):
-        """
-        Update the group membership table in the database to match the new membership list. This
-        method will diff the existing set with the new set and apply the changes. It also calls out
-        to a groupChanged() method with the set of added and removed members so that other modules
-        that depend on groups can monitor the changes.
-
-        @param groupID: group id of group to update
-        @type groupID: L{int}
-        @param newMemberUIDs: set of new member UIDs in the group
-        @type newMemberUIDs: L{set} of L{str}
-
-        @return: Deferred firing with a tuple of the added and removed member
-            UID sets
-        """
-        cachedMemberUIDs = (yield self.groupMemberUIDs(groupID))
-
-        removed = cachedMemberUIDs - newMemberUIDs
-        for memberUID in removed:
-            yield self.removeMemberFromGroup(memberUID, groupID)
-
-        added = newMemberUIDs - cachedMemberUIDs
-        for memberUID in added:
-            yield self.addMemberToGroup(memberUID, groupID)
-
-        yield self.groupChanged(groupID, added, removed)
-
-        returnValue((added, removed,))
-
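The membership synchronization itself is a plain set difference between the cached database rows and the directory's current view; roughly (hypothetical helper, not part of the store API):

    def diff_membership(cached_uids, new_uids):
        # UIDs only in the new set get inserted; UIDs only in the cache get deleted.
        return new_uids - cached_uids, cached_uids - new_uids

    added, removed = diff_membership({u"a", u"b"}, {u"b", u"c"})
    assert added == set([u"c"]) and removed == set([u"a"])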
-
-    @inlineCallbacks
-    def groupChanged(self, groupID, addedUIDs, removedUIDs):
-        """
-        Called when membership of a group changes.
-
-        @param groupID: group id of group that changed
-        @type groupID: L{int}
-        @param addedUIDs: set of new member UIDs added to the group
-        @type addedUIDs: L{set} of L{str}
-        @param removedUIDs: set of old member UIDs removed from the group
-        @type removedUIDs: L{set} of L{str}
-        """
-        yield Delegates.groupChanged(self, groupID, addedUIDs, removedUIDs)
-
-
-    @inlineCallbacks
-    def groupMembers(self, groupID):
-        """
-        The members of the given group as recorded in the db
-        """
-        members = set()
-        memberUIDs = (yield self.groupMemberUIDs(groupID))
-        for uid in memberUIDs:
-            record = (yield self.directoryService().recordWithUID(uid))
-            if record is not None:
-                members.add(record)
-        returnValue(members)
-
-
-    @inlineCallbacks
-    def groupUIDsFor(self, uid):
-        """
-        Returns the cached set of UIDs for the groups this given uid is
-        a member of.
-
-        @param uid: the uid
-        @type uid: C{unicode}
-        @return: the set of group UIDs
-        @rtype: a Deferred which fires with a set() of C{unicode} group UIDs
-        """
-        groups = set()
-        results = (
-            yield self._selectGroupsForQuery.on(
-                self, uid=uid.encode("utf-8")
-            )
-        )
-        for row in results:
-            groups.add(row[0].decode("utf-8"))
-        returnValue(groups)
-
-    # End of Group Members
-
-    # Delegates
-
-
-    @classproperty
-    def _addDelegateQuery(cls):
-        de = schema.DELEGATES
-        return Insert({de.DELEGATOR: Parameter("delegator"),
-                       de.DELEGATE: Parameter("delegate"),
-                       de.READ_WRITE: Parameter("readWrite"),
-                       })
-
-
-    @classproperty
-    def _addDelegateGroupQuery(cls):
-        ds = schema.DELEGATE_GROUPS
-        return Insert({ds.DELEGATOR: Parameter("delegator"),
-                       ds.GROUP_ID: Parameter("groupID"),
-                       ds.READ_WRITE: Parameter("readWrite"),
-                       ds.IS_EXTERNAL: Parameter("isExternal"),
-                       })
-
-
-    @classproperty
-    def _removeDelegateQuery(cls):
-        de = schema.DELEGATES
-        return Delete(
-            From=de,
-            Where=(
-                de.DELEGATOR == Parameter("delegator")
-            ).And(
-                de.DELEGATE == Parameter("delegate")
-            ).And(
-                de.READ_WRITE == Parameter("readWrite")
-            )
-        )
-
-
-    @classproperty
-    def _removeDelegatesQuery(cls):
-        de = schema.DELEGATES
-        return Delete(
-            From=de,
-            Where=(
-                de.DELEGATOR == Parameter("delegator")
-            ).And(
-                de.READ_WRITE == Parameter("readWrite")
-            )
-        )
-
-
-    @classproperty
-    def _removeDelegateGroupQuery(cls):
-        ds = schema.DELEGATE_GROUPS
-        return Delete(
-            From=ds,
-            Where=(
-                ds.DELEGATOR == Parameter("delegator")
-            ).And(
-                ds.GROUP_ID == Parameter("groupID")
-            ).And(
-                ds.READ_WRITE == Parameter("readWrite")
-            )
-        )
-
-
-    @classproperty
-    def _removeDelegateGroupsQuery(cls):
-        ds = schema.DELEGATE_GROUPS
-        return Delete(
-            From=ds,
-            Where=(
-                ds.DELEGATOR == Parameter("delegator")
-            ).And(
-                ds.READ_WRITE == Parameter("readWrite")
-            )
-        )
-
-
-    @classproperty
-    def _selectDelegatesQuery(cls):
-        de = schema.DELEGATES
-        return Select(
-            [de.DELEGATE],
-            From=de,
-            Where=(
-                de.DELEGATOR == Parameter("delegator")
-            ).And(
-                de.READ_WRITE == Parameter("readWrite")
-            )
-        )
-
-
-    @classproperty
-    def _selectDelegatorsToGroupQuery(cls):
-        dg = schema.DELEGATE_GROUPS
-        return Select(
-            [dg.DELEGATOR],
-            From=dg,
-            Where=(
-                dg.GROUP_ID == Parameter("delegateGroup")
-            ).And(
-                dg.READ_WRITE == Parameter("readWrite")
-            )
-        )
-
-
-    @classproperty
-    def _selectDelegateGroupsQuery(cls):
-        ds = schema.DELEGATE_GROUPS
-        gr = schema.GROUPS
-
-        return Select(
-            [gr.GROUP_UID],
-            From=gr,
-            Where=(
-                gr.GROUP_ID.In(
-                    Select(
-                        [ds.GROUP_ID],
-                        From=ds,
-                        Where=(
-                            ds.DELEGATOR == Parameter("delegator")
-                        ).And(
-                            ds.READ_WRITE == Parameter("readWrite")
-                        )
-                    )
-                )
-            )
-        )
-
-
-    @classproperty
-    def _selectDirectDelegatorsQuery(cls):
-        de = schema.DELEGATES
-        return Select(
-            [de.DELEGATOR],
-            From=de,
-            Where=(
-                de.DELEGATE == Parameter("delegate")
-            ).And(
-                de.READ_WRITE == Parameter("readWrite")
-            )
-        )
-
-
-    @classproperty
-    def _selectIndirectDelegatorsQuery(cls):
-        dg = schema.DELEGATE_GROUPS
-        gm = schema.GROUP_MEMBERSHIP
-
-        return Select(
-            [dg.DELEGATOR],
-            From=dg,
-            Where=(
-                dg.GROUP_ID.In(
-                    Select(
-                        [gm.GROUP_ID],
-                        From=gm,
-                        Where=(gm.MEMBER_UID == Parameter("delegate"))
-                    )
-                ).And(
-                    dg.READ_WRITE == Parameter("readWrite")
-                )
-            )
-        )
-
-
-    @classproperty
-    def _selectIndirectDelegatesQuery(cls):
-        dg = schema.DELEGATE_GROUPS
-        gm = schema.GROUP_MEMBERSHIP
-
-        return Select(
-            [gm.MEMBER_UID],
-            From=gm,
-            Where=(
-                gm.GROUP_ID.In(
-                    Select(
-                        [dg.GROUP_ID],
-                        From=dg,
-                        Where=(dg.DELEGATOR == Parameter("delegator")).And(
-                            dg.READ_WRITE == Parameter("readWrite"))
-                    )
-                )
-            )
-        )
-
-
-    @classproperty
-    def _selectExternalDelegateGroupsQuery(cls):
-        edg = schema.EXTERNAL_DELEGATE_GROUPS
-        return Select(
-            [edg.DELEGATOR, edg.GROUP_UID_READ, edg.GROUP_UID_WRITE],
-            From=edg
-        )
-
-
-    @classproperty
-    def _removeExternalDelegateGroupsPairQuery(cls):
-        edg = schema.EXTERNAL_DELEGATE_GROUPS
-        return Delete(
-            From=edg,
-            Where=(
-                edg.DELEGATOR == Parameter("delegator")
-            )
-        )
-
-
-    @classproperty
-    def _storeExternalDelegateGroupsPairQuery(cls):
-        edg = schema.EXTERNAL_DELEGATE_GROUPS
-        return Insert(
-            {
-                edg.DELEGATOR: Parameter("delegator"),
-                edg.GROUP_UID_READ: Parameter("readDelegate"),
-                edg.GROUP_UID_WRITE: Parameter("writeDelegate"),
-            }
-        )
-
-
-    @classproperty
-    def _removeExternalDelegateGroupsQuery(cls):
-        ds = schema.DELEGATE_GROUPS
-        return Delete(
-            From=ds,
-            Where=(
-                ds.DELEGATOR == Parameter("delegator")
-            ).And(
-                ds.IS_EXTERNAL == 1
-            )
-        )
-
-
-    @inlineCallbacks
-    def addDelegate(self, delegator, delegate, readWrite):
-        """
-        Adds a row to the DELEGATES table.  The delegate should not be a
-        group.  To delegate to a group, call addDelegateGroup() instead.
-
-        @param delegator: the UID of the delegator
-        @type delegator: C{unicode}
-        @param delegate: the UID of the delegate
-        @type delegate: C{unicode}
-        @param readWrite: grant read and write access if True, otherwise
-            read-only access
-        @type readWrite: C{boolean}
-        """
-
-        def _addDelegate(subtxn):
-            return self._addDelegateQuery.on(
-                subtxn,
-                delegator=delegator.encode("utf-8"),
-                delegate=delegate.encode("utf-8"),
-                readWrite=1 if readWrite else 0
-            )
-
-        try:
-            yield self.subtransaction(_addDelegate, retries=0, failureOK=True)
-        except AllRetriesFailed:
-            pass
-
-
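Note the idiom addDelegate() uses (addDelegateGroup() below does the same): the INSERT runs in a subtransaction with retries=0 and failureOK=True, so an attempt to re-add an existing assignment fails quietly instead of aborting the caller's transaction. A rough sketch of that pattern, assuming a transaction exposing subtransaction() and the AllRetriesFailed exception already imported in this module:

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def insert_ignoring_duplicates(txn, query, **kwds):
        # One attempt only; a failure (typically a duplicate row) is swallowed.
        def _doInsert(subtxn):
            return query.on(subtxn, **kwds)
        try:
            yield txn.subtransaction(_doInsert, retries=0, failureOK=True)
        except AllRetriesFailed:
            pass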
-    @inlineCallbacks
-    def addDelegateGroup(self, delegator, delegateGroupID, readWrite,
-                         isExternal=False):
-        """
-        Adds a row to the DELEGATE_GROUPS table.  The delegate should be a
-        group.  To delegate to a person, call addDelegate() instead.
-
-        @param delegator: the UID of the delegator
-        @type delegator: C{unicode}
-        @param delegateGroupID: the GROUP_ID of the delegate group
-        @type delegateGroupID: C{int}
-        @param readWrite: grant read and write access if True, otherwise
-            read-only access
-        @type readWrite: C{boolean}
-        """
-
-        def _addDelegateGroup(subtxn):
-            return self._addDelegateGroupQuery.on(
-                subtxn,
-                delegator=delegator.encode("utf-8"),
-                groupID=delegateGroupID,
-                readWrite=1 if readWrite else 0,
-                isExternal=1 if isExternal else 0
-            )
-
-        try:
-            yield self.subtransaction(_addDelegateGroup, retries=0, failureOK=True)
-        except AllRetriesFailed:
-            pass
-
-
-    def removeDelegate(self, delegator, delegate, readWrite):
-        """
-        Removes a row from the DELEGATES table.  The delegate should not be a
-        group.  To remove a delegate group, call removeDelegateGroup() instead.
-
-        @param delegator: the UID of the delegator
-        @type delegator: C{unicode}
-        @param delegate: the UID of the delegate
-        @type delegate: C{unicode}
-        @param readWrite: remove read and write access if True, otherwise
-            read-only access
-        @type readWrite: C{boolean}
-        """
-        return self._removeDelegateQuery.on(
-            self,
-            delegator=delegator.encode("utf-8"),
-            delegate=delegate.encode("utf-8"),
-            readWrite=1 if readWrite else 0
-        )
-
-
-    def removeDelegates(self, delegator, readWrite):
-        """
-        Removes all rows for this delegator/readWrite combination from the
-        DELEGATES table.
-
-        @param delegator: the UID of the delegator
-        @type delegator: C{unicode}
-        @param readWrite: remove read and write access if True, otherwise
-            read-only access
-        @type readWrite: C{boolean}
-        """
-        return self._removeDelegatesQuery.on(
-            self,
-            delegator=delegator.encode("utf-8"),
-            readWrite=1 if readWrite else 0
-        )
-
-
-    def removeDelegateGroup(self, delegator, delegateGroupID, readWrite):
-        """
-        Removes a row from the DELEGATE_GROUPS table.  The delegate should be a
-        group.  To remove a delegate person, call removeDelegate() instead.
-
-        @param delegator: the UID of the delegator
-        @type delegator: C{unicode}
-        @param delegateGroupID: the GROUP_ID of the delegate group
-        @type delegateGroupID: C{int}
-        @param readWrite: remove read and write access if True, otherwise
-            read-only access
-        @type readWrite: C{boolean}
-        """
-        return self._removeDelegateGroupQuery.on(
-            self,
-            delegator=delegator.encode("utf-8"),
-            groupID=delegateGroupID,
-            readWrite=1 if readWrite else 0
-        )
-
-
-    def removeDelegateGroups(self, delegator, readWrite):
-        """
-        Removes all rows for this delegator/readWrite combination from the
-        DELEGATE_GROUPS table.
-
-        @param delegator: the UID of the delegator
-        @type delegator: C{unicode}
-        @param readWrite: remove read and write access if True, otherwise
-            read-only access
-        @type readWrite: C{boolean}
-        """
-        return self._removeDelegateGroupsQuery.on(
-            self,
-            delegator=delegator.encode("utf-8"),
-            readWrite=1 if readWrite else 0
-        )
-
-
-    @inlineCallbacks
-    def delegates(self, delegator, readWrite, expanded=False):
-        """
-        Returns the UIDs of all delegates for the given delegator.  If
-        expanded is False, only the direct delegates (users and groups)
-        are returned.  If expanded is True, the expanded membership is
-        returned, not including the groups themselves.
-
-        @param delegator: the UID of the delegator
-        @type delegator: C{unicode}
-        @param readWrite: the access-type to check for; read and write
-            access if True, otherwise read-only access
-        @type readWrite: C{boolean}
-        @returns: the UIDs of the delegates (for the specified access
-            type)
-        @rtype: a Deferred resulting in a set
-        """
-        delegates = set()
-        delegatorU = delegator.encode("utf-8")
-
-        # First get the direct delegates
-        results = (
-            yield self._selectDelegatesQuery.on(
-                self,
-                delegator=delegatorU,
-                readWrite=1 if readWrite else 0
-            )
-        )
-        delegates.update([row[0].decode("utf-8") for row in results])
-
-        if expanded:
-            # Get those who are in groups which have been delegated to
-            results = (
-                yield self._selectIndirectDelegatesQuery.on(
-                    self,
-                    delegator=delegatorU,
-                    readWrite=1 if readWrite else 0
-                )
-            )
-            # Skip the delegator if they are in one of the groups
-            delegates.update([row[0].decode("utf-8") for row in results if row[0] != delegatorU])
-
-        else:
-            # Get the directly-delegated-to groups
-            results = (
-                yield self._selectDelegateGroupsQuery.on(
-                    self,
-                    delegator=delegatorU,
-                    readWrite=1 if readWrite else 0
-                )
-            )
-            delegates.update([row[0].decode("utf-8") for row in results])
-
-        returnValue(delegates)
-
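A usage sketch of the two shapes delegates() can return, assuming an open store transaction txn that mixes in these delegate methods:

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def read_write_delegates(txn, delegator_uid):
        # Direct assignments: individual delegate UIDs plus delegated-to group UIDs.
        direct = yield txn.delegates(delegator_uid, True, expanded=False)
        # Expanded: members of the delegated-to groups, the groups themselves omitted.
        expanded = yield txn.delegates(delegator_uid, True, expanded=True)
        returnValue((direct, expanded))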
-
-    @inlineCallbacks
-    def delegators(self, delegate, readWrite):
-        """
-        Returns the UIDs of all delegators which have granted access to
-        the given delegate, either directly or indirectly via groups.
-
-        @param delegate: the UID of the delegate
-        @type delegate: C{unicode}
-        @param readWrite: the access-type to check for; read and write
-            access if True, otherwise read-only access
-        @type readWrite: C{boolean}
-        @returns: the UIDs of the delegators (for the specified access
-            type)
-        @rtype: a Deferred resulting in a set
-        """
-        delegators = set()
-        delegateU = delegate.encode("utf-8")
-
-        # First get the direct delegators
-        results = (
-            yield self._selectDirectDelegatorsQuery.on(
-                self,
-                delegate=delegateU,
-                readWrite=1 if readWrite else 0
-            )
-        )
-        delegators.update([row[0].decode("utf-8") for row in results])
-
-        # Finally get those who have delegated to groups the delegate
-        # is a member of
-        results = (
-            yield self._selectIndirectDelegatorsQuery.on(
-                self,
-                delegate=delegateU,
-                readWrite=1 if readWrite else 0
-            )
-        )
-        # Skip the delegator if they are in one of the groups
-        delegators.update([row[0].decode("utf-8") for row in results if row[0] != delegateU])
-
-        returnValue(delegators)
-
-
-    @inlineCallbacks
-    def delegatorsToGroup(self, delegateGroupID, readWrite):
-        """
-        Return the UIDs of those who have delegated to the given group with the
-        given access level.
-
-        @param delegateGroupID: the group ID of the delegate group
-        @type delegateGroupID: C{int}
-        @param readWrite: the access-type to check for; read and write
-            access if True, otherwise read-only access
-        @type readWrite: C{boolean}
-        @returns: the UIDs of the delegators (for the specified access
-            type)
-        @rtype: a Deferred resulting in a set
-
-        """
-        results = (
-            yield self._selectDelegatorsToGroupQuery.on(
-                self,
-                delegateGroup=delegateGroupID,
-                readWrite=1 if readWrite else 0
-            )
-        )
-        delegators = set([row[0].decode("utf-8") for row in results])
-        returnValue(delegators)
-
-
-    @inlineCallbacks
-    def allGroupDelegates(self):
-        """
-        Return the UIDs of all groups which have been delegated to.  Useful
-        for obtaining the set of groups which need to be synchronized from
-        the directory.
-
-        @returns: the UIDs of all delegated-to groups
-        @rtype: a Deferred resulting in a set
-        """
-        gr = schema.GROUPS
-        dg = schema.DELEGATE_GROUPS
-
-        results = (yield Select(
-            [gr.GROUP_UID],
-            From=gr,
-            Where=(gr.GROUP_ID.In(Select([dg.GROUP_ID], From=dg, Where=None)))
-        ).on(self))
-        delegates = set()
-        for row in results:
-            delegates.add(row[0].decode("utf-8"))
-
-        returnValue(delegates)
-
-
-    @inlineCallbacks
-    def externalDelegates(self):
-        """
-        Returns a dictionary mapping delegator UIDs to (read-group, write-group)
-        tuples, including only those assignments that originated from the
-        directory.
-
-        @returns: dictionary mapping delegator uid to (readDelegateUID,
-            writeDelegateUID) tuples
-        @rtype: a Deferred resulting in a dictionary
-        """
-        delegates = {}
-
-        # Get the externally managed delegates (which are all groups)
-        results = (yield self._selectExternalDelegateGroupsQuery.on(self))
-        for delegator, readDelegateUID, writeDelegateUID in results:
-            delegates[delegator.encode("utf-8")] = (
-                readDelegateUID.encode("utf-8") if readDelegateUID else None,
-                writeDelegateUID.encode("utf-8") if writeDelegateUID else None
-            )
-
-        returnValue(delegates)
-
-
-    @inlineCallbacks
-    def assignExternalDelegates(
-        self, delegator, readDelegateGroupID, writeDelegateGroupID,
-        readDelegateUID, writeDelegateUID
-    ):
-        """
-        Update the external delegate group table so we can quickly identify
-        diffs next time, and update the delegate group table itself.
-
-        @param delegator: the UID of the delegator
-        @type delegator: C{UUID}
-        """
-
-        # Delete existing external assignments for the delegator
-        yield self._removeExternalDelegateGroupsQuery.on(
-            self,
-            delegator=str(delegator)
-        )
-
-        # Remove from the external comparison table
-        yield self._removeExternalDelegateGroupsPairQuery.on(
-            self,
-            delegator=str(delegator)
-        )
-
-        # Store new assignments in the external comparison table
-        if readDelegateUID or writeDelegateUID:
-            readDelegateForDB = (
-                readDelegateUID.encode("utf-8") if readDelegateUID else ""
-            )
-            writeDelegateForDB = (
-                writeDelegateUID.encode("utf-8") if writeDelegateUID else ""
-            )
-            yield self._storeExternalDelegateGroupsPairQuery.on(
-                self,
-                delegator=str(delegator),
-                readDelegate=readDelegateForDB,
-                writeDelegate=writeDelegateForDB
-            )
-
-        # Apply new assignments
-        if readDelegateGroupID is not None:
-            yield self.addDelegateGroup(
-                delegator, readDelegateGroupID, False, isExternal=True
-            )
-        if writeDelegateGroupID is not None:
-            yield self.addDelegateGroup(
-                delegator, writeDelegateGroupID, True, isExternal=True
-            )
-
-
-    # End of Delegates
-
-
     def preCommit(self, operation):
         """
         Run things before C{commit}.  (Note: only provided by SQL
@@ -2307,6 +1045,7 @@
         )
 
 
+    @inlineCallbacks
     def eventsOlderThan(self, cutoff, batchSize=None):
         """
         Return up to the oldest batchSize events which exist completely earlier
@@ -2323,8 +1062,9 @@
             if cutoff < truncateLowerLimit:
                 raise ValueError("Cannot query events older than %s" % (truncateLowerLimit.getText(),))
 
-        kwds = {"CutOff": pyCalendarTodatetime(cutoff)}
-        return self._oldEventsBase(batchSize).on(self, **kwds)
+        kwds = {"CutOff": pyCalendarToSQLTimestamp(cutoff)}
+        rows = yield self._oldEventsBase(batchSize).on(self, **kwds)
+        returnValue([[row[0], row[1], row[2], parseSQLTimestamp(row[3])] for row in rows])
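This hunk also changes how timestamps cross the DAL boundary: the cutoff goes in as a SQL timestamp and the last column of each returned row is parsed back into a datetime. A rough illustration of that parse step using only the standard library (the store's own parseSQLTimestamp() is the real implementation):

    from datetime import datetime

    def parse_sql_timestamp(value):
        # Assumed canonical "YYYY-MM-DD HH:MM:SS" shape of a SQL timestamp string.
        return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")

    assert parse_sql_timestamp("2015-03-10 13:42:34") == datetime(2015, 3, 10, 13, 42, 34)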
 
 
     @inlineCallbacks
@@ -2436,7 +1176,7 @@
 
         Returns a deferred to a list of (calendar_home_owner_uid, quota used, total old size, total old count) tuples.
         """
-        kwds = {"CutOff": pyCalendarTodatetime(cutoff)}
+        kwds = {"CutOff": pyCalendarToSQLTimestamp(cutoff)}
         if uuid:
             kwds["uuid"] = uuid
 
@@ -2478,7 +1218,7 @@
         # TODO: see if there is a better way to import Attachment
         from txdav.caldav.datastore.sql import DropBoxAttachment
 
-        kwds = {"CutOff": pyCalendarTodatetime(cutoff)}
+        kwds = {"CutOff": pyCalendarToSQLTimestamp(cutoff)}
         if uuid:
             kwds["uuid"] = uuid
 
@@ -2523,7 +1263,7 @@
 
         Returns a deferred to a list of (calendar_home_owner_uid, quota used, total old size, total old count) tuples.
         """
-        kwds = {"CutOff": pyCalendarTodatetime(cutoff)}
+        kwds = {"CutOff": pyCalendarToSQLTimestamp(cutoff)}
         if uuid:
             kwds["uuid"] = uuid
 
@@ -2566,7 +1306,7 @@
         # TODO: see if there is a better way to import Attachment
         from txdav.caldav.datastore.sql import ManagedAttachment
 
-        kwds = {"CutOff": pyCalendarTodatetime(cutoff)}
+        kwds = {"CutOff": pyCalendarToSQLTimestamp(cutoff)}
         if uuid:
             kwds["uuid"] = uuid
 
@@ -2809,228 +1549,59 @@
 
 
 
-class _EmptyCacher(object):
-
-    def set(self, key, value):
-        return succeed(True)
-
-
-    def get(self, key, withIdentifier=False):
-        return succeed(None)
-
-
-    def delete(self, key):
-        return succeed(True)
-
-
-
-class SharingHomeMixIn(object):
-    """
-    Common class for CommonHome to implement sharing operations
-    """
-
-    @inlineCallbacks
-    def acceptShare(self, shareUID, summary=None):
-        """
-        This share is being accepted.
-        """
-
-        shareeView = yield self.anyObjectWithShareUID(shareUID)
-        if shareeView is not None:
-            yield shareeView.acceptShare(summary)
-
-        returnValue(shareeView)
-
-
-    @inlineCallbacks
-    def declineShare(self, shareUID):
-        """
-        This share is being declined.
-        """
-
-        shareeView = yield self.anyObjectWithShareUID(shareUID)
-        if shareeView is not None:
-            yield shareeView.declineShare()
-
-        returnValue(shareeView is not None)
-
-
-    #
-    # External (cross-pod) sharing - entry point is the sharee's home collection.
-    #
-    @inlineCallbacks
-    def processExternalInvite(
-        self, ownerUID, ownerRID, ownerName, shareUID, bindMode, summary,
-        copy_invite_properties, supported_components=None
-    ):
-        """
-        External invite received.
-        """
-
-        # Get the owner home - create external one if not present
-        ownerHome = yield self._txn.homeWithUID(
-            self._homeType, ownerUID, create=True
-        )
-        if ownerHome is None or not ownerHome.external():
-            raise ExternalShareFailed("Invalid owner UID: {}".format(ownerUID))
-
-        # Try to find owner calendar via its external id
-        ownerView = yield ownerHome.childWithExternalID(ownerRID)
-        if ownerView is None:
-            try:
-                ownerView = yield ownerHome.createChildWithName(
-                    ownerName, externalID=ownerRID
-                )
-            except HomeChildNameAlreadyExistsError:
-                # This is odd - it means we possibly have a left over sharer
-                # collection which the sharer likely removed and re-created
-                # with the same name but now it has a different externalID and
-                # is not found by the initial query. What we do is check to see
-                # whether any shares still reference the old ID - if they do we
-                # are hosed. If not, we can remove the old item and create a new one.
-                oldOwnerView = yield ownerHome.childWithName(ownerName)
-                invites = yield oldOwnerView.sharingInvites()
-                if len(invites) != 0:
-                    log.error(
-                        "External invite collection name is present with a "
-                        "different externalID and still has shares"
-                    )
-                    raise
-                log.error(
-                    "External invite collection name is present with a "
-                    "different externalID - trying to fix"
-                )
-                yield ownerHome.removeExternalChild(oldOwnerView)
-                ownerView = yield ownerHome.createChildWithName(
-                    ownerName, externalID=ownerRID
-                )
-
-            if (
-                supported_components is not None and
-                hasattr(ownerView, "setSupportedComponents")
-            ):
-                yield ownerView.setSupportedComponents(supported_components)
-
-        # Now carry out the share operation
-        if bindMode == _BIND_MODE_DIRECT:
-            shareeView = yield ownerView.directShareWithUser(
-                self.uid(), shareName=shareUID
-            )
-        else:
-            shareeView = yield ownerView.inviteUIDToShare(
-                self.uid(), bindMode, summary, shareName=shareUID
-            )
-
-        shareeView.setInviteCopyProperties(copy_invite_properties)
-
-
-    @inlineCallbacks
-    def processExternalUninvite(self, ownerUID, ownerRID, shareUID):
-        """
-        External uninvite received.
-        """
-
-        # Get the owner home
-        ownerHome = yield self._txn.homeWithUID(self._homeType, ownerUID)
-        if ownerHome is None or not ownerHome.external():
-            raise ExternalShareFailed("Invalid owner UID: {}".format(ownerUID))
-
-        # Try to find owner calendar via its external id
-        ownerView = yield ownerHome.childWithExternalID(ownerRID)
-        if ownerView is None:
-            raise ExternalShareFailed("Invalid share ID: {}".format(shareUID))
-
-        # Now carry out the share operation
-        yield ownerView.uninviteUIDFromShare(self.uid())
-
-        # See if there are any references to the external share. If not,
-        # remove it
-        invites = yield ownerView.sharingInvites()
-        if len(invites) == 0:
-            yield ownerHome.removeExternalChild(ownerView)
-
-
-    @inlineCallbacks
-    def processExternalReply(
-        self, ownerUID, shareeUID, shareUID, bindStatus, summary=None
-    ):
-        """
-        External share reply received.
-        """
-
-        # Make sure the shareeUID and shareUID match
-
-        # Get the owner home - create external one if not present
-        shareeHome = yield self._txn.homeWithUID(self._homeType, shareeUID)
-        if shareeHome is None or not shareeHome.external():
-            raise ExternalShareFailed(
-                "Invalid sharee UID: {}".format(shareeUID)
-            )
-
-        # Try to find owner calendar via its external id
-        shareeView = yield shareeHome.anyObjectWithShareUID(shareUID)
-        if shareeView is None:
-            raise ExternalShareFailed("Invalid share UID: {}".format(shareUID))
-
-        # Now carry out the share operation
-        if bindStatus == _BIND_STATUS_ACCEPTED:
-            yield shareeHome.acceptShare(shareUID, summary)
-        elif bindStatus == _BIND_STATUS_DECLINED:
-            if shareeView.direct():
-                yield shareeView.deleteShare()
-            else:
-                yield shareeHome.declineShare(shareUID)
-
-
-
 class CommonHome(SharingHomeMixIn):
     log = Logger()
 
     # All these need to be initialized by derived classes for each store type
     _homeType = None
-    _homeTable = None
-    _homeMetaDataTable = None
+    _homeSchema = None
+    _homeMetaDataSchema = None
+
     _externalClass = None
     _childClass = None
     _trashClass = None
-    _childTable = None
+
+    _bindSchema = None
+    _revisionsSchema = None
+    _objectSchema = None
+
     _notifierPrefix = None
 
     _dataVersionKey = None
     _dataVersionValue = None
 
-    _cacher = None  # Initialize in derived classes
-
     @classmethod
-    @inlineCallbacks
-    def makeClass(cls, transaction, ownerUID, no_cache=False, authzUID=None):
+    def makeClass(cls, transaction, homeData, authzUID=None):
         """
         Build the actual home class taking into account the possibility that we might need to
         switch in the external version of the class.
 
         @param transaction: transaction
         @type transaction: L{CommonStoreTransaction}
-        @param ownerUID: owner UID of home to load
-        @type ownerUID: C{str}
-        @param no_cache: should cached query be used
-        @type no_cache: C{bool}
+        @param homeData: home table column data
+        @type homeData: C{list}
         """
-        home = cls(transaction, ownerUID, authzUID=authzUID)
-        actualHome = yield home.initFromStore(no_cache)
-        returnValue(actualHome)
 
+        status = homeData[cls.homeColumns().index(cls._homeSchema.STATUS)]
+        if status == _HOME_STATUS_EXTERNAL:
+            home = cls._externalClass(transaction, homeData)
+        else:
+            home = cls(transaction, homeData, authzUID=authzUID)
+        return home.initFromStore()
 
-    def __init__(self, transaction, ownerUID, authzUID=None):
+
+    def __init__(self, transaction, homeData, authzUID=None):
         self._txn = transaction
-        self._ownerUID = ownerUID
+
+        for attr, value in zip(self.homeAttributes(), homeData):
+            setattr(self, attr, value)
+
         self._authzUID = authzUID
         if self._authzUID is None:
             if self._txn._authz_uid is not None:
                 self._authzUID = self._txn._authz_uid
             else:
                 self._authzUID = self._ownerUID
-        self._resourceID = None
-        self._status = _HOME_STATUS_NORMAL
         self._dataVersion = None
         self._childrenLoaded = False
         self._children = defaultdict(dict)
@@ -3039,15 +1610,13 @@
         self._created = None
         self._modified = None
         self._syncTokenRevision = None
-        if transaction._disableCache:
-            self._cacher = _EmptyCacher()
 
         # This is used to track whether the originating request is from the store associated
         # by the transaction, or from a remote store. We need to be able to distinguish store
         # objects that are locally hosted (_HOME_STATUS_NORMAL) or remotely hosted
         # (_HOME_STATUS_EXTERNAL). For the later we need to know whether the object is being
         # accessed from the local store (in which case requests for child objects etc will be
-        # directed at a remote store) or whether it is being accessed as the tresult of a remote
+        # directed at a remote store) or whether it is being accessed as the result of a remote
         # request (in which case requests for child objects etc will be directed at the local store).
         self._internalRequest = True
 
@@ -3072,14 +1641,16 @@
         return Select(
             cls.homeColumns(),
             From=home,
-            Where=home.OWNER_UID == Parameter("ownerUID")
+            Where=(home.OWNER_UID == Parameter("ownerUID")).And(
+                home.STATUS == Parameter("status")
+            )
         )
 
 
     @classproperty
     def _ownerFromResourceID(cls):
         home = cls._homeSchema
-        return Select([home.OWNER_UID],
+        return Select([home.OWNER_UID, home.STATUS],
                       From=home,
                       Where=home.RESOURCE_ID == Parameter("resourceID"))
 
@@ -3155,41 +1726,22 @@
 
 
     @inlineCallbacks
-    def initFromStore(self, no_cache=False):
+    def initFromStore(self):
         """
         Initialize this object from the store. We read in and cache all the
         extra meta-data from the DB to avoid having to do DB queries for those
         individually later.
         """
-        result = yield self._cacher.get(self._ownerUID)
-        if result is None:
-            result = yield self._homeColumnsFromOwnerQuery.on(self._txn, ownerUID=self._ownerUID)
-            if result:
-                result = result[0]
-                if not no_cache:
-                    yield self._cacher.set(self._ownerUID, result)
 
-        if result:
-            for attr, value in zip(self.homeAttributes(), result):
-                setattr(self, attr, value)
+        yield self.initMetaDataFromStore()
+        yield self._loadPropertyStore()
 
-            # STOP! If the status is external we need to convert this object to a CommonHomeExternal class which will
-            # have the right behavior for non-hosted external users.
-            if self._status == _HOME_STATUS_EXTERNAL:
-                actualHome = self._externalClass(self._txn, self._ownerUID, self._resourceID)
-            else:
-                actualHome = self
-            yield actualHome.initMetaDataFromStore()
-            yield actualHome._loadPropertyStore()
+        for factory_type, factory in self._txn._notifierFactories.items():
+            self.addNotifier(factory_type, factory.newNotifier(self))
 
-            for factory_type, factory in self._txn._notifierFactories.items():
-                actualHome.addNotifier(factory_type, factory.newNotifier(actualHome))
+        returnValue(self)
 
-            returnValue(actualHome)
-        else:
-            returnValue(None)
 
-
     @inlineCallbacks
     def initMetaDataFromStore(self):
         """
@@ -3212,8 +1764,33 @@
 
         for attr, value in zip(self.metadataAttributes(), data):
             setattr(self, attr, value)
+        self._created = parseSQLTimestamp(self._created)
+        self._modified = parseSQLTimestamp(self._modified)
 
 
+    def serialize(self):
+        """
+        Create a dictionary mapping metadata attributes so this object can be sent over a cross-pod call
+        and reconstituted at the other end. Note that the other end may have a different schema so
+        the attributes may not match exactly and will need to be processed accordingly.
+        """
+        data = dict([(attr[1:], getattr(self, attr, None)) for attr in self.metadataAttributes()])
+        data["created"] = data["created"].isoformat(" ")
+        data["modified"] = data["modified"].isoformat(" ")
+        return data
+
+
+    def deserialize(self, mapping):
+        """
+        Given a mapping generated by L{serialize}, convert the values to attributes on this object.
+        """
+
+        for attr in self.metadataAttributes():
+            setattr(self, attr, mapping.get(attr[1:]))
+        self._created = parseSQLTimestamp(self._created)
+        self._modified = parseSQLTimestamp(self._modified)
+
+
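serialize() and deserialize() above are intentionally symmetric: attribute names lose their leading underscore on the way out and regain it on the way in, and the created/modified datetimes travel as "YYYY-MM-DD HH:MM:SS" text. A toy round trip of just the timestamp handling, under those assumptions:

    from datetime import datetime

    created = datetime(2015, 3, 10, 13, 42, 34)

    # serialize(): datetime -> text with a space separator
    wire_value = created.isoformat(" ")
    assert wire_value == "2015-03-10 13:42:34"

    # deserialize()/parseSQLTimestamp(): text -> datetime on the receiving pod
    assert datetime.strptime(wire_value, "%Y-%m-%d %H:%M:%S") == created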
     @classmethod
     @inlineCallbacks
     def listHomes(cls, txn):
@@ -3231,16 +1808,93 @@
 
 
     @classmethod
+    def homeWithUID(cls, txn, uid, status=None, create=False, authzUID=None):
+        return cls.homeWith(txn, None, uid, status, create=create, authzUID=authzUID)
+
+
+    @classmethod
+    def homeWithResourceID(cls, txn, rid):
+        return cls.homeWith(txn, rid, None)
+
+
+    @classmethod
     @inlineCallbacks
-    def homeWithUID(cls, txn, uid, create=False, authzUID=None):
+    def homeWith(cls, txn, rid, uid, status=None, create=False, authzUID=None):
         """
-        @param uid: I'm going to assume uid is utf-8 encoded bytes
+        Look up or create a home based on either its resource id or uid. If a status is given,
+        return only the one matching that status. If status is L{None} we look up any regular
+        status type (normal, external or purging). When creating with status L{None} we create
+        one with a status matching the current directory record thisServer() value. The only
+        other status that can be directly created is migrating.
         """
-        homeObject = yield cls.makeClass(txn, uid, authzUID=authzUID)
-        if homeObject is not None:
+
+        # Setup the SQL query and query cacher keys
+        queryCacher = txn._queryCacher
+        cacheKeys = []
+        if rid is not None:
+            query = cls._homeSchema.RESOURCE_ID == rid
+            if queryCacher:
+                cacheKeys.append(queryCacher.keyForHomeWithID(cls._homeType, rid, status))
+        elif uid is not None:
+            query = cls._homeSchema.OWNER_UID == uid
+            if status is not None:
+                query = query.And(cls._homeSchema.STATUS == status)
+                if queryCacher:
+                    cacheKeys.append(queryCacher.keyForHomeWithUID(cls._homeType, uid, status))
+            else:
+                statusSet = (_HOME_STATUS_NORMAL, _HOME_STATUS_EXTERNAL, _HOME_STATUS_PURGING)
+                if txn._allowDisabled:
+                    statusSet += (_HOME_STATUS_DISABLED,)
+                query = query.And(cls._homeSchema.STATUS.In(statusSet))
+                if queryCacher:
+                    for item in statusSet:
+                        cacheKeys.append(queryCacher.keyForHomeWithUID(cls._homeType, uid, item))
+        else:
+            raise AssertionError("One of rid or uid must be set")
+
+        # Try to fetch a result from the query cache first
+        for cacheKey in cacheKeys:
+            result = (yield queryCacher.get(cacheKey))
+            if result is not None:
+                break
+        else:
+            result = None
+
+        # If nothing in the cache, do the SQL query and cache the result
+        if result is None:
+            results = yield Select(
+                cls.homeColumns(),
+                From=cls._homeSchema,
+                Where=query,
+            ).on(txn)
+
+            if len(results) > 1:
+                # Pick the best one in order: normal, disabled and external
+                byStatus = dict([(item[cls.homeColumns().index(cls._homeSchema.STATUS)], item) for item in results])
+                result = byStatus.get(_HOME_STATUS_NORMAL)
+                if result is None:
+                    result = byStatus.get(_HOME_STATUS_DISABLED)
+                if result is None:
+                    result = byStatus.get(_HOME_STATUS_EXTERNAL)
+            elif results:
+                result = results[0]
+            else:
+                result = None
+
+            if result and queryCacher:
+                if rid is not None:
+                    cacheKey = cacheKeys[0]
+                elif uid is not None:
+                    cacheKey = queryCacher.keyForHomeWithUID(cls._homeType, uid, result[cls.homeColumns().index(cls._homeSchema.STATUS)])
+                yield queryCacher.set(cacheKey, result)
+
+        if result:
+            # Return object that already exists in the store
+            homeObject = yield cls.makeClass(txn, result, authzUID=authzUID)
             returnValue(homeObject)
         else:
-            if not create:
+            # Can only create when uid is specified
+            if not create or uid is None:
                 returnValue(None)
 
             # Determine if the user is local or external
@@ -3248,8 +1902,18 @@
             if record is None:
                 raise DirectoryRecordNotFoundError("Cannot create home for UID since no directory record exists: {}".format(uid))
 
-            state = _HOME_STATUS_NORMAL if record.thisServer() else _HOME_STATUS_EXTERNAL
+            if status is None:
+                createStatus = _HOME_STATUS_NORMAL if record.thisServer() else _HOME_STATUS_EXTERNAL
+            elif status == _HOME_STATUS_MIGRATING:
+                if record.thisServer():
+                    raise RecordNotAllowedError("Cannot migrate a user data for a user already hosted on this server")
+                createStatus = status
+            elif status in (_HOME_STATUS_NORMAL, _HOME_STATUS_EXTERNAL,):
+                createStatus = status
+            else:
+                raise RecordNotAllowedError("Cannot create home with status {}: {}".format(status, uid))
 
+
             # Use savepoint so we can do a partial rollback if there is a race condition
             # where this row has already been inserted
             savepoint = SavepointAction("homeWithUID")
@@ -3261,7 +1925,7 @@
                 resourceid = (yield Insert(
                     {
                         cls._homeSchema.OWNER_UID: uid,
-                        cls._homeSchema.STATUS: state,
+                        cls._homeSchema.STATUS: createStatus,
                         cls._homeSchema.DATAVERSION: cls._dataVersionValue,
                     },
                     Return=cls._homeSchema.RESOURCE_ID
@@ -3271,8 +1935,13 @@
                 yield savepoint.rollback(txn)
 
                 # Retry the query - row may exist now, if not re-raise
-                homeObject = yield cls.makeClass(txn, uid, authzUID=authzUID)
-                if homeObject:
+                results = yield Select(
+                    cls.homeColumns(),
+                    From=cls._homeSchema,
+                    Where=query,
+                ).on(txn)
+                if results:
+                    homeObject = yield cls.makeClass(txn, results[0], authzUID=authzUID)
                     returnValue(homeObject)
                 else:
                     raise
@@ -3280,27 +1949,27 @@
                 yield savepoint.release(txn)
 
                 # Note that we must not cache the owner_uid->resource_id
-                # mapping in _cacher when creating as we don't want that to appear
+                # mapping in the query cacher when creating as we don't want that to appear
                 # until AFTER the commit
-                home = yield cls.makeClass(txn, uid, no_cache=True, authzUID=authzUID)
-                yield home.createdHome()
-                returnValue(home)
+                results = yield Select(
+                    cls.homeColumns(),
+                    From=cls._homeSchema,
+                    Where=cls._homeSchema.RESOURCE_ID == resourceid,
+                ).on(txn)
+                homeObject = yield cls.makeClass(txn, results[0], authzUID=authzUID)
+                if homeObject.normal():
+                    yield homeObject.createdHome()
+                returnValue(homeObject)
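With the reworked lookup, callers generally go through homeWithUID() and either pin an explicit status or let the query fall back over the regular status set; a hedged usage sketch (homeClass standing in for any concrete CommonHome subclass):

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def ensure_home(homeClass, txn, uid):
        # Searches the normal/external/purging statuses (plus disabled when the
        # transaction allows it); create=True makes a home whose status matches
        # the directory record when none exists yet.
        home = yield homeClass.homeWithUID(txn, uid, create=True)
        returnValue(home)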
 
 
-    @classmethod
-    @inlineCallbacks
-    def homeUIDWithResourceID(cls, txn, rid):
-        rows = (yield cls._ownerFromResourceID.on(txn, resourceID=rid))
-        if rows:
-            returnValue(rows[0][0])
-        else:
-            returnValue(None)
-
-
     def __repr__(self):
         return "<%s: %s, %s>" % (self.__class__.__name__, self._resourceID, self._ownerUID)
 
 
+    def cacheKey(self):
+        return "{}-{}".format(self._status, self._ownerUID)
+
+
     def id(self):
         """
         Retrieve the store identifier for this home.
@@ -3329,6 +1998,19 @@
         return self._authzUID
 
 
+    def status(self):
+        return self._status
+
+
+    def normal(self):
+        """
+        Is this an normal (internal) home.
+
+        @return: a L{bool}.
+        """
+        return self._status == _HOME_STATUS_NORMAL
+
+
     def external(self):
         """
         Is this an external home.
@@ -3358,6 +2040,15 @@
         return self._status == _HOME_STATUS_PURGING
 
 
+    def migrating(self):
+        """
+        Is this an external home.
+
+        @return: a string.
+        """
+        return self._status == _HOME_STATUS_MIGRATING
+
+
     def purge(self):
         """
         Mark this home as being purged.
@@ -3365,6 +2056,13 @@
         return self.setStatus(_HOME_STATUS_PURGING)
 
 
+    def migrate(self):
+        """
+        Mark this home as being migrated.
+        """
+        return self.setStatus(_HOME_STATUS_MIGRATING)
+
+
     @inlineCallbacks
     def setStatus(self, newStatus):
         """
@@ -3376,10 +2074,67 @@
                 {self._homeSchema.STATUS: newStatus},
                 Where=(self._homeSchema.RESOURCE_ID == self._resourceID),
             ).on(self._txn)
+            if self._txn._queryCacher:
+                yield self._txn._queryCacher.delete(self._txn._queryCacher.keyForHomeWithUID(
+                    self._homeType,
+                    self.uid(),
+                    self._status,
+                ))
+                yield self._txn._queryCacher.delete(self._txn._queryCacher.keyForHomeWithID(
+                    self._homeType,
+                    self.id(),
+                    self._status,
+                ))
             self._status = newStatus
-            yield self._cacher.delete(self._ownerUID)
 
 
+    @inlineCallbacks
+    def remove(self):
+
+        # Removing the home table entry does NOT remove the child class entry - it does remove
+        # the associated bind entry. So manually remove each child.
+        yield self.removeAllChildren()
+
+        r = self._childClass._revisionsSchema
+        yield Delete(
+            From=r,
+            Where=r.HOME_RESOURCE_ID == self._resourceID,
+        ).on(self._txn)
+
+        h = self._homeSchema
+        yield Delete(
+            From=h,
+            Where=h.RESOURCE_ID == self._resourceID,
+        ).on(self._txn)
+
+        yield self.properties()._removeResource()
+
+        if self._txn._queryCacher:
+            yield self._txn._queryCacher.delete(self._txn._queryCacher.keyForHomeWithUID(
+                self._homeType,
+                self.uid(),
+                self._status,
+            ))
+            yield self._txn._queryCacher.delete(self._txn._queryCacher.keyForHomeWithID(
+                self._homeType,
+                self.id(),
+                self._status,
+            ))
+
+
+    @inlineCallbacks
+    def removeAllChildren(self):
+        """
+        Remove each child.
+        """
+
+        children = yield self.loadChildren()
+        for child in children:
+            yield child.remove()
+            self._children.pop(child.name(), None)
+            self._children.pop(child.id(), None)
+
+
     def transaction(self):
         return self._txn
 
@@ -3483,7 +2238,7 @@
             child = yield self._childClass.objectWithName(self, name, onlyInTrash=onlyInTrash)
             if child is not None:
                 self._children[childrenKey][name] = child
-                self._children[childrenKey][child._resourceID] = child
+                self._children[childrenKey][child.id()] = child
         returnValue(self._children[childrenKey].get(name, None))
 
 
@@ -3512,19 +2267,19 @@
             child = yield self._childClass.objectWithID(self, resourceID, onlyInTrash=onlyInTrash)
             if child is not None:
                 self._children[childrenKey][resourceID] = child
-                self._children[childrenKey][child._name] = child
+                self._children[childrenKey][child.name()] = child
         returnValue(self._children[childrenKey].get(resourceID, None))
 
 
-    def childWithExternalID(self, externalID, onlyInTrash=False):
+    def childWithBindUID(self, bindUID, onlyInTrash=False):
         """
-        Retrieve the child with the given C{externalID} contained in this
+        Retrieve the child with the given C{bindUID} contained in this
         home.
 
         @param name: a string.
         @return: an L{ICalendar} or C{None} if no such child exists.
         """
-        return self._childClass.objectWithExternalID(self, externalID, onlyInTrash=onlyInTrash)
+        return self._childClass.objectWithBindUID(self, bindUID, onlyInTrash=onlyInTrash)
 
 
     def allChildWithID(self, resourceID, onlyInTrash=False):
@@ -3539,15 +2294,15 @@
 
 
     @inlineCallbacks
-    def createChildWithName(self, name, externalID=None):
+    def createChildWithName(self, name, bindUID=None):
         if name.startswith("."):
             raise HomeChildNameNotAllowedError(name)
 
-        child = yield self._childClass.create(self, name, externalID=externalID)
+        child = yield self._childClass.create(self, name, bindUID=bindUID)
         if child is not None:
             key = self._childrenKey(False)
             self._children[key][name] = child
-            self._children[key][child._resourceID] = child
+            self._children[key][child.id()] = child
         returnValue(child)
 
 
@@ -3629,13 +2384,18 @@
         taken to invalidate the cached value properly.
         """
         if self._syncTokenRevision is None:
-            self._syncTokenRevision = (yield self._syncTokenQuery.on(
-                self._txn, resourceID=self._resourceID))[0][0]
-            if self._syncTokenRevision is None:
-                self._syncTokenRevision = int((yield self._txn.calendarserverValue("MIN-VALID-REVISION")))
+            self._syncTokenRevision = yield self.syncTokenRevision()
         returnValue("%s_%s" % (self._resourceID, self._syncTokenRevision))
 
 
+    @inlineCallbacks
+    def syncTokenRevision(self):
+        revision = (yield self._syncTokenQuery.on(self._txn, resourceID=self._resourceID))[0][0]
+        if revision is None:
+            revision = int((yield self._txn.calendarserverValue("MIN-VALID-REVISION")))
+        returnValue(revision)
+
+
     @classproperty
     def _changesQuery(cls):
         bind = cls._bindSchema
@@ -3851,11 +2611,11 @@
 
 
     def created(self):
-        return datetimeMktime(parseSQLTimestamp(self._created)) if self._created else None
+        return datetimeMktime(self._created) if self._created else None
 
 
     def modified(self):
-        return datetimeMktime(parseSQLTimestamp(self._modified)) if self._modified else None
+        return datetimeMktime(self._modified) if self._modified else None
 
 
     @classmethod
@@ -4093,9 +2853,9 @@
             returnValue(result)
 
         try:
-            self._modified = (
+            self._modified = parseSQLTimestamp((
                 yield self._txn.subtransaction(_bumpModified, retries=0, failureOK=True)
-            )[0][0]
+            )[0][0])
             yield self.invalidateQueryCache()
 
         except AllRetriesFailed:
@@ -4122,1464 +2882,16 @@
         Get the owner home for a shared child ID and the owner's name for that bound child.
         Subclasses may override.
         """
-        ownerHomeID, ownerName = (yield self._childClass._ownerHomeWithResourceID.on(self._txn, resourceID=resourceID))[0]
-        ownerHome = yield self._txn.homeWithResourceID(self._homeType, ownerHomeID)
-        returnValue((ownerHome, ownerName))
-
-
-
-class _SharedSyncLogic(object):
-    """
-    Logic for maintaining sync-token shared between notification collections and
-    shared collections.
-    """
-
-    @classproperty
-    def _childSyncTokenQuery(cls):
-        """
-        DAL query for retrieving the sync token of a L{CommonHomeChild} based on
-        its resource ID.
-        """
-        rev = cls._revisionsSchema
-        return Select([Max(rev.REVISION)], From=rev,
-                      Where=rev.RESOURCE_ID == Parameter("resourceID"))
-
-
-    def revisionFromToken(self, token):
-        if token is None:
-            return 0
-        elif isinstance(token, str) or isinstance(token, unicode):
-            _ignore_uuid, revision = token.split("_", 1)
-            return int(revision)
+        rows = yield self._childClass._ownerHomeWithResourceID.on(self._txn, resourceID=resourceID)
+        if rows:
+            ownerHomeID, ownerName = rows[0]
+            ownerHome = yield self._txn.homeWithResourceID(self._homeType, ownerHomeID)
+            returnValue((ownerHome, ownerName))
         else:
-            return token
+            returnValue((None, None))
 
 
-    @inlineCallbacks
-    def syncToken(self):
-        if self._syncTokenRevision is None:
-            self._syncTokenRevision = (yield self._childSyncTokenQuery.on(
-                self._txn, resourceID=self._resourceID))[0][0]
-            if self._syncTokenRevision is None:
-                self._syncTokenRevision = int((yield self._txn.calendarserverValue("MIN-VALID-REVISION")))
-        returnValue(("%s_%s" % (self._resourceID, self._syncTokenRevision,)))
 
-
-    def objectResourcesSinceToken(self, token):
-        raise NotImplementedError()
-
-
-    @classmethod
-    def _objectNamesSinceRevisionQuery(cls, deleted=True):
-        """
-        DAL query for (resource, deleted-flag)
-        """
-        rev = cls._revisionsSchema
-        where = (rev.REVISION > Parameter("revision")).And(rev.RESOURCE_ID == Parameter("resourceID"))
-        if not deleted:
-            where = where.And(rev.DELETED == False)
-        return Select(
-            [rev.RESOURCE_NAME, rev.DELETED],
-            From=rev,
-            Where=where,
-        )
-
-
-    def resourceNamesSinceToken(self, token):
-        """
-        Return the changed and deleted resources since a particular sync-token. This simply extracts
-        the revision from the token and then calls L{resourceNamesSinceRevision}.
-
-        @param token: the sync token to determine changes since
-        @type token: C{str}
-        """
-
-        return self.resourceNamesSinceRevision(self.revisionFromToken(token))
-
-
-    @inlineCallbacks
-    def resourceNamesSinceRevision(self, revision):
-        """
-        Return the changed and deleted resources since a particular revision.
-
-        @param revision: the revision to determine changes since
-        @type revision: C{int}
-        """
-        changed = []
-        deleted = []
-        invalid = []
-        if revision:
-            minValidRevision = yield self._txn.calendarserverValue("MIN-VALID-REVISION")
-            if revision < int(minValidRevision):
-                raise SyncTokenValidException
-
-            results = [
-                (name if name else "", removed) for name, removed in (
-                    yield self._objectNamesSinceRevisionQuery().on(
-                        self._txn, revision=revision, resourceID=self._resourceID)
-                )
-            ]
-            results.sort(key=lambda x: x[1])
-
-            for name, wasdeleted in results:
-                if name:
-                    if wasdeleted:
-                        deleted.append(name)
-                    else:
-                        changed.append(name)
-        else:
-            changed = yield self.listObjectResources()
-
-        returnValue((changed, deleted, invalid))
-
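A hedged caller-side sketch of consuming the (changed, deleted, invalid) triple returned above; "collection" is assumed to be an already-loaded collection and "token" a sync token previously handed to a client.

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def reportChanges(collection, token):
        changed, deleted, invalid = yield collection.resourceNamesSinceToken(token)
        # "invalid" names could not be reported against this token; callers typically
        # surface those separately (for example with an error status in a sync report).
        returnValue({"changed": changed, "deleted": deleted, "invalid": invalid})
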
-
-    @classproperty
-    def _removeDeletedRevision(cls):
-        rev = cls._revisionsSchema
-        return Delete(From=rev,
-                      Where=(rev.HOME_RESOURCE_ID == Parameter("homeID")).And(
-                          rev.COLLECTION_NAME == Parameter("collectionName")))
-
-
-    @classproperty
-    def _addNewRevision(cls):
-        rev = cls._revisionsSchema
-        return Insert(
-            {
-                rev.HOME_RESOURCE_ID: Parameter("homeID"),
-                rev.RESOURCE_ID: Parameter("resourceID"),
-                rev.COLLECTION_NAME: Parameter("collectionName"),
-                rev.RESOURCE_NAME: None,
-                # Always starts false; may be updated to be a tombstone
-                # later.
-                rev.DELETED: False
-            },
-            Return=[rev.REVISION]
-        )
-
-
-    @inlineCallbacks
-    def _initSyncToken(self):
-        yield self._removeDeletedRevision.on(
-            self._txn, homeID=self._home._resourceID, collectionName=self._name
-        )
-        self._syncTokenRevision = (yield (
-            self._addNewRevision.on(self._txn, homeID=self._home._resourceID,
-                                    resourceID=self._resourceID,
-                                    collectionName=self._name)))[0][0]
-        self._txn.bumpRevisionForObject(self)
-
-
-    @classproperty
-    def _renameSyncTokenQuery(cls):
-        """
-        DAL query to change sync token for a rename (increment and adjust
-        resource name).
-        """
-        rev = cls._revisionsSchema
-        return Update(
-            {
-                rev.REVISION: schema.REVISION_SEQ,
-                rev.COLLECTION_NAME: Parameter("name")
-            },
-            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And
-                  (rev.RESOURCE_NAME == None),
-            Return=rev.REVISION
-        )
-
-
-    @inlineCallbacks
-    def _renameSyncToken(self):
-        self._syncTokenRevision = (yield self._renameSyncTokenQuery.on(
-            self._txn, name=self._name, resourceID=self._resourceID))[0][0]
-        self._txn.bumpRevisionForObject(self)
-
-
-    @classproperty
-    def _bumpSyncTokenQuery(cls):
-        """
-        DAL query to change collection sync token. Note this can impact multiple rows if the
-        collection is shared.
-        """
-        rev = cls._revisionsSchema
-        return Update(
-            {rev.REVISION: schema.REVISION_SEQ, },
-            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And
-                  (rev.RESOURCE_NAME == None)
-        )
-
-
-    @inlineCallbacks
-    def _bumpSyncToken(self):
-
-        if not self._txn.isRevisionBumpedAlready(self):
-            self._txn.bumpRevisionForObject(self)
-            yield self._bumpSyncTokenQuery.on(
-                self._txn,
-                resourceID=self._resourceID,
-            )
-            self._syncTokenRevision = None
-
-
-    @classproperty
-    def _deleteSyncTokenQuery(cls):
-        """
-        DAL query to remove all child revision information. The revision for the collection
-        itself is not touched.
-        """
-        rev = cls._revisionsSchema
-        return Delete(
-            From=rev,
-            Where=(rev.HOME_RESOURCE_ID == Parameter("homeID")).And
-                  (rev.RESOURCE_ID == Parameter("resourceID")).And
-                  (rev.COLLECTION_NAME == None)
-        )
-
-
-    @classproperty
-    def _sharedRemovalQuery(cls):
-        """
-        DAL query to indicate a shared collection has been deleted.
-        """
-        rev = cls._revisionsSchema
-        return Update(
-            {
-                rev.RESOURCE_ID: None,
-                rev.REVISION: schema.REVISION_SEQ,
-                rev.DELETED: True
-            },
-            Where=(rev.HOME_RESOURCE_ID == Parameter("homeID")).And(
-                rev.RESOURCE_ID == Parameter("resourceID")).And(
-                rev.RESOURCE_NAME == None)
-        )
-
-
-    @classproperty
-    def _unsharedRemovalQuery(cls):
-        """
-        DAL query to indicate an owned collection has been deleted.
-        """
-        rev = cls._revisionsSchema
-        return Update(
-            {
-                rev.RESOURCE_ID: None,
-                rev.REVISION: schema.REVISION_SEQ,
-                rev.DELETED: True
-            },
-            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And(
-                rev.RESOURCE_NAME == None),
-        )
-
-
-    @inlineCallbacks
-    def _deletedSyncToken(self, sharedRemoval=False):
-        """
-        When a collection is deleted we remove all the revision information for its child resources.
-        We update the collection's sync token to indicate it has been deleted - that way a sync on
-        the home collection can report the deletion of the collection.
-
-        @param sharedRemoval: indicates whether the collection being removed is shared
-        @type sharedRemoval: L{bool}
-        """
-        # Remove all child entries
-        yield self._deleteSyncTokenQuery.on(self._txn,
-                                            homeID=self._home._resourceID,
-                                            resourceID=self._resourceID)
-
-        # If this is a share being removed then we only mark this one specific
-        # home/resource-id as being deleted.  On the other hand, if it is a
-        # non-shared collection, then we need to mark all collections
-        # with the resource-id as being deleted to account for direct shares.
-        if sharedRemoval:
-            yield self._sharedRemovalQuery.on(self._txn,
-                                              homeID=self._home._resourceID,
-                                              resourceID=self._resourceID)
-        else:
-            yield self._unsharedRemovalQuery.on(self._txn,
-                                                resourceID=self._resourceID)
-        self._syncTokenRevision = None
-
-
-    def _insertRevision(self, name):
-        return self._changeRevision("insert", name)
-
-
-    def _updateRevision(self, name):
-        return self._changeRevision("update", name)
-
-
-    def _deleteRevision(self, name):
-        return self._changeRevision("delete", name)
-
-
-    @classproperty
-    def _deleteBumpTokenQuery(cls):
-        rev = cls._revisionsSchema
-        return Update(
-            {rev.REVISION: schema.REVISION_SEQ, rev.DELETED: True},
-            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And(
-                rev.RESOURCE_NAME == Parameter("name")),
-            Return=rev.REVISION
-        )
-
-
-    @classproperty
-    def _updateBumpTokenQuery(cls):
-        rev = cls._revisionsSchema
-        return Update(
-            {rev.REVISION: schema.REVISION_SEQ},
-            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And(
-                rev.RESOURCE_NAME == Parameter("name")),
-            Return=rev.REVISION
-        )
-
-
-    @classproperty
-    def _insertFindPreviouslyNamedQuery(cls):
-        rev = cls._revisionsSchema
-        return Select(
-            [rev.RESOURCE_ID],
-            From=rev,
-            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And(
-                rev.RESOURCE_NAME == Parameter("name"))
-        )
-
-
-    @classproperty
-    def _updatePreviouslyNamedQuery(cls):
-        rev = cls._revisionsSchema
-        return Update(
-            {rev.REVISION: schema.REVISION_SEQ, rev.DELETED: False},
-            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And(
-                rev.RESOURCE_NAME == Parameter("name")),
-            Return=rev.REVISION
-        )
-
-
-    @classproperty
-    def _completelyNewRevisionQuery(cls):
-        rev = cls._revisionsSchema
-        return Insert(
-            {
-                rev.HOME_RESOURCE_ID: Parameter("homeID"),
-                rev.RESOURCE_ID: Parameter("resourceID"),
-                rev.RESOURCE_NAME: Parameter("name"),
-                rev.REVISION: schema.REVISION_SEQ,
-                rev.DELETED: False
-            },
-            Return=rev.REVISION
-        )
-
-
-    @inlineCallbacks
-    def _changeRevision(self, action, name):
-
-        # Need to handle the case where for some reason the revision entry is
-        # actually missing. For a "delete" we don't care, for an "update" we
-        # will turn it into an "insert".
-        if action == "delete":
-            rows = (
-                yield self._deleteBumpTokenQuery.on(
-                    self._txn, resourceID=self._resourceID, name=name))
-            if rows:
-                self._syncTokenRevision = rows[0][0]
-        elif action == "update":
-            rows = (
-                yield self._updateBumpTokenQuery.on(
-                    self._txn, resourceID=self._resourceID, name=name))
-            if rows:
-                self._syncTokenRevision = rows[0][0]
-            else:
-                action = "insert"
-
-        if action == "insert":
-            # Note that an "insert" may happen for a resource that previously
-            # existed and then was deleted. In that case an entry in the
-            # REVISIONS table still exists so we have to detect that and do db
-            # INSERT or UPDATE as appropriate
-
-            found = bool((
-                yield self._insertFindPreviouslyNamedQuery.on(
-                    self._txn, resourceID=self._resourceID, name=name)))
-            if found:
-                self._syncTokenRevision = (
-                    yield self._updatePreviouslyNamedQuery.on(
-                        self._txn, resourceID=self._resourceID, name=name)
-                )[0][0]
-            else:
-                self._syncTokenRevision = (
-                    yield self._completelyNewRevisionQuery.on(
-                        self._txn, homeID=self.ownerHome()._resourceID,
-                        resourceID=self._resourceID, name=name)
-                )[0][0]
-        yield self._maybeNotify()
-        returnValue(self._syncTokenRevision)
-
-
-    def _maybeNotify(self):
-        """
-        Maybe notify changed.  (Overridden in NotificationCollection.)
-        """
-        return succeed(None)
-
-
-
-SharingInvitation = namedtuple(
-    "SharingInvitation",
-    ["uid", "ownerUID", "ownerHomeID", "shareeUID", "shareeHomeID", "mode", "status", "summary"]
-)
-
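Since SharingInvitation is a plain namedtuple, rows from the invite query can be repackaged and then read by attribute name. A small sketch with made-up values; the _BIND_* constants are the ones already used throughout this module.

    invite = SharingInvitation(
        uid="generated-share-uid",
        ownerUID="user01",
        ownerHomeID=101,
        shareeUID="user02",
        shareeHomeID=202,
        mode=_BIND_MODE_READ,
        status=_BIND_STATUS_INVITED,
        summary="Team calendar",
    )
    if invite.mode != _BIND_MODE_DIRECT:
        description = "%s invited %s (status=%s)" % (invite.ownerUID, invite.shareeUID, invite.status)
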
-
-
-class SharingMixIn(object):
-    """
-    Common class for CommonHomeChild and AddressBookObject
-    """
-
-    @classproperty
-    def _bindInsertQuery(cls, **kw):
-        """
-        DAL statement to create a bind entry that connects a collection to its
-        home.
-        """
-        bind = cls._bindSchema
-        return Insert({
-            bind.HOME_RESOURCE_ID: Parameter("homeID"),
-            bind.RESOURCE_ID: Parameter("resourceID"),
-            bind.EXTERNAL_ID: Parameter("externalID"),
-            bind.RESOURCE_NAME: Parameter("name"),
-            bind.BIND_MODE: Parameter("mode"),
-            bind.BIND_STATUS: Parameter("bindStatus"),
-            bind.MESSAGE: Parameter("message"),
-        })
-
-
-    @classmethod
-    def _updateBindColumnsQuery(cls, columnMap):
-        bind = cls._bindSchema
-        return Update(
-            columnMap,
-            Where=(bind.RESOURCE_ID == Parameter("resourceID")).And(
-                bind.HOME_RESOURCE_ID == Parameter("homeID")),
-        )
-
-
-    @classproperty
-    def _deleteBindForResourceIDAndHomeID(cls):
-        bind = cls._bindSchema
-        return Delete(
-            From=bind,
-            Where=(bind.RESOURCE_ID == Parameter("resourceID")).And(
-                bind.HOME_RESOURCE_ID == Parameter("homeID")),
-        )
-
-
-    @classmethod
-    def _bindFor(cls, condition):
-        bind = cls._bindSchema
-        columns = cls.bindColumns() + cls.additionalBindColumns()
-        return Select(
-            columns,
-            From=bind,
-            Where=condition
-        )
-
-
-    @classmethod
-    def _bindInviteFor(cls, condition):
-        home = cls._homeSchema
-        bind = cls._bindSchema
-        return Select(
-            [
-                home.OWNER_UID,
-                bind.HOME_RESOURCE_ID,
-                bind.RESOURCE_ID,
-                bind.RESOURCE_NAME,
-                bind.BIND_MODE,
-                bind.BIND_STATUS,
-                bind.MESSAGE,
-            ],
-            From=bind.join(home, on=(bind.HOME_RESOURCE_ID == home.RESOURCE_ID)),
-            Where=condition
-        )
-
-
-    @classproperty
-    def _sharedInvitationBindForResourceID(cls):
-        bind = cls._bindSchema
-        return cls._bindInviteFor(
-            (bind.RESOURCE_ID == Parameter("resourceID")).And
-            (bind.BIND_MODE != _BIND_MODE_OWN)
-        )
-
-
-    @classproperty
-    def _acceptedBindForHomeID(cls):
-        bind = cls._bindSchema
-        return cls._bindFor((bind.HOME_RESOURCE_ID == Parameter("homeID"))
-                            .And(bind.BIND_STATUS == _BIND_STATUS_ACCEPTED))
-
-
-    @classproperty
-    def _bindForResourceIDAndHomeID(cls):
-        """
-        DAL query that looks up home bind rows by home child
-        resource ID and home resource ID.
-        """
-        bind = cls._bindSchema
-        return cls._bindFor((bind.RESOURCE_ID == Parameter("resourceID"))
-                            .And(bind.HOME_RESOURCE_ID == Parameter("homeID")))
-
-
-    @classproperty
-    def _bindForExternalIDAndHomeID(cls):
-        """
-        DAL query that looks up home bind rows by external ID
-        and home resource ID.
-        """
-        bind = cls._bindSchema
-        return cls._bindFor((bind.EXTERNAL_ID == Parameter("externalID"))
-                            .And(bind.HOME_RESOURCE_ID == Parameter("homeID")))
-
-
-    @classproperty
-    def _bindForNameAndHomeID(cls):
-        """
-        DAL query that looks up any bind rows by resource name
-        and home resource ID.
-        """
-        bind = cls._bindSchema
-        return cls._bindFor((bind.RESOURCE_NAME == Parameter("name"))
-                            .And(bind.HOME_RESOURCE_ID == Parameter("homeID")))
-
-
-    #
-    # Higher level API
-    #
-    @inlineCallbacks
-    def inviteUIDToShare(self, shareeUID, mode, summary=None, shareName=None):
-        """
-        Invite a user to share this collection - either create the share if it does not exist, or
-        update the existing share with new values. Make sure a notification is sent as well.
-
-        @param shareeUID: UID of the sharee
-        @type shareeUID: C{str}
-        @param mode: access mode
-        @type mode: C{int}
-        @param summary: share message
-        @type summary: C{str}
-        """
-
-        # Look for existing invite and update its fields or create new one
-        shareeView = yield self.shareeView(shareeUID)
-        if shareeView is not None:
-            status = _BIND_STATUS_INVITED if shareeView.shareStatus() in (_BIND_STATUS_DECLINED, _BIND_STATUS_INVALID) else None
-            yield self.updateShare(shareeView, mode=mode, status=status, summary=summary)
-        else:
-            shareeView = yield self.createShare(shareeUID=shareeUID, mode=mode, summary=summary, shareName=shareName)
-
-        # Check for external
-        if shareeView.viewerHome().external():
-            yield self._sendExternalInvite(shareeView)
-        else:
-            # Send invite notification
-            yield self._sendInviteNotification(shareeView)
-        returnValue(shareeView)
-
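A hypothetical end-to-end use of the invitation API above; "calendar" is assumed to be an owned collection, "user02" an existing principal UID, and _BIND_MODE_READ the module constant already in scope.

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def shareAndWithdraw(calendar):
        # Owner invites a sharee read-only; the returned view lives in the sharee's home.
        shareeView = yield calendar.inviteUIDToShare("user02", _BIND_MODE_READ, summary="team calendar")
        # The sharee accepts (or declines) via that view...
        yield shareeView.acceptShare(summary="team calendar")
        # ...and the owner can later withdraw the invitation entirely.
        yield calendar.uninviteUIDFromShare("user02")
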
-
-    @inlineCallbacks
-    def directShareWithUser(self, shareeUID, shareName=None):
-        """
-        Create a direct share with the specified user. Note that it is currently up to the app layer
-        to enforce access control - this is not ideal, as we really should have control of that in
-        the store. Once we do, this API will need to verify that access is allowed for a direct share.
-
-        NB no invitations are used with direct sharing.
-
-        @param shareeUID: UID of the sharee
-        @type shareeUID: C{str}
-        """
-
-        # Ignore if it already exists
-        shareeView = yield self.shareeView(shareeUID)
-        if shareeView is None:
-            shareeView = yield self.createShare(shareeUID=shareeUID, mode=_BIND_MODE_DIRECT, shareName=shareName)
-            yield shareeView.newShare()
-
-            # Check for external
-            if shareeView.viewerHome().external():
-                yield self._sendExternalInvite(shareeView)
-
-        returnValue(shareeView)
-
-
-    @inlineCallbacks
-    def uninviteUIDFromShare(self, shareeUID):
-        """
-        Remove a user from a share. Make sure a notification is sent as well.
-
-        @param shareeUID: UID of the sharee
-        @type shareeUID: C{str}
-        """
-        # Cancel invites - we'll just use whatever userid we are given
-
-        shareeView = yield self.shareeView(shareeUID)
-        if shareeView is not None:
-            if shareeView.viewerHome().external():
-                yield self._sendExternalUninvite(shareeView)
-            else:
-                # If current user state is accepted then we send an invite with the new state, otherwise
-                # we cancel any existing invites for the user
-                if not shareeView.direct():
-                    if shareeView.shareStatus() != _BIND_STATUS_ACCEPTED:
-                        yield self._removeInviteNotification(shareeView)
-                    else:
-                        yield self._sendInviteNotification(shareeView, notificationState=_BIND_STATUS_DELETED)
-
-            # Remove the bind
-            yield self.removeShare(shareeView)
-
-
-    @inlineCallbacks
-    def acceptShare(self, summary=None):
-        """
-        This share is being accepted.
-        """
-
-        if not self.direct() and self.shareStatus() != _BIND_STATUS_ACCEPTED:
-            if self.external():
-                yield self._replyExternalInvite(_BIND_STATUS_ACCEPTED, summary)
-            ownerView = yield self.ownerView()
-            yield ownerView.updateShare(self, status=_BIND_STATUS_ACCEPTED)
-            yield self.newShare(displayname=summary)
-            if not ownerView.external():
-                yield self._sendReplyNotification(ownerView, summary)
-
-
-    @inlineCallbacks
-    def declineShare(self):
-        """
-        This share is being declined.
-        """
-
-        if not self.direct() and self.shareStatus() != _BIND_STATUS_DECLINED:
-            if self.external():
-                yield self._replyExternalInvite(_BIND_STATUS_DECLINED)
-            ownerView = yield self.ownerView()
-            yield ownerView.updateShare(self, status=_BIND_STATUS_DECLINED)
-            if not ownerView.external():
-                yield self._sendReplyNotification(ownerView)
-
-
-    @inlineCallbacks
-    def deleteShare(self):
-        """
-        This share is being deleted (by the sharee) - either decline or remove (for direct shares).
-        """
-
-        ownerView = yield self.ownerView()
-        if self.direct():
-            yield ownerView.removeShare(self)
-            if ownerView.external():
-                yield self._replyExternalInvite(_BIND_STATUS_DECLINED)
-        else:
-            yield self.declineShare()
-
-
-    @inlineCallbacks
-    def ownerDeleteShare(self):
-        """
-        This share is being deleted (by the owner) - either decline or remove (for direct shares).
-        """
-
-        # Change status on store object
-        yield self.setShared(False)
-
-        # Remove all sharees (direct and invited)
-        for invitation in (yield self.sharingInvites()):
-            yield self.uninviteUIDFromShare(invitation.shareeUID)
-
-
-    def newShare(self, displayname=None):
-        """
-        Override in derived classes to do any specific operations needed when a share
-        is first accepted.
-        """
-        return succeed(None)
-
-
-    @inlineCallbacks
-    def allInvitations(self):
-        """
-        Get list of all invitations (non-direct) to this object.
-        """
-        invitations = yield self.sharingInvites()
-
-        # remove direct shares as those are not "real" invitations
-        invitations = filter(lambda x: x.mode != _BIND_MODE_DIRECT, invitations)
-        invitations.sort(key=lambda invitation: invitation.shareeUID)
-        returnValue(invitations)
-
-
-    @inlineCallbacks
-    def _sendInviteNotification(self, shareeView, notificationState=None):
-        """
-        Called on the owner's resource.
-        """
-        # When deleting, the message is the sharee's display name
-        displayname = shareeView.shareMessage()
-        if notificationState == _BIND_STATUS_DELETED:
-            displayname = str(shareeView.properties().get(PropertyName.fromElement(element.DisplayName), displayname))
-
-        notificationtype = {
-            "notification-type": "invite-notification",
-            "shared-type": shareeView.sharedResourceType(),
-        }
-        notificationdata = {
-            "notification-type": "invite-notification",
-            "shared-type": shareeView.sharedResourceType(),
-            "dtstamp": DateTime.getNowUTC().getText(),
-            "owner": shareeView.ownerHome().uid(),
-            "sharee": shareeView.viewerHome().uid(),
-            "uid": shareeView.shareUID(),
-            "status": shareeView.shareStatus() if notificationState is None else notificationState,
-            "access": (yield shareeView.effectiveShareMode()),
-            "ownerName": self.shareName(),
-            "summary": displayname,
-        }
-        if hasattr(self, "getSupportedComponents"):
-            notificationdata["supported-components"] = self.getSupportedComponents()
-
-        # Add to sharee's collection
-        notifications = yield self._txn.notificationsWithUID(shareeView.viewerHome().uid())
-        yield notifications.writeNotificationObject(shareeView.shareUID(), notificationtype, notificationdata)
-
-
-    @inlineCallbacks
-    def _sendReplyNotification(self, ownerView, summary=None):
-        """
-        Create a reply notification based on the current state of this shared resource.
-        """
-
-        # Generate invite XML
-        notificationUID = "%s-reply" % (self.shareUID(),)
-
-        notificationtype = {
-            "notification-type": "invite-reply",
-            "shared-type": self.sharedResourceType(),
-        }
-
-        notificationdata = {
-            "notification-type": "invite-reply",
-            "shared-type": self.sharedResourceType(),
-            "dtstamp": DateTime.getNowUTC().getText(),
-            "owner": self.ownerHome().uid(),
-            "sharee": self.viewerHome().uid(),
-            "status": self.shareStatus(),
-            "ownerName": ownerView.shareName(),
-            "in-reply-to": self.shareUID(),
-            "summary": summary,
-        }
-
-        # Add to owner notification collection
-        notifications = yield self._txn.notificationsWithUID(self.ownerHome().uid())
-        yield notifications.writeNotificationObject(notificationUID, notificationtype, notificationdata)
-
-
-    @inlineCallbacks
-    def _removeInviteNotification(self, shareeView):
-        """
-        Called on the owner's resource.
-        """
-
-        # Remove from sharee's collection
-        notifications = yield self._txn.notificationsWithUID(shareeView.viewerHome().uid())
-        yield notifications.removeNotificationObjectWithUID(shareeView.shareUID())
-
-
-    #
-    # External/cross-pod API
-    #
-    @inlineCallbacks
-    def _sendExternalInvite(self, shareeView):
-
-        yield self._txn.store().conduit.send_shareinvite(
-            self._txn,
-            shareeView.ownerHome()._homeType,
-            shareeView.ownerHome().uid(),
-            self.id(),
-            self.shareName(),
-            shareeView.viewerHome().uid(),
-            shareeView.shareUID(),
-            shareeView.shareMode(),
-            shareeView.shareMessage(),
-            self.getInviteCopyProperties(),
-            supported_components=self.getSupportedComponents() if hasattr(self, "getSupportedComponents") else None,
-        )
-
-
-    @inlineCallbacks
-    def _sendExternalUninvite(self, shareeView):
-
-        yield self._txn.store().conduit.send_shareuninvite(
-            self._txn,
-            shareeView.ownerHome()._homeType,
-            shareeView.ownerHome().uid(),
-            self.id(),
-            shareeView.viewerHome().uid(),
-            shareeView.shareUID(),
-        )
-
-
-    @inlineCallbacks
-    def _replyExternalInvite(self, status, summary=None):
-
-        yield self._txn.store().conduit.send_sharereply(
-            self._txn,
-            self.viewerHome()._homeType,
-            self.ownerHome().uid(),
-            self.viewerHome().uid(),
-            self.shareUID(),
-            status,
-            summary,
-        )
-
-
-    #
-    # Lower level API
-    #
-    @inlineCallbacks
-    def ownerView(self):
-        """
-        Return the owner resource counterpart of this shared resource.
-
-        Note we have to play a trick with the property store to coerce it to match
-        the per-user properties for the owner.
-        """
-        # Get the child of the owner home that has the same resource id as the owned one
-        ownerView = yield self.ownerHome().childWithID(self.id())
-        returnValue(ownerView)
-
-
-    @inlineCallbacks
-    def shareeView(self, shareeUID):
-        """
-        Return the shared resource counterpart of this owned resource for the specified sharee.
-
-        Note we have to play a trick with the property store to coerce it to match
-        the per-user properties for the sharee.
-        """
-
-        # Never return the owner's own resource
-        if self._home.uid() == shareeUID:
-            returnValue(None)
-
-        # Get the child of the sharee home that has the same resource id as the owned one
-        shareeHome = yield self._txn.homeWithUID(self._home._homeType, shareeUID, authzUID=shareeUID)
-        shareeView = (yield shareeHome.allChildWithID(self.id())) if shareeHome is not None else None
-        returnValue(shareeView)
-
-
-    @inlineCallbacks
-    def shareWithUID(self, shareeUID, mode, status=None, summary=None, shareName=None):
-        """
-        Share this (owned) L{CommonHomeChild} with another principal.
-
-        @param shareeUID: The UID of the sharee.
-        @type: L{str}
-
-        @param mode: The sharing mode; L{_BIND_MODE_READ} or
-            L{_BIND_MODE_WRITE} or L{_BIND_MODE_DIRECT}
-        @type mode: L{str}
-
-        @param status: The sharing status; L{_BIND_STATUS_INVITED} or
-            L{_BIND_STATUS_ACCEPTED}
-        @type: L{str}
-
-        @param summary: The proposed message to go along with the share, which
-            will be used as the default display name.
-        @type: L{str}
-
-        @return: the name of the shared calendar in the new calendar home.
-        @rtype: L{str}
-        """
-        shareeHome = yield self._txn.calendarHomeWithUID(shareeUID, create=True)
-        returnValue(
-            (yield self.shareWith(shareeHome, mode, status, summary, shareName))
-        )
-
-
-    @inlineCallbacks
-    def shareWith(self, shareeHome, mode, status=None, summary=None, shareName=None):
-        """
-        Share this (owned) L{CommonHomeChild} with another home.
-
-        @param shareeHome: The home of the sharee.
-        @type: L{CommonHome}
-
-        @param mode: The sharing mode; L{_BIND_MODE_READ} or
-            L{_BIND_MODE_WRITE} or L{_BIND_MODE_DIRECT}
-        @type: L{str}
-
-        @param status: The sharing status; L{_BIND_STATUS_INVITED} or
-            L{_BIND_STATUS_ACCEPTED}
-        @type: L{str}
-
-        @param summary: The proposed message to go along with the share, which
-            will be used as the default display name.
-        @type: L{str}
-
-        @param shareName: The proposed name of the new share.
-        @type: L{str}
-
-        @return: the name of the shared calendar in the new calendar home.
-        @rtype: L{str}
-        """
-
-        if status is None:
-            status = _BIND_STATUS_ACCEPTED
-
-        @inlineCallbacks
-        def doInsert(subt):
-            newName = shareName if shareName is not None else self.newShareName()
-            yield self._bindInsertQuery.on(
-                subt,
-                homeID=shareeHome._resourceID,
-                resourceID=self._resourceID,
-                externalID=self._externalID,
-                name=newName,
-                mode=mode,
-                bindStatus=status,
-                message=summary
-            )
-            returnValue(newName)
-        try:
-            bindName = yield self._txn.subtransaction(doInsert)
-        except AllRetriesFailed:
-            # FIXME: catch more specific exception
-            child = yield shareeHome.allChildWithID(self._resourceID)
-            yield self.updateShare(
-                child, mode=mode, status=status,
-                summary=summary
-            )
-            bindName = child._name
-        else:
-            if status == _BIND_STATUS_ACCEPTED:
-                shareeView = yield shareeHome.anyObjectWithShareUID(bindName)
-                yield shareeView._initSyncToken()
-                yield shareeView._initBindRevision()
-
-        # Mark this as shared
-        yield self.setShared(True)
-
-        # Must send notification to ensure cache invalidation occurs
-        yield self.notifyPropertyChanged()
-        yield shareeHome.notifyChanged()
-
-        returnValue(bindName)
-
-
-    @inlineCallbacks
-    def createShare(self, shareeUID, mode, summary=None, shareName=None):
-        """
-        Create a new shared resource. If the mode is direct, the share is created in accepted state,
-        otherwise the share is created in invited state.
-        """
-        shareeHome = yield self._txn.homeWithUID(self.ownerHome()._homeType, shareeUID, create=True)
-
-        yield self.shareWith(
-            shareeHome,
-            mode=mode,
-            status=_BIND_STATUS_INVITED if mode != _BIND_MODE_DIRECT else _BIND_STATUS_ACCEPTED,
-            summary=summary,
-            shareName=shareName,
-        )
-        shareeView = yield self.shareeView(shareeUID)
-        returnValue(shareeView)
-
-
-    @inlineCallbacks
-    def updateShare(self, shareeView, mode=None, status=None, summary=None):
-        """
-        Update share mode, status, and message for a home child shared with
-        this (owned) L{CommonHomeChild}.
-
-        @param shareeView: The sharee home child that shares this.
-        @type shareeView: L{CommonHomeChild}
-
-        @param mode: The sharing mode; L{_BIND_MODE_READ} or
-            L{_BIND_MODE_WRITE} or None to not update
-        @type mode: L{str}
-
-        @param status: The sharing status; L{_BIND_STATUS_INVITED} or
-            L{_BIND_STATUS_ACCEPTED} or L{_BIND_STATUS_DECLINED} or
-            L{_BIND_STATUS_INVALID}  or None to not update
-        @type status: L{str}
-
-        @param summary: The proposed message to go along with the share, which
-            will be used as the default display name, or None to not update
-        @type summary: L{str}
-        """
-        # TODO: raise a nice exception if shareeView is not, in fact, a shared
-        # version of this same L{CommonHomeChild}
-
-        # Build the update column map, skipping parameters that are None or unchanged
-        bind = self._bindSchema
-        columnMap = {}
-        if mode != None and mode != shareeView._bindMode:
-            columnMap[bind.BIND_MODE] = mode
-        if status != None and status != shareeView._bindStatus:
-            columnMap[bind.BIND_STATUS] = status
-        if summary != None and summary != shareeView._bindMessage:
-            columnMap[bind.MESSAGE] = summary
-
-        if columnMap:
-
-            # Count accepted
-            if bind.BIND_STATUS in columnMap:
-                previouslyAcceptedCount = yield shareeView._previousAcceptCount()
-
-            yield self._updateBindColumnsQuery(columnMap).on(
-                self._txn,
-                resourceID=self._resourceID, homeID=shareeView._home._resourceID
-            )
-
-            # Update affected attributes
-            if bind.BIND_MODE in columnMap:
-                shareeView._bindMode = columnMap[bind.BIND_MODE]
-
-            if bind.BIND_STATUS in columnMap:
-                shareeView._bindStatus = columnMap[bind.BIND_STATUS]
-                yield shareeView._changedStatus(previouslyAcceptedCount)
-
-            if bind.MESSAGE in columnMap:
-                shareeView._bindMessage = columnMap[bind.MESSAGE]
-
-            yield shareeView.invalidateQueryCache()
-
-            # Must send notification to ensure cache invalidation occurs
-            yield self.notifyPropertyChanged()
-            yield shareeView.viewerHome().notifyChanged()
-
-
-    def _previousAcceptCount(self):
-        return succeed(1)
-
-
-    @inlineCallbacks
-    def _changedStatus(self, previouslyAcceptedCount):
-        key = self._home._childrenKey(self.isInTrash())
-        if self._bindStatus == _BIND_STATUS_ACCEPTED:
-            yield self._initSyncToken()
-            yield self._initBindRevision()
-            self._home._children[key][self._name] = self
-            self._home._children[key][self._resourceID] = self
-        elif self._bindStatus in (_BIND_STATUS_INVITED, _BIND_STATUS_DECLINED):
-            yield self._deletedSyncToken(sharedRemoval=True)
-            self._home._children[key].pop(self._name, None)
-            self._home._children[key].pop(self._resourceID, None)
-
-
-    @inlineCallbacks
-    def removeShare(self, shareeView):
-        """
-        Remove the shared version of this (owned) L{CommonHomeChild} from the
-        referenced L{CommonHome}.
-
-        @see: L{CommonHomeChild.shareWith}
-
-        @param shareeView: The shared resource being removed.
-
-        @return: a L{Deferred} which will fire with the previous shareUID
-        """
-        key = self._home._childrenKey(self.isInTrash())
-
-        # remove sync tokens
-        shareeHome = shareeView.viewerHome()
-        yield shareeView._deletedSyncToken(sharedRemoval=True)
-        shareeHome._children[key].pop(shareeView._name, None)
-        shareeHome._children[key].pop(shareeView._resourceID, None)
-
-        # Must send notification to ensure cache invalidation occurs
-        yield self.notifyPropertyChanged()
-        yield shareeHome.notifyChanged()
-
-        # delete binds including invites
-        yield self._deleteBindForResourceIDAndHomeID.on(
-            self._txn,
-            resourceID=self._resourceID,
-            homeID=shareeHome._resourceID,
-        )
-
-        yield shareeView.invalidateQueryCache()
-
-
-    @inlineCallbacks
-    def unshare(self):
-        """
-        Unshares a collection, regardless of which "direction" it was shared.
-        """
-        if self.owned():
-            # This collection may be shared to others
-            invites = yield self.sharingInvites()
-            for invite in invites:
-                shareeView = yield self.shareeView(invite.shareeUID)
-                yield self.removeShare(shareeView)
-        else:
-            # This collection is shared to me
-            ownerView = yield self.ownerView()
-            yield ownerView.removeShare(self)
-
-
-    @inlineCallbacks
-    def sharingInvites(self):
-        """
-        Retrieve the list of all L{SharingInvitation}s for this L{CommonHomeChild}, irrespective of mode.
-
-        @return: L{SharingInvitation} objects
-        @rtype: a L{Deferred} which fires with a L{list} of L{SharingInvitation}s.
-        """
-        if not self.owned():
-            returnValue([])
-
-        # get all non-owner (sharee) binds
-        invitedRows = yield self._sharedInvitationBindForResourceID.on(
-            self._txn, resourceID=self._resourceID, homeID=self._home._resourceID
-        )
-
-        result = []
-        for homeUID, homeRID, _ignore_resourceID, resourceName, bindMode, bindStatus, bindMessage in invitedRows:
-            invite = SharingInvitation(
-                resourceName,
-                self.ownerHome().name(),
-                self.ownerHome().id(),
-                homeUID,
-                homeRID,
-                bindMode,
-                bindStatus,
-                bindMessage,
-            )
-            result.append(invite)
-        returnValue(result)
-
-
-    @inlineCallbacks
-    def _initBindRevision(self):
-        yield self.syncToken() # init self._syncTokenRevision if None
-        self._bindRevision = self._syncTokenRevision
-
-        bind = self._bindSchema
-        yield self._updateBindColumnsQuery(
-            {bind.BIND_REVISION : Parameter("revision"), }
-        ).on(
-            self._txn,
-            revision=self._bindRevision,
-            resourceID=self._resourceID,
-            homeID=self.viewerHome()._resourceID,
-        )
-        yield self.invalidateQueryCache()
-
-
-    def sharedResourceType(self):
-        """
-        The sharing resource type. Needs to be overridden by each type of resource that can be shared.
-
-        @return: an identifier for the type of the share.
-        @rtype: C{str}
-        """
-        return ""
-
-
-    def newShareName(self):
-        """
-        Name used when creating a new share. By default this is a UUID.
-        """
-        return str(uuid4())
-
-
-    def owned(self):
-        """
-        @see: L{ICalendar.owned}
-        """
-        return self._bindMode == _BIND_MODE_OWN
-
-
-    def isShared(self):
-        """
-        For an owned collection indicate whether it is shared.
-
-        @return: C{True} if shared, C{False} otherwise
-        @rtype: C{bool}
-        """
-        return self.owned() and self._bindMessage == "shared"
-
-
-    @inlineCallbacks
-    def setShared(self, shared):
-        """
-        Set an owned collection to shared or unshared state. Technically this is not useful as "shared"
-        really means it has invitees, but the current sharing spec supports a notion of a shared collection
-        that has not yet had invitees added. For the time being we will support that option by using a new
-        MESSAGE value to indicate an owned collection that is "shared".
-
-        @param shared: whether or not the owned collection is "shared"
-        @type shared: C{bool}
-        """
-        assert self.owned(), "Cannot change share mode on a shared collection"
-
-        # Only if change is needed
-        newMessage = "shared" if shared else None
-        if self._bindMessage == newMessage:
-            returnValue(None)
-
-        self._bindMessage = newMessage
-
-        bind = self._bindSchema
-        yield Update(
-            {bind.MESSAGE: self._bindMessage},
-            Where=(bind.RESOURCE_ID == Parameter("resourceID")).And(
-                bind.HOME_RESOURCE_ID == Parameter("homeID")),
-        ).on(self._txn, resourceID=self._resourceID, homeID=self.viewerHome()._resourceID)
-
-        yield self.invalidateQueryCache()
-        yield self.notifyPropertyChanged()
-
-
-    def direct(self):
-        """
-        Is this a "direct" share?
-
-        @return: a boolean indicating whether it's direct.
-        """
-        return self._bindMode == _BIND_MODE_DIRECT
-
-
-    def indirect(self):
-        """
-        Is this an "indirect" share?
-
-        @return: a boolean indicating whether it's indirect.
-        """
-        return self._bindMode == _BIND_MODE_INDIRECT
-
-
-    def shareUID(self):
-        """
-        @see: L{ICalendar.shareUID}
-        """
-        return self.name()
-
-
-    def shareMode(self):
-        """
-        @see: L{ICalendar.shareMode}
-        """
-        return self._bindMode
-
-
-    def _effectiveShareMode(self, bindMode, viewerUID, txn):
-        """
-        Get the effective share mode without a calendar object
-        """
-        return bindMode
-
-
-    def effectiveShareMode(self):
-        """
-        @see: L{ICalendar.shareMode}
-        """
-        return self._bindMode
-
-
-    def shareName(self):
-        """
-        This is a path-like name for the resource within the home being shared. For object resource
-        shares this will be a combination of the L{CommonHomeChild} name and the L{CommonObjectResource}
-        name. Otherwise it is just the L{CommonHomeChild} name. This is needed to expose a value to the
-        app-layer such that it can construct a URI for the actual WebDAV resource being shared.
-        """
-        name = self.name()
-        if self.sharedResourceType() == "group":
-            name = self.parentCollection().name() + "/" + name
-        return name
-
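A standalone restatement of the naming rule above as a tiny, runnable sketch; the real code derives these values from live collection objects.

    def shareNameFor(childName, resourceType, parentName=None):
        # Group shares expose "<parent>/<group>"; every other share is just the child name.
        return "%s/%s" % (parentName, childName) if resourceType == "group" else childName

    assert shareNameFor("mygroup", "group", parentName="addressbook") == "addressbook/mygroup"
    assert shareNameFor("calendar", "calendar") == "calendar"
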
-
-    def shareStatus(self):
-        """
-        @see: L{ICalendar.shareStatus}
-        """
-        return self._bindStatus
-
-
-    def accepted(self):
-        """
-        @see: L{ICalendar.shareStatus}
-        """
-        return self._bindStatus == _BIND_STATUS_ACCEPTED
-
-
-    def shareMessage(self):
-        """
-        @see: L{ICalendar.shareMessage}
-        """
-        return self._bindMessage
-
-
-    def getInviteCopyProperties(self):
-        """
-        Get a dictionary of property name/values (as strings) for properties that are shadowable and
-        need to be copied to a sharee's collection when an external (cross-pod) share is created.
-        Sub-classes should override to expose the properties they care about.
-        """
-        return {}
-
-
-    def setInviteCopyProperties(self, props):
-        """
-        Copy a set of shadowable properties (as name/value strings) onto this shared resource when
-        a cross-pod invite is processed. Sub-classes should override to expose the properties they
-        care about.
-        """
-        pass
-
-
-    @classmethod
-    def metadataColumns(cls):
-        """
-        Return a list of column names for retrieval of metadata. This allows
-        different child classes to have their own type specific data, but still make use of the
-        common base logic.
-        """
-
-        # Common behavior is to have created and modified
-
-        return (
-            cls._homeChildMetaDataSchema.CREATED,
-            cls._homeChildMetaDataSchema.MODIFIED,
-        )
-
-
-    @classmethod
-    def metadataAttributes(cls):
-        """
-        Return a list of attribute names for retrieval of metadata. This allows
-        different child classes to have their own type specific data, but still make use of the
-        common base logic.
-        """
-
-        # Common behavior is to have created and modified
-
-        return (
-            "_created",
-            "_modified",
-        )
-
-
-    @classmethod
-    def bindColumns(cls):
-        """
-        Return a list of column names for retrieval during creation. This allows
-        different child classes to have their own type specific data, but still make use of the
-        common base logic.
-        """
-
-        return (
-            cls._bindSchema.BIND_MODE,
-            cls._bindSchema.HOME_RESOURCE_ID,
-            cls._bindSchema.RESOURCE_ID,
-            cls._bindSchema.EXTERNAL_ID,
-            cls._bindSchema.RESOURCE_NAME,
-            cls._bindSchema.BIND_STATUS,
-            cls._bindSchema.BIND_REVISION,
-            cls._bindSchema.MESSAGE
-        )
-
-
-    @classmethod
-    def bindAttributes(cls):
-        """
-        Return a list of attribute names for retrieval during creation. This allows
-        different child classes to have their own type specific data, but still make use of the
-        common base logic.
-        """
-
-        return (
-            "_bindMode",
-            "_homeResourceID",
-            "_resourceID",
-            "_externalID",
-            "_name",
-            "_bindStatus",
-            "_bindRevision",
-            "_bindMessage",
-        )
-
-    bindColumnCount = 8
-
-    @classmethod
-    def additionalBindColumns(cls):
-        """
-        Return a list of column names for retrieval during creation. This allows
-        different child classes to have their own type specific data, but still make use of the
-        common base logic.
-        """
-
-        return ()
-
-
-    @classmethod
-    def additionalBindAttributes(cls):
-        """
-        Return a list of attribute names for retrieval during creation. This allows
-        different child classes to have their own type specific data, but still make use of the
-        common base logic.
-        """
-
-        return ()
-
-
-    @classproperty
-    def _childrenAndMetadataForHomeID(cls):
-        bind = cls._bindSchema
-        child = cls._homeChildSchema
-        childMetaData = cls._homeChildMetaDataSchema
-
-        columns = cls.bindColumns() + cls.additionalBindColumns() + cls.metadataColumns()
-        return Select(
-            columns,
-            From=child.join(
-                bind, child.RESOURCE_ID == bind.RESOURCE_ID,
-                'left outer').join(
-                    childMetaData, childMetaData.RESOURCE_ID == bind.RESOURCE_ID,
-                    'left outer'),
-            Where=(bind.HOME_RESOURCE_ID == Parameter("homeID")).And(
-                bind.BIND_STATUS == _BIND_STATUS_ACCEPTED)
-        )
-
-
-    @classmethod
-    def _revisionsForResourceIDs(cls, resourceIDs):
-        rev = cls._revisionsSchema
-        return Select(
-            [rev.RESOURCE_ID, Max(rev.REVISION)],
-            From=rev,
-            Where=rev.RESOURCE_ID.In(Parameter("resourceIDs", len(resourceIDs))).And(
-                (rev.RESOURCE_NAME != None).Or(rev.DELETED == False)),
-            GroupBy=rev.RESOURCE_ID
-        )
-
-
-    @inlineCallbacks
-    def invalidateQueryCache(self):
-        queryCacher = self._txn._queryCacher
-        if queryCacher is not None:
-            yield queryCacher.invalidateAfterCommit(self._txn, queryCacher.keyForHomeChildMetaData(self._resourceID))
-            yield queryCacher.invalidateAfterCommit(self._txn, queryCacher.keyForObjectWithName(self._home._resourceID, self._name))
-            yield queryCacher.invalidateAfterCommit(self._txn, queryCacher.keyForObjectWithResourceID(self._home._resourceID, self._resourceID))
-            yield queryCacher.invalidateAfterCommit(self._txn, queryCacher.keyForObjectWithExternalID(self._home._resourceID, self._externalID))
-
-
-
 class CommonHomeChild(FancyEqMixin, Memoizable, _SharedSyncLogic, HomeChildBase, SharingMixIn):
     """
     Common ancestor class of AddressBooks and Calendars.
@@ -5593,6 +2905,11 @@
     )
 
     _externalClass = None
+    _homeRecordClass = None
+    _metadataRecordClass = None
+    _bindRecordClass = None
+    _bindHomeIDAttributeName = None
+    _bindResourceIDAttributeName = None
     _objectResourceClass = None
 
     _bindSchema = None
@@ -5602,7 +2919,7 @@
     _revisionsSchema = None
     _objectSchema = None
 
-    _childType = None
+    _childType = _CHILD_TYPE_NORMAL
 
 
     @classmethod
@@ -5628,7 +2945,7 @@
         @rtype: L{CommonHomeChild}
         """
 
-        bindMode, _ignore_homeID, resourceID, externalID, name, bindStatus, bindRevision, bindMessage = bindData
+        _ignore_homeID, resourceID, name, bindMode, bindStatus, bindRevision, bindUID, bindMessage = bindData
 
         if ownerHome is None:
             if bindMode == _BIND_MODE_OWN:
@@ -5639,7 +2956,7 @@
         else:
             ownerName = None
 
-        c = cls._externalClass if ownerHome.externalClass() else cls
+        c = cls._externalClass if ownerHome and ownerHome.externalClass() else cls
         child = c(
             home=home,
             name=name,
@@ -5650,7 +2967,7 @@
             message=bindMessage,
             ownerHome=ownerHome,
             ownerName=ownerName,
-            externalID=externalID,
+            bindUID=bindUID,
         )
 
         if additionalBindData:
@@ -5660,10 +2977,12 @@
         if metadataData:
             for attr, value in zip(child.metadataAttributes(), metadataData):
                 setattr(child, attr, value)
+            child._created = parseSQLTimestamp(child._created)
+            child._modified = parseSQLTimestamp(child._modified)
 
         # We have to re-adjust the property store object to account for possible shared
         # collections as previously we loaded them all as if they were owned
-        if propstore and bindMode != _BIND_MODE_OWN:
+        if ownerHome and propstore and bindMode != _BIND_MODE_OWN:
             propstore._setDefaultUserUID(ownerHome.uid())
         yield child._loadPropertyStore(propstore)
 
@@ -5672,10 +2991,10 @@
 
     @classmethod
     @inlineCallbacks
-    def _getDBData(cls, home, name, resourceID, externalID):
+    def _getDBData(cls, home, name, resourceID, bindUID):
         """
         Given a set of identifying information, load the data rows for the object. Only one of
-        L{name}, L{resourceID} or L{externalID} is specified - others are C{None}.
+        L{name}, L{resourceID} or L{bindUID} is specified - others are C{None}.
 
         @param home: the parent home object
         @type home: L{CommonHome}
@@ -5683,8 +3002,8 @@
         @type name: C{str}
         @param resourceID: the resource ID
         @type resourceID: C{int}
-        @param externalID: the resource ID of the external (cross-pod) referenced item
-        @type externalID: C{int}
+        @param bindUID: the unique ID of the external (cross-pod) referenced item
+        @type bindUID: C{str}
         """
 
         # Get the bind row data
@@ -5697,8 +3016,8 @@
                 cacheKey = queryCacher.keyForObjectWithName(home._resourceID, name)
             elif resourceID:
                 cacheKey = queryCacher.keyForObjectWithResourceID(home._resourceID, resourceID)
-            elif externalID:
-                cacheKey = queryCacher.keyForObjectWithExternalID(home._resourceID, externalID)
+            elif bindUID:
+                cacheKey = queryCacher.keyForObjectWithBindUID(home._resourceID, bindUID)
             row = yield queryCacher.get(cacheKey)
 
         if row is None:
@@ -5707,8 +3026,8 @@
                 rows = yield cls._bindForNameAndHomeID.on(home._txn, name=name, homeID=home._resourceID)
             elif resourceID:
                 rows = yield cls._bindForResourceIDAndHomeID.on(home._txn, resourceID=resourceID, homeID=home._resourceID)
-            elif externalID:
-                rows = yield cls._bindForExternalIDAndHomeID.on(home._txn, externalID=externalID, homeID=home._resourceID)
+            elif bindUID:
+                rows = yield cls._bindForBindUIDAndHomeID.on(home._txn, bindUID=bindUID, homeID=home._resourceID)
             row = rows[0] if rows else None
 
         if not row:
@@ -5718,7 +3037,7 @@
             # Cache the result
             queryCacher.setAfterCommit(home._txn, queryCacher.keyForObjectWithName(home._resourceID, name), row)
             queryCacher.setAfterCommit(home._txn, queryCacher.keyForObjectWithResourceID(home._resourceID, resourceID), row)
-            queryCacher.setAfterCommit(home._txn, queryCacher.keyForObjectWithExternalID(home._resourceID, externalID), row)
+            queryCacher.setAfterCommit(home._txn, queryCacher.keyForObjectWithBindUID(home._resourceID, bindUID), row)
 
         bindData = row[:cls.bindColumnCount]
         additionalBindData = row[cls.bindColumnCount:cls.bindColumnCount + len(cls.additionalBindColumns())]
@@ -5741,15 +3060,15 @@
         returnValue((bindData, additionalBindData, metadataData,))
 
 
-    def __init__(self, home, name, resourceID, mode, status, revision=0, message=None, ownerHome=None, ownerName=None, externalID=None):
+    def __init__(self, home, name, resourceID, mode, status, revision=0, message=None, ownerHome=None, ownerName=None, bindUID=None):
 
         self._home = home
         self._name = name
         self._resourceID = resourceID
-        self._externalID = externalID
         self._bindMode = mode
         self._bindStatus = status
         self._bindRevision = revision
+        self._bindUID = bindUID
         self._bindMessage = message
         self._ownerHome = home if ownerHome is None else ownerHome
         self._ownerName = name if ownerName is None else ownerName
@@ -5805,9 +3124,10 @@
         # Load from the main table first
         dataRows = (yield cls._childrenAndMetadataForHomeID.on(home._txn, homeID=home._resourceID))
 
+        resourceID_index = cls.bindColumns().index(cls._bindSchema.RESOURCE_ID)
         if dataRows:
             # Get property stores
-            childResourceIDs = [dataRow[2] for dataRow in dataRows]
+            childResourceIDs = [dataRow[resourceID_index] for dataRow in dataRows]
 
             propertyStores = yield PropertyStore.forMultipleResourcesWithResourceIDs(
                 home.uid(), None, None, home._txn, childResourceIDs
@@ -5820,7 +3140,7 @@
         # Create the actual objects merging in properties
         for dataRow in dataRows:
             bindData = dataRow[:cls.bindColumnCount]
-            resourceID = bindData[cls.bindColumns().index(cls._bindSchema.RESOURCE_ID)]
+            resourceID = bindData[resourceID_index]
             additionalBindData = dataRow[cls.bindColumnCount:cls.bindColumnCount + len(cls.additionalBindColumns())]
             metadataData = dataRow[cls.bindColumnCount + len(cls.additionalBindColumns()):]
             propstore = propertyStores.get(resourceID, None)
@@ -5843,16 +3163,13 @@
 
 
     @classmethod
-    def objectWithExternalID(cls, home, externalID, accepted=True, onlyInTrash=False):
-        return cls.objectWith(home, externalID=externalID, accepted=accepted, onlyInTrash=onlyInTrash)
+    def objectWithBindUID(cls, home, bindUID, accepted=True, onlyInTrash=False):
+        return cls.objectWith(home, bindUID=bindUID, accepted=accepted, onlyInTrash=onlyInTrash)
 
 
     @classmethod
     @inlineCallbacks
-    def objectWith(
-        cls, home, name=None, resourceID=None, externalID=None, accepted=True,
-        onlyInTrash=False
-    ):
+    def objectWith(cls, home, name=None, resourceID=None, bindUID=None, accepted=True, onlyInTrash=False):
         """
         Create the object using one of the specified arguments as the key to load it. One
         and only one of the keyword arguments must be set.
@@ -5872,7 +3189,7 @@
         @rtype: C{CommonHomeChild}
         """
 
-        dbData = yield cls._getDBData(home, name, resourceID, externalID)
+        dbData = yield cls._getDBData(home, name, resourceID, bindUID)
         if dbData is None:
             returnValue(None)
         bindData, additionalBindData, metadataData = dbData
@@ -5904,9 +3221,7 @@
         """
         child = cls._homeChildSchema
         return Insert(
-            {
-                child.RESOURCE_ID: schema.RESOURCE_ID_SEQ,
-            },
+            {child.RESOURCE_ID: schema.RESOURCE_ID_SEQ},
             Return=(child.RESOURCE_ID)
         )
 
@@ -5928,7 +3243,7 @@
 
     @classmethod
     @inlineCallbacks
-    def create(cls, home, name, externalID=None):
+    def create(cls, home, name, bindUID=None):
 
         if (yield cls._bindForNameAndHomeID.on(home._txn, name=name, homeID=home._resourceID)):
             raise HomeChildNameAlreadyExistsError(name)
@@ -5940,14 +3255,13 @@
         resourceID = (yield cls._insertHomeChild.on(home._txn))[0][0]
 
         # Initialize this object
-        _created, _modified = (
-            yield cls._insertHomeChildMetaData.on(
-                home._txn, resourceID=resourceID, childType=cls._childType
-            )
-        )[0]
+        yield cls._insertHomeChildMetaData.on(
+            home._txn, resourceID=resourceID, childType=cls._childType,
+        )
+
         # Bind table needs entry
         yield cls._bindInsertQuery.on(
-            home._txn, homeID=home._resourceID, resourceID=resourceID, externalID=externalID,
+            home._txn, homeID=home._resourceID, resourceID=resourceID, bindUID=bindUID,
             name=name, mode=_BIND_MODE_OWN, bindStatus=_BIND_STATUS_ACCEPTED,
             message=None,
         )
@@ -5984,15 +3298,6 @@
         return self._resourceID
 
 
-    def external_id(self):
-        """
-        Retrieve the external store identifier for this collection.
-
-        @return: a string.
-        """
-        return self._externalID
-
-
     def external(self):
         """
         Is this an external home.
@@ -6011,7 +3316,7 @@
         return self.ownerHome().externalClass()
 
 
-    def externalize(self):
+    def serialize(self):
         """
         Create a dictionary mapping key attributes so this object can be sent over a cross-pod call
         and reconstituted at the other end. Note that the other end may have a different schema so
@@ -6021,14 +3326,16 @@
         data["bindData"] = dict([(attr[1:], getattr(self, attr, None)) for attr in self.bindAttributes()])
         data["additionalBindData"] = dict([(attr[1:], getattr(self, attr, None)) for attr in self.additionalBindAttributes()])
         data["metadataData"] = dict([(attr[1:], getattr(self, attr, None)) for attr in self.metadataAttributes()])
+        data["metadataData"]["created"] = data["metadataData"]["created"].isoformat(" ")
+        data["metadataData"]["modified"] = data["metadataData"]["modified"].isoformat(" ")
         return data
 
 
     @classmethod
     @inlineCallbacks
-    def internalize(cls, parent, mapping):
+    def deserialize(cls, parent, mapping):
         """
-        Given a mapping generated by L{externalize}, convert the values into an array of database
+        Given a mapping generated by L{serialize}, convert the values into an array of database
         like items that conforms to the ordering of L{_allColumns} so it can be fed into L{makeClass}.
         Note that there may be a schema mismatch with the external data, so treat missing items as
         C{None} and ignore extra items.
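
For orientation, here is a stand-alone sketch of the round trip these two methods describe: attribute names are the instance attributes with the leading underscore stripped, and the created/modified datetimes are flattened to ISO strings so the mapping is plain data suitable for the cross-pod call. The helper names below are illustrative only; the real attribute sets come from bindAttributes(), additionalBindAttributes() and metadataAttributes().

from datetime import datetime

def serialize_attrs(obj, attrs):
    # "_created" -> "created"; datetimes become "YYYY-MM-DD HH:MM:SS" strings
    data = dict((attr[1:], getattr(obj, attr, None)) for attr in attrs)
    for key in ("created", "modified"):
        if isinstance(data.get(key), datetime):
            data[key] = data[key].isoformat(" ")
    return data

def deserialize_attrs(mapping, attrs):
    # missing keys become None and unknown keys are ignored, since the schema
    # on the other pod may not match exactly
    return [mapping.get(attr[1:]) for attr in attrs]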
@@ -6248,25 +3555,29 @@
                 yield child.fromTrash()
 
 
-
     @classproperty
     def _selectIsInTrashQuery(cls):
         table = cls._homeChildMetaDataSchema
         return Select((table.IS_IN_TRASH, table.TRASHED), From=table, Where=table.RESOURCE_ID == Parameter("resourceID"))
 
 
-
     def isInTrash(self):
         return getattr(self, "_isInTrash", False)
 
 
-
     def whenTrashed(self):
         if self._trashed is None:
             return None
         return parseSQLTimestamp(self._trashed)
 
 
+    def purge(self):
+        """
+        Do a "silent" removal of this object resource.
+        """
+        return self.reallyRemove()
+
+
     def ownerHome(self):
         """
         @see: L{ICalendar.ownerCalendarHome}
@@ -6822,11 +4133,11 @@
 
 
     def created(self):
-        return datetimeMktime(parseSQLTimestamp(self._created)) if self._created else None
+        return datetimeMktime(self._created) if self._created else None
 
 
     def modified(self):
-        return datetimeMktime(parseSQLTimestamp(self._modified)) if self._modified else None
+        return datetimeMktime(self._modified) if self._modified else None
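
The two accessors above now assume self._created and self._modified already hold datetime values: the SQL timestamp text is parsed once when the row is loaded (see the parseSQLTimestamp calls added further down) rather than on every call. As a rough stdlib approximation of the two helpers involved, under the assumption that the column value looks like "2015-03-10 20:42:34":

import calendar
from datetime import datetime

def parse_sql_timestamp(text):
    # assumed column format; fractional seconds, if any, are dropped
    return datetime.strptime(text[:19], "%Y-%m-%d %H:%M:%S")

def datetime_mktime(dt):
    # datetime (treated as UTC) -> seconds since the epoch
    return calendar.timegm(dt.timetuple())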
 
 
     def addNotifier(self, factory_name, notifier):
@@ -6942,11 +4253,11 @@
             returnValue(result)
 
         try:
-            self._modified = (
+            self._modified = parseSQLTimestamp((
                 yield self._txn.subtransaction(
                     _bumpModified, retries=0, failureOK=True
                 )
-            )[0][0]
+            )[0][0])
 
             queryCacher = self._txn._queryCacher
             if queryCacher is not None:
@@ -7011,6 +4322,8 @@
 
         for attr, value in zip(child._rowAttributes(), objectData):
             setattr(child, attr, value)
+        child._created = parseSQLTimestamp(child._created)
+        child._modified = parseSQLTimestamp(child._modified)
 
         yield child._loadPropertyStore(propstore)
 
@@ -7035,8 +4348,8 @@
         """
 
         rows = None
+        parentID = parent._resourceID
         if name:
-            parentID = parent._resourceID
             rows = yield cls._allColumnsWithParentAndName.on(
                 parent._txn,
                 name=name,
@@ -7046,13 +4359,13 @@
             rows = yield cls._allColumnsWithParentAndUID.on(
                 parent._txn,
                 uid=uid,
-                parentID=parent._resourceID
+                parentID=parentID
             )
         elif resourceID:
             rows = yield cls._allColumnsWithParentAndID.on(
                 parent._txn,
                 resourceID=resourceID,
-                parentID=parent._resourceID
+                parentID=parentID
             )
 
         returnValue(rows[0] if rows else None)
@@ -7407,20 +4720,23 @@
         )
 
 
-    def externalize(self):
+    def serialize(self):
         """
         Create a dictionary mapping key attributes so this object can be sent over a cross-pod call
         and reconstituted at the other end. Note that the other end may have a different schema so
         the attributes may not match exactly and will need to be processed accordingly.
         """
-        return dict([(attr[1:], getattr(self, attr, None)) for attr in itertools.chain(self._rowAttributes(), self._otherSerializedAttributes())])
+        data = dict([(attr[1:], getattr(self, attr, None)) for attr in itertools.chain(self._rowAttributes(), self._otherSerializedAttributes())])
+        data["created"] = data["created"].isoformat(" ")
+        data["modified"] = data["modified"].isoformat(" ")
+        return data
 
 
     @classmethod
     @inlineCallbacks
-    def internalize(cls, parent, mapping):
+    def deserialize(cls, parent, mapping):
         """
-        Given a mapping generated by L{externalize}, convert the values into an array of database
+        Given a mapping generated by L{serialize}, convert the values into an array of database
         like items that conforms to the ordering of L{_allColumns} so it can be fed into L{makeClass}.
         Note that there may be a schema mismatch with the external data, so treat missing items as
         C{None} and ignore extra items.
@@ -7589,20 +4905,18 @@
         raise NotImplementedError
 
 
-    @inlineCallbacks
     def remove(self, options=None):
         """
         Move this object to the trash when the trash collection is enabled; otherwise remove it outright.
         """
 
-
         if config.EnableTrashCollection:
             if self._parentCollection.isTrash():
                 raise AlreadyInTrashError
             else:
-                yield self.toTrash()
+                return self.toTrash()
         else:
-            yield self.reallyRemove(options=options)
+            return self.reallyRemove(options=options)
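
Put differently: remove() honours the trash (a soft delete that fromTrash() can undo), while purge() and reallyRemove() bypass it. A stand-alone restatement of that decision, with the store objects reduced to stand-ins:

def remove(resource, trash_enabled):
    # trash_enabled stands in for config.EnableTrashCollection
    if trash_enabled:
        if resource.parent_is_trash():
            raise RuntimeError("already in the trash")  # AlreadyInTrashError in the store
        return resource.to_trash()       # soft delete, restorable
    return resource.really_remove()      # hard delete, bypassing the trash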
 
 
     @inlineCallbacks
@@ -7610,7 +4924,6 @@
         """
         Remove, bypassing the trash
         """
-
         yield self._deleteQuery.on(self._txn, NoSuchObjectResourceError,
                                    resourceID=self._resourceID)
         yield self.properties()._removeResource()
@@ -7655,7 +4968,7 @@
 
     @inlineCallbacks
     def originalCollection(self):
-        originalCollectionID, whenTrashed = (
+        originalCollectionID, _ignore_whenTrashed = (
             yield self._selectTrashDataQuery.on(
                 self._txn, resourceID=self._resourceID
             )
@@ -7664,7 +4977,6 @@
         returnValue(originalCollection)
 
 
-
     @inlineCallbacks
     def toTrash(self):
         originalCollection = self._parentCollection._resourceID
@@ -7714,6 +5026,13 @@
         )
 
 
+    def purge(self):
+        """
+        Do a "silent" removal of this object resource.
+        """
+        return self.reallyRemove()
+
+
     def removeNotifyCategory(self):
         """
         Indicates what category to use when determining the priority of push
@@ -7748,11 +5067,11 @@
 
 
     def created(self):
-        return datetimeMktime(parseSQLTimestamp(self._created))
+        return datetimeMktime(self._created)
 
 
     def modified(self):
-        return datetimeMktime(parseSQLTimestamp(self._modified))
+        return datetimeMktime(self._modified)
 
 
     @classproperty
@@ -7780,1045 +5099,3 @@
                 raise ConcurrentModification()
         else:
             returnValue(self._textData)
-
-
-
-class NotificationCollection(FancyEqMixin, _SharedSyncLogic):
-    log = Logger()
-
-    implements(INotificationCollection)
-
-    compareAttributes = (
-        "_uid",
-        "_resourceID",
-    )
-
-    _revisionsSchema = schema.NOTIFICATION_OBJECT_REVISIONS
-    _homeSchema = schema.NOTIFICATION_HOME
-
-
-    def __init__(self, txn, uid, resourceID):
-
-        self._txn = txn
-        self._uid = uid
-        self._resourceID = resourceID
-        self._dataVersion = None
-        self._notifications = {}
-        self._notificationNames = None
-        self._syncTokenRevision = None
-
-        # Make sure we have push notifications setup to push on this collection
-        # as well as the home it is in
-        self._notifiers = dict([(factory_name, factory.newNotifier(self),) for factory_name, factory in txn._notifierFactories.items()])
-
-    _resourceIDFromUIDQuery = Select(
-        [_homeSchema.RESOURCE_ID], From=_homeSchema,
-        Where=_homeSchema.OWNER_UID == Parameter("uid"))
-
-    _UIDFromResourceIDQuery = Select(
-        [_homeSchema.OWNER_UID], From=_homeSchema,
-        Where=_homeSchema.RESOURCE_ID == Parameter("rid"))
-
-    _provisionNewNotificationsQuery = Insert(
-        {_homeSchema.OWNER_UID: Parameter("uid")},
-        Return=_homeSchema.RESOURCE_ID
-    )
-
-
-    @property
-    def _home(self):
-        """
-        L{NotificationCollection} serves as its own C{_home} for the purposes of
-        working with L{_SharedSyncLogic}.
-        """
-        return self
-
-
-    @classmethod
-    @inlineCallbacks
-    def notificationsWithUID(cls, txn, uid, create):
-        """
-        @param uid: I'm going to assume uid is utf-8 encoded bytes
-        """
-        rows = yield cls._resourceIDFromUIDQuery.on(txn, uid=uid)
-
-        if rows:
-            resourceID = rows[0][0]
-            created = False
-        elif create:
-            # Determine if the user is local or external
-            record = yield txn.directoryService().recordWithUID(uid.decode("utf-8"))
-            if record is None:
-                raise DirectoryRecordNotFoundError("Cannot create home for UID since no directory record exists: {}".format(uid))
-
-            state = _HOME_STATUS_NORMAL if record.thisServer() else _HOME_STATUS_EXTERNAL
-            if state == _HOME_STATUS_EXTERNAL:
-                raise RecordNotAllowedError("Cannot store notifications for external user: {}".format(uid))
-
-            # Use savepoint so we can do a partial rollback if there is a race
-            # condition where this row has already been inserted
-            savepoint = SavepointAction("notificationsWithUID")
-            yield savepoint.acquire(txn)
-
-            try:
-                resourceID = str((
-                    yield cls._provisionNewNotificationsQuery.on(txn, uid=uid)
-                )[0][0])
-            except Exception:
-                # FIXME: Really want to trap the pg.DatabaseError but in a non-
-                # DB specific manner
-                yield savepoint.rollback(txn)
-
-                # Retry the query - row may exist now, if not re-raise
-                rows = yield cls._resourceIDFromUIDQuery.on(txn, uid=uid)
-                if rows:
-                    resourceID = rows[0][0]
-                    created = False
-                else:
-                    raise
-            else:
-                created = True
-                yield savepoint.release(txn)
-        else:
-            returnValue(None)
-        collection = cls(txn, uid, resourceID)
-        yield collection._loadPropertyStore()
-        if created:
-            yield collection._initSyncToken()
-            yield collection.notifyChanged()
-        returnValue(collection)
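
The savepoint dance above (the same pattern appears in groupByUID in the new sql_directory.py below) is worth calling out: the INSERT is wrapped in a savepoint so that a unique-constraint race with another transaction can be rolled back and retried as a plain SELECT without aborting the whole transaction. A condensed sketch of the pattern, with the two DAL queries passed in as placeholders:

from twext.enterprise.dal.syntax import SavepointAction
from twisted.internet.defer import inlineCallbacks, returnValue

@inlineCallbacks
def insertOrLookup(txn, provisionQuery, lookupQuery, uid):
    savepoint = SavepointAction("insertOrLookup")
    yield savepoint.acquire(txn)
    try:
        resourceID = (yield provisionQuery.on(txn, uid=uid))[0][0]
    except Exception:
        # another transaction may have inserted the row first: undo only the
        # savepoint, then re-read; if the row still is not there, re-raise
        yield savepoint.rollback(txn)
        rows = yield lookupQuery.on(txn, uid=uid)
        if not rows:
            raise
        resourceID = rows[0][0]
    else:
        yield savepoint.release(txn)
    returnValue(resourceID)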
-
-
-    @classmethod
-    @inlineCallbacks
-    def notificationsWithResourceID(cls, txn, rid):
-        rows = yield cls._UIDFromResourceIDQuery.on(txn, rid=rid)
-
-        if rows:
-            uid = rows[0][0]
-            result = (yield cls.notificationsWithUID(txn, uid, create=False))
-            returnValue(result)
-        else:
-            returnValue(None)
-
-
-    @inlineCallbacks
-    def _loadPropertyStore(self):
-        self._propertyStore = yield PropertyStore.load(
-            self._uid,
-            self._uid,
-            None,
-            self._txn,
-            self._resourceID,
-            notifyCallback=self.notifyChanged
-        )
-
-
-    def __repr__(self):
-        return "<%s: %s>" % (self.__class__.__name__, self._resourceID)
-
-
-    def id(self):
-        """
-        Retrieve the store identifier for this collection.
-
-        @return: store identifier.
-        @rtype: C{int}
-        """
-        return self._resourceID
-
-
-    @classproperty
-    def _dataVersionQuery(cls):
-        nh = cls._homeSchema
-        return Select(
-            [nh.DATAVERSION], From=nh,
-            Where=nh.RESOURCE_ID == Parameter("resourceID")
-        )
-
-
-    @inlineCallbacks
-    def dataVersion(self):
-        if self._dataVersion is None:
-            self._dataVersion = (yield self._dataVersionQuery.on(
-                self._txn, resourceID=self._resourceID))[0][0]
-        returnValue(self._dataVersion)
-
-
-    def name(self):
-        return "notification"
-
-
-    def uid(self):
-        return self._uid
-
-
-    def owned(self):
-        return True
-
-
-    def ownerHome(self):
-        return self._home
-
-
-    def viewerHome(self):
-        return self._home
-
-
-    @inlineCallbacks
-    def notificationObjects(self):
-        results = (yield NotificationObject.loadAllObjects(self))
-        for result in results:
-            self._notifications[result.uid()] = result
-        self._notificationNames = sorted([result.name() for result in results])
-        returnValue(results)
-
-    _notificationUIDsForHomeQuery = Select(
-        [schema.NOTIFICATION.NOTIFICATION_UID], From=schema.NOTIFICATION,
-        Where=schema.NOTIFICATION.NOTIFICATION_HOME_RESOURCE_ID ==
-        Parameter("resourceID"))
-
-
-    @inlineCallbacks
-    def listNotificationObjects(self):
-        if self._notificationNames is None:
-            rows = yield self._notificationUIDsForHomeQuery.on(
-                self._txn, resourceID=self._resourceID)
-            self._notificationNames = sorted([row[0] for row in rows])
-        returnValue(self._notificationNames)
-
-
-    # used by _SharedSyncLogic.resourceNamesSinceRevision()
-    def listObjectResources(self):
-        return self.listNotificationObjects()
-
-
-    def _nameToUID(self, name):
-        """
-        Based on the file-backed implementation, the 'name' is just uid +
-        ".xml".
-        """
-        return name.rsplit(".", 1)[0]
-
-
-    def notificationObjectWithName(self, name):
-        return self.notificationObjectWithUID(self._nameToUID(name))
-
-
-    @memoizedKey("uid", "_notifications")
-    @inlineCallbacks
-    def notificationObjectWithUID(self, uid):
-        """
-        Create an empty notification object first then have it initialize itself
-        from the store.
-        """
-        no = NotificationObject(self, uid)
-        no = (yield no.initFromStore())
-        returnValue(no)
-
-
-    @inlineCallbacks
-    def writeNotificationObject(self, uid, notificationtype, notificationdata):
-
-        inserting = False
-        notificationObject = yield self.notificationObjectWithUID(uid)
-        if notificationObject is None:
-            notificationObject = NotificationObject(self, uid)
-            inserting = True
-        yield notificationObject.setData(uid, notificationtype, notificationdata, inserting=inserting)
-        if inserting:
-            yield self._insertRevision("%s.xml" % (uid,))
-            if self._notificationNames is not None:
-                self._notificationNames.append(notificationObject.uid())
-        else:
-            yield self._updateRevision("%s.xml" % (uid,))
-        yield self.notifyChanged()
-
-
-    def removeNotificationObjectWithName(self, name):
-        if self._notificationNames is not None:
-            self._notificationNames.remove(self._nameToUID(name))
-        return self.removeNotificationObjectWithUID(self._nameToUID(name))
-
-    _removeByUIDQuery = Delete(
-        From=schema.NOTIFICATION,
-        Where=(schema.NOTIFICATION.NOTIFICATION_UID == Parameter("uid")).And(
-            schema.NOTIFICATION.NOTIFICATION_HOME_RESOURCE_ID
-            == Parameter("resourceID")))
-
-
-    @inlineCallbacks
-    def removeNotificationObjectWithUID(self, uid):
-        yield self._removeByUIDQuery.on(
-            self._txn, uid=uid, resourceID=self._resourceID)
-        self._notifications.pop(uid, None)
-        yield self._deleteRevision("%s.xml" % (uid,))
-        yield self.notifyChanged()
-
-    _initSyncTokenQuery = Insert(
-        {
-            _revisionsSchema.HOME_RESOURCE_ID : Parameter("resourceID"),
-            _revisionsSchema.RESOURCE_NAME    : None,
-            _revisionsSchema.REVISION         : schema.REVISION_SEQ,
-            _revisionsSchema.DELETED          : False
-        }, Return=_revisionsSchema.REVISION
-    )
-
-
-    @inlineCallbacks
-    def _initSyncToken(self):
-        self._syncTokenRevision = (yield self._initSyncTokenQuery.on(
-            self._txn, resourceID=self._resourceID))[0][0]
-
-    _syncTokenQuery = Select(
-        [Max(_revisionsSchema.REVISION)], From=_revisionsSchema,
-        Where=_revisionsSchema.HOME_RESOURCE_ID == Parameter("resourceID")
-    )
-
-
-    @inlineCallbacks
-    def syncToken(self):
-        if self._syncTokenRevision is None:
-            self._syncTokenRevision = (
-                yield self._syncTokenQuery.on(
-                    self._txn, resourceID=self._resourceID)
-            )[0][0]
-            if self._syncTokenRevision is None:
-                self._syncTokenRevision = int((yield self._txn.calendarserverValue("MIN-VALID-REVISION")))
-        returnValue("%s_%s" % (self._resourceID, self._syncTokenRevision))
-
-
-    def properties(self):
-        return self._propertyStore
-
-
-    def addNotifier(self, factory_name, notifier):
-        if self._notifiers is None:
-            self._notifiers = {}
-        self._notifiers[factory_name] = notifier
-
-
-    def getNotifier(self, factory_name):
-        return self._notifiers.get(factory_name)
-
-
-    def notifierID(self):
-        return (self._txn._homeClass[self._txn._primaryHomeType]._notifierPrefix, "%s/notification" % (self.ownerHome().uid(),),)
-
-
-    def parentNotifierID(self):
-        return (self._txn._homeClass[self._txn._primaryHomeType]._notifierPrefix, "%s" % (self.ownerHome().uid(),),)
-
-
-    @inlineCallbacks
-    def notifyChanged(self, category=ChangeCategory.default):
-        """
-        Send notifications, change sync token and bump last modified because
-        the resource has changed.  We ensure we only do this once per object
-        per transaction.
-        """
-        if self._txn.isNotifiedAlready(self):
-            returnValue(None)
-        self._txn.notificationAddedForObject(self)
-
-        # Send notifications
-        if self._notifiers:
-            # cache notifiers run in post commit
-            notifier = self._notifiers.get("cache", None)
-            if notifier:
-                self._txn.postCommit(notifier.notify)
-            # push notifiers add their work items immediately
-            notifier = self._notifiers.get("push", None)
-            if notifier:
-                yield notifier.notify(self._txn, priority=category.value)
-
-        returnValue(None)
-
-
-    @classproperty
-    def _completelyNewRevisionQuery(cls):
-        rev = cls._revisionsSchema
-        return Insert({rev.HOME_RESOURCE_ID: Parameter("homeID"),
-                       # rev.RESOURCE_ID: Parameter("resourceID"),
-                       rev.RESOURCE_NAME: Parameter("name"),
-                       rev.REVISION: schema.REVISION_SEQ,
-                       rev.DELETED: False},
-                      Return=rev.REVISION)
-
-
-    def _maybeNotify(self):
-        """
-        Emit a push notification after C{_changeRevision}.
-        """
-        return self.notifyChanged()
-
-
-    @inlineCallbacks
-    def remove(self):
-        """
-        Remove DB rows corresponding to this notification home.
-        """
-        # Delete NOTIFICATION rows
-        no = schema.NOTIFICATION
-        kwds = {"ResourceID": self._resourceID}
-        yield Delete(
-            From=no,
-            Where=(
-                no.NOTIFICATION_HOME_RESOURCE_ID == Parameter("ResourceID")
-            ),
-        ).on(self._txn, **kwds)
-
-        # Delete NOTIFICATION_HOME (will cascade to NOTIFICATION_OBJECT_REVISIONS)
-        nh = schema.NOTIFICATION_HOME
-        yield Delete(
-            From=nh,
-            Where=(
-                nh.RESOURCE_ID == Parameter("ResourceID")
-            ),
-        ).on(self._txn, **kwds)
-
-
-
-class NotificationObject(FancyEqMixin, object):
-    """
-    This used to store XML data and an XML element for the type. But we are now switching it
-    to use JSON internally. The app layer will convert that to XML and fill in the "blanks" as
-    needed for the app.
-    """
-    log = Logger()
-
-    implements(INotificationObject)
-
-    compareAttributes = (
-        "_resourceID",
-        "_home",
-    )
-
-    _objectSchema = schema.NOTIFICATION
-
-    def __init__(self, home, uid):
-        self._home = home
-        self._resourceID = None
-        self._uid = uid
-        self._md5 = None
-        self._size = None
-        self._created = None
-        self._modified = None
-        self._notificationType = None
-        self._notificationData = None
-
-
-    def __repr__(self):
-        return "<%s: %s>" % (self.__class__.__name__, self._resourceID)
-
-
-    @classproperty
-    def _allColumnsByHomeIDQuery(cls):
-        """
-        DAL query to load all columns by home ID.
-        """
-        obj = cls._objectSchema
-        return Select(
-            [obj.RESOURCE_ID, obj.NOTIFICATION_UID, obj.MD5,
-             Len(obj.NOTIFICATION_DATA), obj.NOTIFICATION_TYPE, obj.CREATED, obj.MODIFIED],
-            From=obj,
-            Where=(obj.NOTIFICATION_HOME_RESOURCE_ID == Parameter("homeID"))
-        )
-
-
-    @classmethod
-    @inlineCallbacks
-    def loadAllObjects(cls, parent):
-        """
-        Load all child objects and return a list of them. This must create the
-        child classes and initialize them using "batched" SQL operations to keep
-        this constant wrt the number of children. This is an optimization for
-        Depth:1 operations on the collection.
-        """
-
-        results = []
-
-        # Load from the main table first
-        dataRows = (
-            yield cls._allColumnsByHomeIDQuery.on(parent._txn,
-                                                  homeID=parent._resourceID))
-
-        if dataRows:
-            # Get property stores for all these child resources (if any found)
-            propertyStores = (yield PropertyStore.forMultipleResources(
-                parent.uid(),
-                None,
-                None,
-                parent._txn,
-                schema.NOTIFICATION.RESOURCE_ID,
-                schema.NOTIFICATION.NOTIFICATION_HOME_RESOURCE_ID,
-                parent._resourceID,
-            ))
-
-        # Create the actual objects merging in properties
-        for row in dataRows:
-            child = cls(parent, None)
-            (child._resourceID,
-             child._uid,
-             child._md5,
-             child._size,
-             child._notificationType,
-             child._created,
-             child._modified,) = tuple(row)
-            try:
-                child._notificationType = json.loads(child._notificationType)
-            except ValueError:
-                pass
-            if isinstance(child._notificationType, unicode):
-                child._notificationType = child._notificationType.encode("utf-8")
-            child._loadPropertyStore(
-                props=propertyStores.get(child._resourceID, None)
-            )
-            results.append(child)
-
-        returnValue(results)
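
The batching the docstring refers to keeps the cost of a Depth:1 listing at two queries no matter how many notification objects exist: one query for every child row, one for every property store, merged in memory. In outline, with the two fetch helpers standing in for the DAL queries above:

from twisted.internet.defer import inlineCallbacks, returnValue

@inlineCallbacks
def load_children(parent, fetch_rows, fetch_property_stores):
    rows = yield fetch_rows(parent)                    # query 1: all child rows
    propstores = yield fetch_property_stores(parent)   # query 2: all property stores
    children = []
    for row in rows:
        resource_id = row[0]
        # build the child from its row plus its (possibly missing) property store
        children.append((resource_id, propstores.get(resource_id)))
    returnValue(children)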
-
-
-    @classproperty
-    def _oneNotificationQuery(cls):
-        no = cls._objectSchema
-        return Select(
-            [
-                no.RESOURCE_ID,
-                no.MD5,
-                Len(no.NOTIFICATION_DATA),
-                no.NOTIFICATION_TYPE,
-                no.CREATED,
-                no.MODIFIED
-            ],
-            From=no,
-            Where=(no.NOTIFICATION_UID ==
-                   Parameter("uid")).And(no.NOTIFICATION_HOME_RESOURCE_ID ==
-                                         Parameter("homeID")))
-
-
-    @inlineCallbacks
-    def initFromStore(self):
-        """
-        Initialise this object from the store, based on its UID and home
-        resource ID. We read in and cache all the extra metadata from the DB to
-        avoid having to do DB queries for those individually later.
-
-        @return: L{self} if object exists in the DB, else C{None}
-        """
-        rows = (yield self._oneNotificationQuery.on(
-            self._txn, uid=self._uid, homeID=self._home._resourceID))
-        if rows:
-            (self._resourceID,
-             self._md5,
-             self._size,
-             self._notificationType,
-             self._created,
-             self._modified,) = tuple(rows[0])
-            try:
-                self._notificationType = json.loads(self._notificationType)
-            except ValueError:
-                pass
-            if isinstance(self._notificationType, unicode):
-                self._notificationType = self._notificationType.encode("utf-8")
-            self._loadPropertyStore()
-            returnValue(self)
-        else:
-            returnValue(None)
-
-
-    def _loadPropertyStore(self, props=None, created=False):
-        if props is None:
-            props = NonePropertyStore(self._home.uid())
-        self._propertyStore = props
-
-
-    def properties(self):
-        return self._propertyStore
-
-
-    def id(self):
-        """
-        Retrieve the store identifier for this object.
-
-        @return: store identifier.
-        @rtype: C{int}
-        """
-        return self._resourceID
-
-
-    @property
-    def _txn(self):
-        return self._home._txn
-
-
-    def notificationCollection(self):
-        return self._home
-
-
-    def uid(self):
-        return self._uid
-
-
-    def name(self):
-        return self.uid() + ".xml"
-
-
-    @classproperty
-    def _newNotificationQuery(cls):
-        no = cls._objectSchema
-        return Insert(
-            {
-                no.NOTIFICATION_HOME_RESOURCE_ID: Parameter("homeID"),
-                no.NOTIFICATION_UID: Parameter("uid"),
-                no.NOTIFICATION_TYPE: Parameter("notificationType"),
-                no.NOTIFICATION_DATA: Parameter("notificationData"),
-                no.MD5: Parameter("md5"),
-            },
-            Return=[no.RESOURCE_ID, no.CREATED, no.MODIFIED]
-        )
-
-
-    @classproperty
-    def _updateNotificationQuery(cls):
-        no = cls._objectSchema
-        return Update(
-            {
-                no.NOTIFICATION_TYPE: Parameter("notificationType"),
-                no.NOTIFICATION_DATA: Parameter("notificationData"),
-                no.MD5: Parameter("md5"),
-            },
-            Where=(no.NOTIFICATION_HOME_RESOURCE_ID == Parameter("homeID")).And(
-                no.NOTIFICATION_UID == Parameter("uid")),
-            Return=no.MODIFIED
-        )
-
-
-    @inlineCallbacks
-    def setData(self, uid, notificationtype, notificationdata, inserting=False):
-        """
-        Set the object resource data and update and cached metadata.
-        """
-
-        notificationtext = json.dumps(notificationdata)
-        self._notificationType = notificationtype
-        self._md5 = hashlib.md5(notificationtext).hexdigest()
-        self._size = len(notificationtext)
-        if inserting:
-            rows = yield self._newNotificationQuery.on(
-                self._txn, homeID=self._home._resourceID, uid=uid,
-                notificationType=json.dumps(self._notificationType),
-                notificationData=notificationtext, md5=self._md5
-            )
-            self._resourceID, self._created, self._modified = rows[0]
-            self._loadPropertyStore()
-        else:
-            rows = yield self._updateNotificationQuery.on(
-                self._txn, homeID=self._home._resourceID, uid=uid,
-                notificationType=json.dumps(self._notificationType),
-                notificationData=notificationtext, md5=self._md5
-            )
-            self._modified = rows[0][0]
-        self._notificationData = notificationdata
-
-    _notificationDataFromID = Select(
-        [_objectSchema.NOTIFICATION_DATA], From=_objectSchema,
-        Where=_objectSchema.RESOURCE_ID == Parameter("resourceID"))
-
-
-    @inlineCallbacks
-    def notificationData(self):
-        if self._notificationData is None:
-            self._notificationData = (yield self._notificationDataFromID.on(self._txn, resourceID=self._resourceID))[0][0]
-            try:
-                self._notificationData = json.loads(self._notificationData)
-            except ValueError:
-                pass
-            if isinstance(self._notificationData, unicode):
-                self._notificationData = self._notificationData.encode("utf-8")
-        returnValue(self._notificationData)
-
-
-    def contentType(self):
-        """
-        The content type of NotificationObjects is text/xml.
-        """
-        return MimeType.fromString("text/xml")
-
-
-    def md5(self):
-        return self._md5
-
-
-    def size(self):
-        return self._size
-
-
-    def notificationType(self):
-        return self._notificationType
-
-
-    def created(self):
-        return datetimeMktime(parseSQLTimestamp(self._created))
-
-
-    def modified(self):
-        return datetimeMktime(parseSQLTimestamp(self._modified))
-
-
-
-def determineNewest(uid, homeType):
-    """
-    Construct a query to determine the modification time of the newest object
-    in a given home.
-
-    @param uid: the UID of the home to scan.
-    @type uid: C{str}
-
-    @param homeType: The type of home to scan; C{ECALENDARTYPE},
-        C{ENOTIFICATIONTYPE}, or C{EADDRESSBOOKTYPE}.
-    @type homeType: C{int}
-
-    @return: A select query that will return a single row containing a single
-        column which is the maximum value.
-    @rtype: L{Select}
-    """
-    if homeType == ENOTIFICATIONTYPE:
-        return Select(
-            [Max(schema.NOTIFICATION.MODIFIED)],
-            From=schema.NOTIFICATION_HOME.join(
-                schema.NOTIFICATION,
-                on=schema.NOTIFICATION_HOME.RESOURCE_ID ==
-                schema.NOTIFICATION.NOTIFICATION_HOME_RESOURCE_ID),
-            Where=schema.NOTIFICATION_HOME.OWNER_UID == uid
-        )
-    homeTypeName = {ECALENDARTYPE: "CALENDAR",
-                    EADDRESSBOOKTYPE: "ADDRESSBOOK"}[homeType]
-    home = getattr(schema, homeTypeName + "_HOME")
-    bind = getattr(schema, homeTypeName + "_BIND")
-    child = getattr(schema, homeTypeName)
-    obj = getattr(schema, homeTypeName + "_OBJECT")
-    return Select(
-        [Max(obj.MODIFIED)],
-        From=home.join(bind, on=bind.HOME_RESOURCE_ID == home.RESOURCE_ID).join(
-            child, on=child.RESOURCE_ID == bind.RESOURCE_ID).join(
-            obj, on=obj.PARENT_RESOURCE_ID == child.RESOURCE_ID),
-        Where=(bind.BIND_MODE == 0).And(home.OWNER_UID == uid)
-    )
-
-
-
- at inlineCallbacks
-def mergeHomes(sqlTxn, one, other, homeType):
-    """
-    Merge two homes together.  This determines which of C{one} or C{two} is
-    newer - that is, has been modified more recently - and pulls all the data
-    from the older into the newer home.  Then, it changes the UID of the old
-    home to its UID, normalized and prefixed with "old.", and then re-names the
-    new home to its name, normalized.
-
-    Because the UIDs of both homes have changed, B{both one and two will be
-    invalid to all other callers from the start of the invocation of this
-    function}.
-
-    @param sqlTxn: the transaction to use
-    @type sqlTxn: A L{CommonTransaction}
-
-    @param one: A calendar home.
-    @type one: L{ICalendarHome}
-
-    @param two: Another, different calendar home.
-    @type two: L{ICalendarHome}
-
-    @param homeType: The type of home to scan; L{ECALENDARTYPE} or
-        L{EADDRESSBOOKTYPE}.
-    @type homeType: C{int}
-
-    @return: a L{Deferred} which fires with with the newer of C{one} or C{two},
-        into which the data from the other home has been merged, when the merge
-        is complete.
-    """
-    from txdav.caldav.datastore.util import migrateHome as migrateCalendarHome
-    from txdav.carddav.datastore.util import migrateHome as migrateABHome
-    migrateHome = {EADDRESSBOOKTYPE: migrateABHome,
-                   ECALENDARTYPE: migrateCalendarHome,
-                   ENOTIFICATIONTYPE: _dontBotherWithNotifications}[homeType]
-    homeTable = {EADDRESSBOOKTYPE: schema.ADDRESSBOOK_HOME,
-                 ECALENDARTYPE: schema.CALENDAR_HOME,
-                 ENOTIFICATIONTYPE: schema.NOTIFICATION_HOME}[homeType]
-    both = []
-    both.append([one,
-                 (yield determineNewest(one.uid(), homeType).on(sqlTxn))])
-    both.append([other,
-                 (yield determineNewest(other.uid(), homeType).on(sqlTxn))])
-    both.sort(key=lambda x: x[1])
-
-    older = both[0][0]
-    newer = both[1][0]
-    yield migrateHome(older, newer, merge=True)
-    # Rename the old one to 'old.<correct-guid>'
-    newNormalized = normalizeUUIDOrNot(newer.uid())
-    oldNormalized = normalizeUUIDOrNot(older.uid())
-    yield _renameHome(sqlTxn, homeTable, older.uid(), "old." + oldNormalized)
-    # Rename the new one to '<correct-guid>'
-    if newer.uid() != newNormalized:
-        yield _renameHome(sqlTxn, homeTable, newer.uid(), newNormalized)
-    yield returnValue(newer)
-
-
-
-def _renameHome(txn, table, oldUID, newUID):
-    """
-    Rename a calendar, addressbook, or notification home.  Note that this
-    function is only safe in transactions that have had caching disabled, and
-    more specifically should only ever be used during upgrades.  Running this
-    in a normal transaction will have unpredictable consequences, especially
-    with respect to memcache.
-
-    @param txn: an SQL transaction to use for this update
-    @type txn: L{twext.enterprise.ienterprise.IAsyncTransaction}
-
-    @param table: the storage table of the desired home type
-    @type table: L{TableSyntax}
-
-    @param oldUID: the old UID, the existing home's UID
-    @type oldUID: L{str}
-
-    @param newUID: the new UID, to change the UID to
-    @type newUID: L{str}
-
-    @return: a L{Deferred} which fires when the home is renamed.
-    """
-    return Update({table.OWNER_UID: newUID},
-                  Where=table.OWNER_UID == oldUID).on(txn)
-
-
-
-def _dontBotherWithNotifications(older, newer, merge):
-    """
-    Notifications are more transient and can be easily worked around; don't
-    bother to migrate all of them when there is a UUID case mismatch.
-    """
-    pass
-
-
-
- at inlineCallbacks
-def _normalizeHomeUUIDsIn(t, homeType):
-    """
-    Normalize the UUIDs in the given L{txdav.common.datastore.CommonStore}.
-
-    This changes the case of the UUIDs in the calendar home.
-
-    @param t: the transaction to normalize all the UUIDs in.
-    @type t: L{CommonStoreTransaction}
-
-    @param homeType: The type of home to scan, L{ECALENDARTYPE},
-        L{EADDRESSBOOKTYPE}, or L{ENOTIFICATIONTYPE}.
-    @type homeType: C{int}
-
-    @return: a L{Deferred} which fires with C{None} when the UUID normalization
-        is complete.
-    """
-    from txdav.caldav.datastore.util import fixOneCalendarHome
-    homeTable = {EADDRESSBOOKTYPE: schema.ADDRESSBOOK_HOME,
-                 ECALENDARTYPE: schema.CALENDAR_HOME,
-                 ENOTIFICATIONTYPE: schema.NOTIFICATION_HOME}[homeType]
-    homeTypeName = homeTable.model.name.split("_")[0]
-
-    allUIDs = yield Select([homeTable.OWNER_UID],
-                           From=homeTable,
-                           OrderBy=homeTable.OWNER_UID).on(t)
-    total = len(allUIDs)
-    allElapsed = []
-    for n, [UID] in enumerate(allUIDs):
-        start = time.time()
-        if allElapsed:
-            estimate = "%0.3d" % ((sum(allElapsed) / len(allElapsed)) *
-                                  total - n)
-        else:
-            estimate = "unknown"
-        log.info(
-            "Scanning UID {uid} [{homeType}] "
-            "({pct!0.2d}%, {estimate} seconds remaining)...",
-            uid=UID, pct=(n / float(total)) * 100, estimate=estimate,
-            homeType=homeTypeName
-        )
-        other = None
-        this = yield _getHome(t, homeType, UID)
-        if homeType == ECALENDARTYPE:
-            fixedThisHome = yield fixOneCalendarHome(this)
-        else:
-            fixedThisHome = 0
-        fixedOtherHome = 0
-        if this is None:
-            log.info(
-                "{uid!r} appears to be missing, already processed", uid=UID
-            )
-        try:
-            uuidobj = UUID(UID)
-        except ValueError:
-            pass
-        else:
-            newname = str(uuidobj).upper()
-            if UID != newname:
-                log.info(
-                    "Detected case variance: {uid} {newuid}[{homeType}]",
-                    uid=UID, newuid=newname, homeType=homeTypeName
-                )
-                other = yield _getHome(t, homeType, newname)
-                if other is None:
-                    # No duplicate: just fix the name.
-                    yield _renameHome(t, homeTable, UID, newname)
-                else:
-                    if homeType == ECALENDARTYPE:
-                        fixedOtherHome = yield fixOneCalendarHome(other)
-                    this = yield mergeHomes(t, this, other, homeType)
-                # NOTE: WE MUST NOT TOUCH EITHER HOME OBJECT AFTER THIS POINT.
-                # THE UIDS HAVE CHANGED AND ALL OPERATIONS WILL FAIL.
-
-        end = time.time()
-        elapsed = end - start
-        allElapsed.append(elapsed)
-        log.info(
-            "Scanned UID {uid}; {elapsed} seconds elapsed,"
-            " {fixes} properties fixed ({duplicate} fixes in duplicate).",
-            uid=UID, elapsed=elapsed, fixes=fixedThisHome,
-            duplicate=fixedOtherHome
-        )
-    returnValue(None)
-
-
-
-def _getHome(txn, homeType, uid):
-    """
-    Like L{CommonHome.homeWithUID} but also honoring ENOTIFICATIONTYPE which
-    isn't I{really} a type of home.
-
-    @param txn: the transaction to retrieve the home from
-    @type txn: L{CommonStoreTransaction}
-
-    @param homeType: L{ENOTIFICATIONTYPE}, L{ECALENDARTYPE}, or
-        L{EADDRESSBOOKTYPE}.
-
-    @param uid: the UID of the home to retrieve.
-    @type uid: L{str}
-
-    @return: a L{Deferred} that fires with the L{CommonHome} or
-        L{NotificationHome} when it has been retrieved.
-    """
-    if homeType == ENOTIFICATIONTYPE:
-        return txn.notificationsWithUID(uid, create=False)
-    else:
-        return txn.homeWithUID(homeType, uid)
-
-
-
- at inlineCallbacks
-def _normalizeColumnUUIDs(txn, column):
-    """
-    Upper-case the UUIDs in the given SQL DAL column.
-
-    @param txn: The transaction.
-    @type txn: L{CommonStoreTransaction}
-
-    @param column: the column, which may contain UIDs, to normalize.
-    @type column: L{ColumnSyntax}
-
-    @return: A L{Deferred} that will fire when the UUID normalization of the
-        given column has completed.
-    """
-    tableModel = column.model.table
-    # Get a primary key made of column syntax objects for querying and
-    # comparison later.
-    pkey = [ColumnSyntax(columnModel)
-            for columnModel in tableModel.primaryKey]
-    for row in (yield Select([column] + pkey,
-                             From=TableSyntax(tableModel)).on(txn)):
-        before = row[0]
-        pkeyparts = row[1:]
-        after = normalizeUUIDOrNot(before)
-        if after != before:
-            where = _AndNothing
-            # Build a where clause out of the primary key and the parts of the
-            # primary key that were found.
-            for pkeycol, pkeypart in zip(pkeyparts, pkey):
-                where = where.And(pkeycol == pkeypart)
-            yield Update({column: after}, Where=where).on(txn)
-
-
-
-class _AndNothing(object):
-    """
-    Simple placeholder for iteratively generating a 'Where' clause; the 'And'
-    just returns its argument, so it can be used at the start of the loop.
-    """
-    @staticmethod
-    def And(self):
-        """
-        Return the argument.
-        """
-        return self
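
_AndNothing is a small but handy trick: because its And() simply returns its argument, a compound Where clause can be built up in a loop without special-casing the first condition. A self-contained toy illustration, with plain objects standing in for the DAL comparison expressions:

class _AndNothing(object):
    @staticmethod
    def And(arg):
        return arg

class Cond(object):
    def __init__(self, text):
        self.text = text
    def And(self, other):
        return Cond("(%s AND %s)" % (self.text, other.text))

where = _AndNothing
for cond in [Cond("A = 1"), Cond("B = 2"), Cond("C = 3")]:
    where = where.And(cond)
print(where.text)   # ((A = 1 AND B = 2) AND C = 3)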
-
-
-
- at inlineCallbacks
-def _needsNormalizationUpgrade(txn):
-    """
-    Determine whether a given store requires a UUID normalization data upgrade.
-
-    @param txn: the transaction to use
-    @type txn: L{CommonStoreTransaction}
-
-    @return: a L{Deferred} that fires with C{True} or C{False} depending on
-        whether we need the normalization upgrade or not.
-    """
-    for x in [schema.CALENDAR_HOME, schema.ADDRESSBOOK_HOME,
-              schema.NOTIFICATION_HOME]:
-        slct = Select([x.OWNER_UID], From=x,
-                      Where=x.OWNER_UID != Upper(x.OWNER_UID))
-        rows = yield slct.on(txn)
-        if rows:
-            for [uid] in rows:
-                if normalizeUUIDOrNot(uid) != uid:
-                    returnValue(True)
-    returnValue(False)
-
-
-
- at inlineCallbacks
-def fixUUIDNormalization(store):
-    """
-    Fix all UUIDs in the given SQL store to be in a canonical form;
-    00000000-0000-0000-0000-000000000000 format and upper-case.
-    """
-    t = store.newTransaction(disableCache=True)
-
-    # First, let's see if there are any calendar, addressbook, or notification
-    # homes that have a de-normalized OWNER_UID.  If there are none, then we can
-    # early-out and avoid the tedious and potentially expensive inspection of
-    # oodles of calendar data.
-    if not (yield _needsNormalizationUpgrade(t)):
-        log.info("No potentially denormalized UUIDs detected, "
-                 "skipping normalization upgrade.")
-        yield t.abort()
-        returnValue(None)
-    try:
-        yield _normalizeHomeUUIDsIn(t, ECALENDARTYPE)
-        yield _normalizeHomeUUIDsIn(t, EADDRESSBOOKTYPE)
-        yield _normalizeHomeUUIDsIn(t, ENOTIFICATIONTYPE)
-        yield _normalizeColumnUUIDs(t, schema.RESOURCE_PROPERTY.VIEWER_UID)
-        yield _normalizeColumnUUIDs(t, schema.APN_SUBSCRIPTIONS.SUBSCRIBER_GUID)
-    except:
-        log.failure("Unable to normalize UUIDs")
-        yield t.abort()
-        # There's a lot of possible problems here which are very hard to test
-        # for individually; unexpected data that might cause constraint
-        # violations under one of the manipulations done by
-        # normalizeHomeUUIDsIn. Since this upgrade does not come along with a
-        # schema version bump and may be re- attempted at any time, just raise
-        # the exception and log it so that we can try again later, and the
-        # service will survive for everyone _not_ affected by this somewhat
-        # obscure bug.
-    else:
-        yield t.commit()

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_apn.py (from rev 14551, CalendarServer/trunk/txdav/common/datastore/sql_apn.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_apn.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_apn.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,121 @@
+# -*- test-case-name: twext.enterprise.dal.test.test_record -*-
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from twext.enterprise.dal.record import SerializableRecord, fromTable
+from twext.python.log import Logger
+from twisted.internet.defer import inlineCallbacks
+from txdav.common.datastore.sql_tables import schema
+from txdav.common.icommondatastore import InvalidSubscriptionValues
+
+log = Logger()
+
+"""
+Classes and methods that relate to APN objects in the SQL store.
+"""
+
+class APNSubscriptionsRecord(SerializableRecord, fromTable(schema.APN_SUBSCRIPTIONS)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.APN_SUBSCRIPTIONS}.
+    """
+    pass
+
+
+
+class APNSubscriptionsMixin(object):
+    """
+    A mixin for L{CommonStoreTransaction} that covers the APN API.
+    """
+
+    @inlineCallbacks
+    def addAPNSubscription(
+        self, token, key, timestamp, subscriber,
+        userAgent, ipAddr
+    ):
+        if not (token and key and timestamp and subscriber):
+            raise InvalidSubscriptionValues()
+
+        # Cap these values at 255 characters
+        userAgent = userAgent[:255]
+        ipAddr = ipAddr[:255]
+
+        records = yield APNSubscriptionsRecord.querysimple(
+            self,
+            token=token, resourceKey=key
+        )
+        if not records:  # Subscription does not yet exist
+            try:
+                yield APNSubscriptionsRecord.create(
+                    self,
+                    token=token,
+                    resourceKey=key,
+                    modified=timestamp,
+                    subscriberGUID=subscriber,
+                    userAgent=userAgent,
+                    ipAddr=ipAddr
+                )
+            except Exception:
+                # Subscription may have been added by someone else, which is fine
+                pass
+
+        else:  # Subscription exists, so update with new timestamp and subscriber
+            try:
+                yield records[0].update(
+                    modified=timestamp,
+                    subscriberGUID=subscriber,
+                    userAgent=userAgent,
+                    ipAddr=ipAddr,
+                )
+            except Exception:
+                # Subscription may have been added by someone else, which is fine
+                pass
+
+
+    def removeAPNSubscription(self, token, key):
+        return APNSubscriptionsRecord.deletesimple(
+            self,
+            token=token,
+            resourceKey=key
+        )
+
+
+    def purgeOldAPNSubscriptions(self, olderThan):
+        return APNSubscriptionsRecord.deletesome(
+            self,
+            APNSubscriptionsRecord.modified < olderThan,
+        )
+
+
+    def apnSubscriptionsByToken(self, token):
+        return APNSubscriptionsRecord.querysimple(
+            self,
+            token=token,
+        )
+
+
+    def apnSubscriptionsByKey(self, key):
+        return APNSubscriptionsRecord.querysimple(
+            self,
+            resourceKey=key,
+        )
+
+
+    def apnSubscriptionsBySubscriber(self, guid):
+        return APNSubscriptionsRecord.querysimple(
+            self,
+            subscriberGUID=guid,
+        )
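
A hypothetical end-to-end use of the mixin above, assuming a CommonStoreTransaction txn that includes APNSubscriptionsMixin; every literal value is made up for illustration:

import time
from twisted.internet.defer import inlineCallbacks

@inlineCallbacks
def trackDevice(txn):
    token = "0123456789abcdef" * 8                      # fake APNs device token
    key = "/CalDAV/example.com/user01/"                 # fake resource key
    yield txn.addAPNSubscription(
        token, key, int(time.time()),
        "F0000000-0000-0000-0000-000000000001",         # fake subscriber GUID
        "iOS/8.2 dataaccessd/1.0", "10.0.0.1",
    )
    subs = yield txn.apnSubscriptionsByToken(token)     # -> list of APNSubscriptionsRecord
    yield txn.removeAPNSubscription(token, key)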

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_directory.py (from rev 14551, CalendarServer/trunk/txdav/common/datastore/sql_directory.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_directory.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_directory.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,848 @@
+# -*- test-case-name: twext.enterprise.dal.test.test_record -*-
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from twext.enterprise.dal.record import SerializableRecord, fromTable
+from twext.enterprise.dal.syntax import SavepointAction, Select
+from twext.python.log import Logger
+from twisted.internet.defer import inlineCallbacks, returnValue
+from txdav.common.datastore.sql_tables import schema
+from txdav.common.icommondatastore import AllRetriesFailed, NotFoundError
+from txdav.who.delegates import Delegates
+import datetime
+import hashlib
+
+log = Logger()
+
+"""
+Classes and methods that relate to directory objects in the SQL store. e.g.,
+delegates, groups etc
+"""
+
+class GroupsRecord(SerializableRecord, fromTable(schema.GROUPS)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.GROUPS}.
+    """
+
+    @classmethod
+    def groupsForMember(cls, txn, memberUID):
+
+        return GroupsRecord.query(
+            txn,
+            GroupsRecord.groupID.In(
+                GroupMembershipRecord.queryExpr(
+                    GroupMembershipRecord.memberUID == memberUID.encode("utf-8"),
+                    attributes=(GroupMembershipRecord.groupID,),
+                )
+            ),
+        )
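
groupsForMember composes the membership lookup as a subquery, roughly: SELECT ... FROM GROUPS WHERE GROUP_ID IN (SELECT GROUP_ID FROM GROUP_MEMBERSHIP WHERE MEMBER_UID = ?). A hypothetical caller, assuming a store transaction txn and that GroupsRecord rows expose a name attribute derived from the GROUPS columns:

from twisted.internet.defer import inlineCallbacks, returnValue

@inlineCallbacks
def groupNamesFor(txn, memberUID):
    groups = yield GroupsRecord.groupsForMember(txn, memberUID)
    returnValue(sorted(group.name for group in groups))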
+
+
+
+class GroupMembershipRecord(SerializableRecord, fromTable(schema.GROUP_MEMBERSHIP)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.GROUP_MEMBERSHIP}.
+    """
+    pass
+
+
+
+class DelegateRecord(SerializableRecord, fromTable(schema.DELEGATES)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.DELEGATES}.
+    """
+    pass
+
+
+
+class DelegateGroupsRecord(SerializableRecord, fromTable(schema.DELEGATE_GROUPS)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.DELEGATE_GROUPS}.
+    """
+
+    @classmethod
+    def allGroupDelegates(cls, txn):
+        """
+        Get all groups that have been delegated to directly, for any delegator.
+        """
+
+        return GroupsRecord.query(
+            txn,
+            GroupsRecord.groupID.In(
+                DelegateGroupsRecord.queryExpr(
+                    None,
+                    attributes=(DelegateGroupsRecord.groupID,),
+                )
+            ),
+        )
+
+
+    @classmethod
+    def delegateGroups(cls, txn, delegator, readWrite):
+        """
+        Get the directly-delegated-to groups.
+        """
+
+        return GroupsRecord.query(
+            txn,
+            GroupsRecord.groupID.In(
+                DelegateGroupsRecord.queryExpr(
+                    (DelegateGroupsRecord.delegator == delegator.encode("utf-8")).And(
+                        DelegateGroupsRecord.readWrite == (1 if readWrite else 0)
+                    ),
+                    attributes=(DelegateGroupsRecord.groupID,),
+                )
+            ),
+        )
+
+
+    @classmethod
+    def indirectDelegators(cls, txn, delegate, readWrite):
+        """
+        Get delegators who have delegated to groups the delegate is a member of.
+        """
+
+        return cls.query(
+            txn,
+            cls.groupID.In(
+                GroupMembershipRecord.queryExpr(
+                    GroupMembershipRecord.memberUID == delegate.encode("utf-8"),
+                    attributes=(GroupMembershipRecord.groupID,),
+                )
+            ).And(cls.readWrite == (1 if readWrite else 0)),
+        )
+
+
+    @classmethod
+    def indirectDelegates(cls, txn, delegator, readWrite):
+        """
+        Get delegates who are in groups which have been delegated to.
+        """
+
+        return GroupMembershipRecord.query(
+            txn,
+            GroupMembershipRecord.groupID.In(
+                DelegateGroupsRecord.queryExpr(
+                    (DelegateGroupsRecord.delegator == delegator.encode("utf-8")).And(
+                        DelegateGroupsRecord.readWrite == (1 if readWrite else 0)
+                    ),
+                    attributes=(DelegateGroupsRecord.groupID,),
+                )
+            ),
+        )
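
Taken together, these queries split delegation into a direct table (DELEGATES) and a group-based one (DELEGATE_GROUPS joined through GROUP_MEMBERSHIP). A hypothetical caller that merges both into one set of delegate UIDs; the delegator/delegate/readWrite attribute names on DelegateRecord are assumed from the schema columns:

from twisted.internet.defer import inlineCallbacks, returnValue

@inlineCallbacks
def allDelegateUIDs(txn, delegator, readWrite=True):
    direct = yield DelegateRecord.querysimple(
        txn,
        delegator=delegator.encode("utf-8"),
        readWrite=(1 if readWrite else 0),
    )
    indirect = yield DelegateGroupsRecord.indirectDelegates(txn, delegator, readWrite)
    uids = set(record.delegate for record in direct)
    uids.update(record.memberUID for record in indirect)
    returnValue(uids)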
+
+
+    @classmethod
+    @inlineCallbacks
+    def delegatorGroups(cls, txn, delegator):
+        """
+        Get delegator/group pairs for the specified delegator.
+        """
+
+        # Join DELEGATE_GROUPS to GROUPS so that both records can be built
+        # from a single query
+        rows = yield Select(
+            list(DelegateGroupsRecord.table) + list(GroupsRecord.table),
+            From=DelegateGroupsRecord.table.join(GroupsRecord.table, DelegateGroupsRecord.groupID == GroupsRecord.groupID),
+            Where=(DelegateGroupsRecord.delegator == delegator.encode("utf-8"))
+        ).on(txn)
+
+        results = []
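+        # Each returned row contains the DELEGATE_GROUPS columns followed by
+        # the GROUPS columns; work out the attribute names for each table so
+        # a row can be split and mapped back onto the two record types.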
+        delegatorNames = [DelegateGroupsRecord.__colmap__[column] for column in list(DelegateGroupsRecord.table)]
+        groupsNames = [GroupsRecord.__colmap__[column] for column in list(GroupsRecord.table)]
+        split_point = len(delegatorNames)
+        for row in rows:
+            delegatorRow = row[:split_point]
+            delegatorRecord = DelegateGroupsRecord()
+            delegatorRecord._attributesFromRow(zip(delegatorNames, delegatorRow))
+            delegatorRecord.transaction = txn
+            groupsRow = row[split_point:]
+            groupsRecord = GroupsRecord()
+            groupsRecord._attributesFromRow(zip(groupsNames, groupsRow))
+            groupsRecord.transaction = txn
+            results.append((delegatorRecord, groupsRecord,))
+
+        returnValue(results)
+
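+# Illustrative usage (not called from this module): the query helpers above
+# nest one record query inside another via In(...), and are invoked with a
+# store transaction (txn below is a placeholder). For a hypothetical
+# delegator UID u"user01" with read-write access:
+#
+#     groups = yield DelegateGroupsRecord.delegateGroups(txn, u"user01", True)
+#     groupUIDs = set(group.groupUID.decode("utf-8") for group in groups)
+#
+# UIDs are stored UTF-8 encoded and readWrite as a 0/1 integer, which is why
+# the queries encode their arguments and coerce booleans.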
+
+
+class ExternalDelegateGroupsRecord(SerializableRecord, fromTable(schema.EXTERNAL_DELEGATE_GROUPS)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.EXTERNAL_DELEGATE_GROUPS}.
+    """
+    pass
+
+
+
+class GroupsAPIMixin(object):
+    """
+    A mixin for L{CommonStoreTransaction} that covers the groups API.
+    """
+
+    @inlineCallbacks
+    def addGroup(self, groupUID, name, membershipHash):
+        """
+        Add a group to the store and refresh its membership from the
+        directory.
+
+        @type groupUID: C{unicode}
+        @type name: C{unicode}
+        @type membershipHash: C{str}
+        @return: Deferred firing with the new L{GroupsRecord}, or C{None} if
+            the group does not exist in the directory
+        """
+        record = yield self.directoryService().recordWithUID(groupUID)
+        if record is None:
+            returnValue(None)
+
+        group = yield GroupsRecord.create(
+            self,
+            name=name.encode("utf-8"),
+            groupUID=groupUID.encode("utf-8"),
+            membershipHash=membershipHash,
+        )
+
+        yield self.refreshGroup(group, record)
+        returnValue(group)
+
+
+    @inlineCallbacks
+    def updateGroup(self, groupUID, name, membershipHash, extant=True):
+        """
+        @type groupUID: C{unicode}
+        @type name: C{unicode}
+        @type membershipHash: C{str}
+        @type extant: C{boolean}
+        """
+        timestamp = datetime.datetime.utcnow()
+        group = yield self.groupByUID(groupUID, create=False)
+        if group is not None:
+            yield group.update(
+                name=name.encode("utf-8"),
+                membershipHash=membershipHash,
+                extant=(1 if extant else 0),
+                modified=timestamp,
+            )
+
+
+    @inlineCallbacks
+    def groupByUID(self, groupUID, create=True):
+        """
+        Return or create a record for the group UID.
+
+        @type groupUID: C{unicode}
+
+        @return: Deferred firing with the matching (or newly created)
+            L{GroupsRecord}, or C{None} if the group does not exist in the
+            directory
+        """
+        results = yield GroupsRecord.query(
+            self,
+            GroupsRecord.groupUID == groupUID.encode("utf-8")
+        )
+        if results:
+            returnValue(results[0])
+        elif create:
+            savepoint = SavepointAction("groupByUID")
+            yield savepoint.acquire(self)
+            try:
+                group = yield self.addGroup(groupUID, u"", "")
+                if group is None:
+                    # The record does not actually exist within the directory
+                    yield savepoint.release(self)
+                    returnValue(None)
+
+            except Exception:
+                yield savepoint.rollback(self)
+                results = yield GroupsRecord.query(
+                    self,
+                    GroupsRecord.groupUID == groupUID.encode("utf-8")
+                )
+                returnValue(results[0] if results else None)
+            else:
+                yield savepoint.release(self)
+                returnValue(group)
+        else:
+            returnValue(None)
+
+
+    @inlineCallbacks
+    def groupByID(self, groupID):
+        """
+        Given a group ID, return the matching group record, or raise
+        NotFoundError if no such group exists.
+
+        @type groupID: C{int}
+        @return: Deferred firing with the matching L{GroupsRecord}
+        """
+        results = yield GroupsRecord.query(
+            self,
+            GroupsRecord.groupID == groupID,
+        )
+        if results:
+            returnValue(results[0])
+        else:
+            raise NotFoundError
+
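+# Illustrative usage of the mixin above, assuming txn is a
+# CommonStoreTransaction and u"group01" is a placeholder group UID:
+#
+#     group = yield txn.groupByUID(u"group01")
+#
+# With create=True (the default) a missing group row is inserted inside a
+# savepoint, so a concurrent insert of the same group UID falls back to a
+# plain re-query instead of failing the enclosing transaction.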
+
+
+class GroupCacherAPIMixin(object):
+    """
+    A mixin for L{CommonStoreTransaction} that covers the group cacher API.
+    """
+
+    def addMemberToGroup(self, memberUID, groupID):
+        return GroupMembershipRecord.create(self, groupID=groupID, memberUID=memberUID.encode("utf-8"))
+
+
+    def removeMemberFromGroup(self, memberUID, groupID):
+        return GroupMembershipRecord.deletesimple(
+            self, groupID=groupID, memberUID=memberUID.encode("utf-8")
+        )
+
+
+    @inlineCallbacks
+    def groupMemberUIDs(self, groupID):
+        """
+        Returns the cached set of UIDs for members of the given groupID.
+        Sub-groups are not returned in the results but their members are,
+        because the group membership has already been expanded/flattened
+        before storing in the db.
+
+        @param groupID: the group ID
+        @type groupID: C{int}
+        @return: the set of member UIDs
+        @rtype: a Deferred which fires with a set() of C{str} UIDs
+        """
+
+        members = yield GroupMembershipRecord.query(self, GroupMembershipRecord.groupID == groupID)
+        returnValue(set([record.memberUID.decode("utf-8") for record in members]))
+
+
+    @inlineCallbacks
+    def refreshGroup(self, group, record):
+        """
+        @param group: the group record
+        @type group: L{GroupsRecord}
+        @param record: the directory record
+        @type record: C{iDirectoryRecord}
+
+        @return: Deferred firing with a tuple of membershipChanged C{boolean},
+            the set of added member UIDs, and the set of removed member UIDs
+            (the UID sets are C{None} when membership did not change)
+
+        """
+
+        if record is not None:
+            memberUIDs = yield record.expandedMemberUIDs()
+            name = record.displayName
+            extant = True
+        else:
+            memberUIDs = frozenset()
+            name = group.name
+            extant = False
+
+        membershipHashContent = hashlib.md5()
+        for memberUID in sorted(memberUIDs):
+            membershipHashContent.update(str(memberUID))
+        membershipHash = membershipHashContent.hexdigest()
+
+        if group.membershipHash != membershipHash:
+            membershipChanged = True
+            log.debug(
+                "Group '{group}' changed", group=name
+            )
+        else:
+            membershipChanged = False
+
+        if membershipChanged or extant != group.extant:
+            # also updates group mod date
+            yield group.update(
+                name=name,
+                membershipHash=membershipHash,
+                extant=(1 if extant else 0),
+            )
+
+        if membershipChanged:
+            addedUIDs, removedUIDs = yield self.synchronizeMembers(group.groupID, set(memberUIDs))
+        else:
+            addedUIDs = removedUIDs = None
+
+        returnValue((membershipChanged, addedUIDs, removedUIDs,))
+
+
+    @inlineCallbacks
+    def synchronizeMembers(self, groupID, newMemberUIDs):
+        """
+        Update the group membership table in the database to match the new membership list. This
+        method will diff the existing set with the new set and apply the changes. It also calls out
+        to a groupChanged() method with the set of added and removed members so that other modules
+        that depend on groups can monitor the changes.
+
+        @param groupID: group ID of the group to update
+        @type groupID: C{int}
+        @param newMemberUIDs: set of new member UIDs in the group
+        @type newMemberUIDs: C{set} of C{str}
+        @return: Deferred firing with a tuple of the added and removed UID sets
+        """
+        cachedMemberUIDs = yield self.groupMemberUIDs(groupID)
+
+        removed = cachedMemberUIDs - newMemberUIDs
+        for memberUID in removed:
+            yield self.removeMemberFromGroup(memberUID, groupID)
+
+        added = newMemberUIDs - cachedMemberUIDs
+        for memberUID in added:
+            yield self.addMemberToGroup(memberUID, groupID)
+
+        yield self.groupChanged(groupID, added, removed)
+
+        returnValue((added, removed,))
+
+
+    @inlineCallbacks
+    def groupChanged(self, groupID, addedUIDs, removedUIDs):
+        """
+        Called when membership of a group changes.
+
+        @param groupID: group ID of the group that changed
+        @type groupID: C{int}
+        @param addedUIDs: set of new member UIDs added to the group
+        @type addedUIDs: C{set} of C{str}
+        @param removedUIDs: set of old member UIDs removed from the group
+        @type removedUIDs: C{set} of C{str}
+        """
+        yield Delegates.groupChanged(self, groupID, addedUIDs, removedUIDs)
+
+
+    @inlineCallbacks
+    def groupMembers(self, groupID):
+        """
+        Return the set of directory records for the members of the given
+        group, as recorded in the database.
+        """
+        members = set()
+        memberUIDs = (yield self.groupMemberUIDs(groupID))
+        for uid in memberUIDs:
+            record = (yield self.directoryService().recordWithUID(uid))
+            if record is not None:
+                members.add(record)
+        returnValue(members)
+
+
+    @inlineCallbacks
+    def groupUIDsFor(self, uid):
+        """
+        Returns the cached set of UIDs of the groups that the given uid is
+        a member of.
+
+        @param uid: the uid
+        @type uid: C{unicode}
+        @return: the set of group UIDs
+        @rtype: a Deferred which fires with a set() of C{unicode} group UIDs
+        """
+        groups = yield GroupsRecord.groupsForMember(self, uid)
+        returnValue(set([group.groupUID.decode("utf-8") for group in groups]))
+
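+# The membership hash maintained by refreshGroup() above is the hex MD5 of
+# the sorted member UIDs. An equivalent standalone sketch (illustrative only,
+# with made-up UIDs):
+#
+#     import hashlib
+#     digest = hashlib.md5()
+#     for memberUID in sorted([u"uid2", u"uid1"]):
+#         digest.update(str(memberUID))
+#     membershipHash = digest.hexdigest()
+#
+# A group row is rewritten when this digest or the extant flag differs from
+# what is stored, and the membership table is resynchronized only when the
+# digest changes.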
+
+
+class DelegatesAPIMixin(object):
+    """
+    A mixin for L{CommonStoreTransaction} that covers the delegates API.
+    """
+
+    @inlineCallbacks
+    def addDelegate(self, delegator, delegate, readWrite):
+        """
+        Adds a row to the DELEGATES table.  The delegate should not be a
+        group.  To delegate to a group, call addDelegateGroup() instead.
+
+        @param delegator: the UID of the delegator
+        @type delegator: C{unicode}
+        @param delegate: the UID of the delegate
+        @type delegate: C{unicode}
+        @param readWrite: grant read and write access if True, otherwise
+            read-only access
+        @type readWrite: C{boolean}
+        """
+
+        def _addDelegate(subtxn):
+            return DelegateRecord.create(
+                subtxn,
+                delegator=delegator.encode("utf-8"),
+                delegate=delegate.encode("utf-8"),
+                readWrite=1 if readWrite else 0
+            )
+
+        try:
+            yield self.subtransaction(_addDelegate, retries=0, failureOK=True)
+        except AllRetriesFailed:
+            pass
+
+
+    @inlineCallbacks
+    def addDelegateGroup(self, delegator, delegateGroupID, readWrite,
+                         isExternal=False):
+        """
+        Adds a row to the DELEGATE_GROUPS table.  The delegate should be a
+        group.  To delegate to a person, call addDelegate() instead.
+
+        @param delegator: the UID of the delegator
+        @type delegator: C{unicode}
+        @param delegateGroupID: the GROUP_ID of the delegate group
+        @type delegateGroupID: C{int}
+        @param readWrite: grant read and write access if True, otherwise
+            read-only access
+        @type readWrite: C{boolean}
+        """
+
+        def _addDelegateGroup(subtxn):
+            return DelegateGroupsRecord.create(
+                subtxn,
+                delegator=delegator.encode("utf-8"),
+                groupID=delegateGroupID,
+                readWrite=1 if readWrite else 0,
+                isExternal=1 if isExternal else 0
+            )
+
+        try:
+            yield self.subtransaction(_addDelegateGroup, retries=0, failureOK=True)
+        except AllRetriesFailed:
+            pass
+
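+    # Both add methods above run their INSERT in a subtransaction with
+    # failureOK=True, so a failed insert (typically because the delegate row
+    # already exists) is swallowed rather than aborting the caller's
+    # transaction.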
+
+    def removeDelegate(self, delegator, delegate, readWrite):
+        """
+        Removes a row from the DELEGATES table.  The delegate should not be a
+        group.  To remove a delegate group, call removeDelegateGroup() instead.
+
+        @param delegator: the UID of the delegator
+        @type delegator: C{unicode}
+        @param delegate: the UID of the delegate
+        @type delegate: C{unicode}
+        @param readWrite: remove read and write access if True, otherwise
+            read-only access
+        @type readWrite: C{boolean}
+        """
+        return DelegateRecord.deletesimple(
+            self,
+            delegator=delegator.encode("utf-8"),
+            delegate=delegate.encode("utf-8"),
+            readWrite=(1 if readWrite else 0),
+        )
+
+
+    def removeDelegates(self, delegator, readWrite):
+        """
+        Removes all rows for this delegator/readWrite combination from the
+        DELEGATES table.
+
+        @param delegator: the UID of the delegator
+        @type delegator: C{unicode}
+        @param readWrite: remove read and write access if True, otherwise
+            read-only access
+        @type readWrite: C{boolean}
+        """
+        return DelegateRecord.deletesimple(
+            self,
+            delegator=delegator.encode("utf-8"),
+            readWrite=(1 if readWrite else 0)
+        )
+
+
+    def removeDelegateGroup(self, delegator, delegateGroupID, readWrite):
+        """
+        Removes a row from the DELEGATE_GROUPS table.  The delegate should be a
+        group.  To remove a delegate person, call removeDelegate() instead.
+
+        @param delegator: the UID of the delegator
+        @type delegator: C{unicode}
+        @param delegateGroupID: the GROUP_ID of the delegate group
+        @type delegateGroupID: C{int}
+        @param readWrite: remove read and write access if True, otherwise
+            read-only access
+        @type readWrite: C{boolean}
+        """
+        return DelegateGroupsRecord.deletesimple(
+            self,
+            delegator=delegator.encode("utf-8"),
+            groupID=delegateGroupID,
+            readWrite=(1 if readWrite else 0),
+        )
+
+
+    def removeDelegateGroups(self, delegator, readWrite):
+        """
+        Removes all rows for this delegator/readWrite combination from the
+        DELEGATE_GROUPS table.
+
+        @param delegator: the UID of the delegator
+        @type delegator: C{unicode}
+        @param readWrite: remove read and write access if True, otherwise
+            read-only access
+        @type readWrite: C{boolean}
+        """
+        return DelegateGroupsRecord.deletesimple(
+            self,
+            delegator=delegator.encode("utf-8"),
+            readWrite=(1 if readWrite else 0),
+        )
+
+
+    @inlineCallbacks
+    def delegates(self, delegator, readWrite, expanded=False):
+        """
+        Returns the UIDs of all delegates for the given delegator.  If
+        expanded is False, only the direct delegates (users and groups)
+        are returned.  If expanded is True, the expanded membership is
+        returned, not including the groups themselves.
+
+        @param delegator: the UID of the delegator
+        @type delegator: C{unicode}
+        @param readWrite: the access-type to check for; read and write
+            access if True, otherwise read-only access
+        @type readWrite: C{boolean}
+        @returns: the UIDs of the delegates (for the specified access
+            type)
+        @rtype: a Deferred resulting in a set
+        """
+        delegates = set()
+        delegatorU = delegator.encode("utf-8")
+
+        # First get the direct delegates
+        results = yield DelegateRecord.query(
+            self,
+            (DelegateRecord.delegator == delegatorU).And(
+                DelegateRecord.readWrite == (1 if readWrite else 0)
+            )
+        )
+        delegates.update([record.delegate.decode("utf-8") for record in results])
+
+        if expanded:
+            # Get those who are in groups which have been delegated to
+            results = yield DelegateGroupsRecord.indirectDelegates(
+                self, delegator, readWrite
+            )
+            # Skip the delegator if they are in one of the groups
+            delegates.update([record.memberUID.decode("utf-8") for record in results if record.memberUID != delegatorU])
+
+        else:
+            # Get the directly-delegated-to groups
+            results = yield DelegateGroupsRecord.delegateGroups(
+                self, delegator, readWrite,
+            )
+            delegates.update([record.groupUID.decode("utf-8") for record in results])
+
+        returnValue(delegates)
+
+
+    @inlineCallbacks
+    def delegators(self, delegate, readWrite):
+        """
+        Returns the UIDs of all delegators which have granted access to
+        the given delegate, either directly or indirectly via groups.
+
+        @param delegate: the UID of the delegate
+        @type delegate: C{unicode}
+        @param readWrite: the access-type to check for; read and write
+            access if True, otherwise read-only access
+        @type readWrite: C{boolean}
+        @returns: the UIDs of the delegators (for the specified access
+            type)
+        @rtype: a Deferred resulting in a set
+        """
+        delegators = set()
+        delegateU = delegate.encode("utf-8")
+
+        # First get the direct delegators
+        results = yield DelegateRecord.query(
+            self,
+            (DelegateRecord.delegate == delegateU).And(
+                DelegateRecord.readWrite == (1 if readWrite else 0)
+            )
+        )
+        delegators.update([record.delegator.decode("utf-8") for record in results])
+
+        # Finally get those who have delegated to groups the delegate
+        # is a member of
+        results = yield DelegateGroupsRecord.indirectDelegators(
+            self, delegate, readWrite
+        )
+        # Don't report the delegate as their own delegator when they are a
+        # member of a group they themselves delegated to
+        delegators.update([record.delegator.decode("utf-8") for record in results if record.delegator != delegateU])
+
+        returnValue(delegators)
+
+
+    @inlineCallbacks
+    def delegatorsToGroup(self, delegateGroupID, readWrite):
+        """
+        Return the UIDs of those who have delegated to the given group with the
+        given access level.
+
+        @param delegateGroupID: the group ID of the delegate group
+        @type delegateGroupID: C{int}
+        @param readWrite: the access-type to check for; read and write
+            access if True, otherwise read-only access
+        @type readWrite: C{boolean}
+        @returns: the UIDs of the delegators (for the specified access
+            type)
+        @rtype: a Deferred resulting in a set
+
+        """
+        results = yield DelegateGroupsRecord.query(
+            self,
+            (DelegateGroupsRecord.groupID == delegateGroupID).And(
+                DelegateGroupsRecord.readWrite == (1 if readWrite else 0)
+            )
+        )
+        delegators = set([record.delegator.decode("utf-8") for record in results])
+        returnValue(delegators)
+
+
+    @inlineCallbacks
+    def allGroupDelegates(self):
+        """
+        Return the UIDs of all groups which have been delegated to.  Useful
+        for obtaining the set of groups which need to be synchronized from
+        the directory.
+
+        @returns: the UIDs of all delegated-to groups
+        @rtype: a Deferred resulting in a set
+        """
+
+        results = yield DelegateGroupsRecord.allGroupDelegates(self)
+        delegates = set([record.groupUID.decode("utf-8") for record in results])
+
+        returnValue(delegates)
+
+
+    @inlineCallbacks
+    def externalDelegates(self):
+        """
+        Returns a dictionary mapping delegator UIDs to (read-group, write-group)
+        tuples, including only those assignments that originated from the
+        directory.
+
+        @returns: dictionary mapping delegator uid to (readDelegateUID,
+            writeDelegateUID) tuples
+        @rtype: a Deferred resulting in a dictionary
+        """
+        delegates = {}
+
+        # Get the externally managed delegates (which are all groups)
+        results = yield ExternalDelegateGroupsRecord.all(self)
+        for record in results:
+            delegates[record.delegator.encode("utf-8")] = (
+                record.groupUIDRead.encode("utf-8") if record.groupUIDRead else None,
+                record.groupUIDWrite.encode("utf-8") if record.groupUIDWrite else None
+            )
+
+        returnValue(delegates)
+
+
+    @inlineCallbacks
+    def assignExternalDelegates(
+        self, delegator, readDelegateGroupID, writeDelegateGroupID,
+        readDelegateUID, writeDelegateUID
+    ):
+        """
+        Update the external delegate group table so we can quickly identify
+        diffs next time, and update the delegate group table itself.
+
+        @param delegator: the delegator whose externally managed group
+            delegates are being replaced
+        @type delegator: C{UUID}
+        """
+
+        # Delete existing external assignments for the delegator
+        yield DelegateGroupsRecord.deletesimple(
+            self,
+            delegator=str(delegator),
+            isExternal=1,
+        )
+
+        # Remove from the external comparison table
+        yield ExternalDelegateGroupsRecord.deletesimple(
+            self,
+            delegator=str(delegator),
+        )
+
+        # Store new assignments in the external comparison table
+        if readDelegateUID or writeDelegateUID:
+            readDelegateForDB = (
+                readDelegateUID.encode("utf-8") if readDelegateUID else ""
+            )
+            writeDelegateForDB = (
+                writeDelegateUID.encode("utf-8") if writeDelegateUID else ""
+            )
+            yield ExternalDelegateGroupsRecord.create(
+                self,
+                delegator=str(delegator),
+                groupUIDRead=readDelegateForDB,
+                groupUIDWrite=writeDelegateForDB,
+            )
+
+        # Apply new assignments
+        if readDelegateGroupID is not None:
+            yield self.addDelegateGroup(
+                delegator, readDelegateGroupID, False, isExternal=True
+            )
+        if writeDelegateGroupID is not None:
+            yield self.addDelegateGroup(
+                delegator, writeDelegateGroupID, True, isExternal=True
+            )
+
+
+    def dumpIndividualDelegatesLocal(self, delegator):
+        """
+        Get the L{DelegateRecord} for all delegates associated with this delegator.
+        """
+        return DelegateRecord.querysimple(self, delegator=delegator.encode("utf-8"))
+
+
+    @inlineCallbacks
+    def dumpIndividualDelegatesExternal(self, delegator):
+        """
+        Get the L{DelegateRecord} for all delegates associated with this delegator.
+        """
+        raw_results = yield self.store().conduit.send_dump_individual_delegates(self, delegator)
+        returnValue([DelegateRecord.deserialize(row) for row in raw_results])
+
+
+    def dumpGroupDelegatesLocal(self, delegator):
+        """
+        Get the (L{DelegateGroupsRecord}, L{GroupsRecord}) pairs for all group delegates associated with this delegator.
+        """
+        return DelegateGroupsRecord.delegatorGroups(self, delegator)
+
+
+    @inlineCallbacks
+    def dumpGroupDelegatesExternal(self, delegator):
+        """
+        Get the (L{DelegateGroupsRecord}, L{GroupsRecord}) pairs for all group delegates associated with this delegator.
+        """
+        raw_results = yield self.store().conduit.send_dump_group_delegates(self, delegator)
+        returnValue([(DelegateGroupsRecord.deserialize(row[0]), GroupsRecord.deserialize(row[1]),) for row in raw_results])
+
+
+    def dumpExternalDelegatesLocal(self, delegator):
+        """
+        Get the L{ExternalDelegateGroupsRecord} for all delegates associated with this delegator.
+        """
+        return ExternalDelegateGroupsRecord.querysimple(self, delegator=delegator.encode("utf-8"))
+
+
+    @inlineCallbacks
+    def dumpExternalDelegatesExternal(self, delegator):
+        """
+        Get the L{ExternalDelegateGroupsRecord} for all delegates associated with this delegator.
+        """
+        raw_results = yield self.store().conduit.send_dump_external_delegates(self, delegator)
+        returnValue([ExternalDelegateGroupsRecord.deserialize(row) for row in raw_results])

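A minimal usage sketch of the delegates API added in sql_directory.py above, assuming a store transaction txn whose class mixes in DelegatesAPIMixin; user01 and user02 are placeholder UIDs:

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def grantAndListDelegates(txn):
        # Grant user02 read-write access to user01's data.
        yield txn.addDelegate(u"user01", u"user02", True)
        # Direct delegates only: individual users plus delegated-to group UIDs.
        direct = yield txn.delegates(u"user01", True, expanded=False)
        # Expanded group membership, with the groups themselves omitted.
        expanded = yield txn.delegates(u"user01", True, expanded=True)
        returnValue((direct, expanded))
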
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_external.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_external.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_external.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -18,7 +18,6 @@
 SQL data store.
 """
 
-from twext.internet.decorate import memoizedKey
 from twext.python.log import Logger
 
 from twisted.internet.defer import inlineCallbacks, returnValue, succeed
@@ -26,6 +25,8 @@
 from txdav.base.propertystore.sql import PropertyStore
 from txdav.common.datastore.sql import CommonHome, CommonHomeChild, \
     CommonObjectResource
+from txdav.common.datastore.sql_notification import NotificationCollection, \
+    NotificationObjectRecord
 from txdav.common.datastore.sql_tables import _HOME_STATUS_EXTERNAL
 from txdav.common.icommondatastore import NonExistentExternalShare, \
     ExternalShareFailed
@@ -40,19 +41,63 @@
     are all stubbed out since no data for the user is actually hosted in this store.
     """
 
-    def __init__(self, transaction, ownerUID, resourceID):
-        super(CommonHomeExternal, self).__init__(transaction, ownerUID)
-        self._resourceID = resourceID
-        self._status = _HOME_STATUS_EXTERNAL
+    @classmethod
+    def makeSyntheticExternalHome(cls, transaction, diruid, resourceID):
+        """
+        During migration we need to refer to the remote home as an external home but without having a local representation
+        of it in the store. There will be a new local store home for the migrating user that will operate on local store
+        objects. The synthetic home operates only on remote objects.
 
+        @param diruid: directory UID of user
+        @type diruid: L{str}
+        @param resourceID: resource ID in the remote store
+        @type resourceID: L{int}
+        """
+        attrMap = {
+            "_resourceID": resourceID,
+            "_ownerUID": diruid,
+            "_status": _HOME_STATUS_EXTERNAL,
+        }
+        homeData = [attrMap.get(attr) for attr in cls.homeAttributes()]
+        result = cls(transaction, homeData)
+        result._childClass = result._childClass._externalClass
+        return result
 
-    def initFromStore(self, no_cache=False):
+
+    def __init__(self, transaction, homeData):
+        super(CommonHomeExternal, self).__init__(transaction, homeData)
+
+
+    def initFromStore(self):
         """
-        Never called - this should be done by CommonHome.initFromStore only.
+        No-op for an external home, as there are no metadata or properties to load.
         """
-        raise AssertionError("CommonHomeExternal: not supported")
+        return succeed(self)
 
 
+    @inlineCallbacks
+    def readMetaData(self):
+        """
+        Read the home metadata from remote home and save as attributes on this object.
+        """
+        mapping = yield self._txn.store().conduit.send_home_metadata(self)
+        self.deserialize(mapping)
+
+
+    def setStatus(self, newStatus):
+        return self._txn.store().conduit.send_home_set_status(self, newStatus)
+
+
+    def setLocalStatus(self, newStatus):
+        """
+        Set the status on the object in the local store, not the remote one.
+
+        @param newStatus: the new status to set
+        @type newStatus: L{int}
+        """
+        return super(CommonHomeExternal, self).setStatus(newStatus)
+
+
     def external(self):
         """
         Is this an external home.
@@ -76,15 +121,14 @@
         raise AssertionError("CommonHomeExternal: not supported")
 
 
-    @memoizedKey("name", "_children")
     @inlineCallbacks
-    def createChildWithName(self, name, externalID=None):
+    def createChildWithName(self, name, bindUID=None):
         """
         No real children - only external ones.
         """
-        if externalID is None:
+        if bindUID is None:
             raise AssertionError("CommonHomeExternal: not supported")
-        child = yield super(CommonHomeExternal, self).createChildWithName(name, externalID)
+        child = yield super(CommonHomeExternal, self).createChildWithName(name, bindUID)
         returnValue(child)
 
 
@@ -101,7 +145,7 @@
         Remove an external child. Check that it is invalid or unused before calling this because if there
         are valid references to it, removing will break things.
         """
-        if child._externalID is None:
+        if child._bindUID is None:
             raise AssertionError("CommonHomeExternal: not supported")
         yield super(CommonHomeExternal, self).removeChildWithName(child.name())
 
@@ -175,11 +219,17 @@
         raise AssertionError("CommonHomeExternal: not supported")
 
 
-#    def ownerHomeAndChildNameForChildID(self, resourceID):
-#        """
-#        No children.
-#        """
-#        raise AssertionError("CommonHomeExternal: not supported")
+    @inlineCallbacks
+    def sharedToBindRecords(self):
+        results = yield self._txn.store().conduit.send_home_shared_to_records(self)
+        returnValue(dict([(
+            k,
+            (
+                self._childClass._bindRecordClass.deserialize(v[0]),
+                self._childClass._bindRecordClass.deserialize(v[1]),
+                self._childClass._metadataRecordClass.deserialize(v[2]),
+            ),
+        ) for k, v in results.items()]))
 
 
 
@@ -190,7 +240,6 @@
     """
 
     @classmethod
-    @inlineCallbacks
     def listObjects(cls, home):
         """
         Retrieve the names of the children that exist in the given home.
@@ -198,8 +247,7 @@
         @return: an iterable of C{str}s.
         """
 
-        results = yield home._txn.store().conduit.send_homechild_listobjects(home)
-        returnValue(results)
+        return home._txn.store().conduit.send_homechild_listobjects(home)
 
 
     @classmethod
@@ -209,18 +257,18 @@
 
         results = []
         for mapping in raw_results:
-            child = yield cls.internalize(home, mapping)
+            child = yield cls.deserialize(home, mapping)
             results.append(child)
         returnValue(results)
 
 
     @classmethod
     @inlineCallbacks
-    def objectWith(cls, home, name=None, resourceID=None, externalID=None, accepted=True, onlyInTrash=False):
-        mapping = yield home._txn.store().conduit.send_homechild_objectwith(home, name, resourceID, externalID, accepted, onlyInTrash)
+    def objectWith(cls, home, name=None, resourceID=None, bindUID=None, accepted=True, onlyInTrash=False):
+        mapping = yield home._txn.store().conduit.send_homechild_objectwith(home, name, resourceID, bindUID, accepted, onlyInTrash)
 
         if mapping:
-            child = yield cls.internalize(home, mapping)
+            child = yield cls.deserialize(home, mapping)
             returnValue(child)
         else:
             returnValue(None)
@@ -310,15 +358,14 @@
 
 
     @inlineCallbacks
-    def syncToken(self):
+    def syncTokenRevision(self):
         if self._syncTokenRevision is None:
             try:
-                token = yield self._txn.store().conduit.send_homechild_synctoken(self)
-                self._syncTokenRevision = self.revisionFromToken(token)
+                revision = yield self._txn.store().conduit.send_homechild_synctokenrevision(self)
             except NonExistentExternalShare:
                 yield self.fixNonExistentExternalShare()
                 raise ExternalShareFailed("External share does not exist")
-        returnValue(("%s_%s" % (self._externalID, self._syncTokenRevision,)))
+        returnValue(revision)
 
 
     @inlineCallbacks
@@ -343,7 +390,17 @@
         returnValue(results)
 
 
+    @inlineCallbacks
+    def sharingBindRecords(self):
+        results = yield self._txn.store().conduit.send_homechild_sharing_records(self)
+        returnValue(dict([(k, self._bindRecordClass.deserialize(v),) for k, v in results.items()]))
 
+
+    def migrateBindRecords(self, bindUID):
+        return self._txn.store().conduit.send_homechild_migrate_sharing_records(self, bindUID)
+
+
+
 class CommonObjectResourceExternal(CommonObjectResource):
     """
     A CommonObjectResource for a resource not hosted on this system, but on another pod. This will forward
@@ -358,7 +415,7 @@
         results = []
         if mapping_list:
             for mapping in mapping_list:
-                child = yield cls.internalize(parent, mapping)
+                child = yield cls.deserialize(parent, mapping)
                 results.append(child)
         returnValue(results)
 
@@ -371,23 +428,19 @@
         results = []
         if mapping_list:
             for mapping in mapping_list:
-                child = yield cls.internalize(parent, mapping)
+                child = yield cls.deserialize(parent, mapping)
                 results.append(child)
         returnValue(results)
 
 
     @classmethod
-    @inlineCallbacks
     def listObjects(cls, parent):
-        results = yield parent._txn.store().conduit.send_objectresource_listobjects(parent)
-        returnValue(results)
+        return parent._txn.store().conduit.send_objectresource_listobjects(parent)
 
 
     @classmethod
-    @inlineCallbacks
     def countObjects(cls, parent):
-        result = yield parent._txn.store().conduit.send_objectresource_countobjects(parent)
-        returnValue(result)
+        return parent._txn.store().conduit.send_objectresource_countobjects(parent)
 
 
     @classmethod
@@ -396,24 +449,20 @@
         mapping = yield parent._txn.store().conduit.send_objectresource_objectwith(parent, name, uid, resourceID)
 
         if mapping:
-            child = yield cls.internalize(parent, mapping)
+            child = yield cls.deserialize(parent, mapping)
             returnValue(child)
         else:
             returnValue(None)
 
 
     @classmethod
-    @inlineCallbacks
     def resourceNameForUID(cls, parent, uid):
-        result = yield parent._txn.store().conduit.send_objectresource_resourcenameforuid(parent, uid)
-        returnValue(result)
+        return parent._txn.store().conduit.send_objectresource_resourcenameforuid(parent, uid)
 
 
     @classmethod
-    @inlineCallbacks
     def resourceUIDForName(cls, parent, name):
-        result = yield parent._txn.store().conduit.send_objectresource_resourceuidforname(parent, name)
-        returnValue(result)
+        return parent._txn.store().conduit.send_objectresource_resourceuidforname(parent, name)
 
 
     @classmethod
@@ -422,7 +471,7 @@
         mapping = yield parent._txn.store().conduit.send_objectresource_create(parent, name, str(component), options=options)
 
         if mapping:
-            child = yield cls.internalize(parent, mapping)
+            child = yield cls.deserialize(parent, mapping)
             returnValue(child)
         else:
             returnValue(None)
@@ -444,6 +493,46 @@
         returnValue(self._cachedComponent)
 
 
+    def remove(self):
+        return self._txn.store().conduit.send_objectresource_remove(self)
+
+
+
+class NotificationCollectionExternal(NotificationCollection):
+    """
+    A NotificationCollection for a resource not hosted on this system, but on another pod. This will forward
+    specific APIs to the other pod using cross-pod requests.
+    """
+
+    @classmethod
+    def notificationsWithUID(cls, txn, uid, create=False):
+        return super(NotificationCollectionExternal, cls).notificationsWithUID(txn, uid, status=_HOME_STATUS_EXTERNAL, create=create)
+
+
+    def initFromStore(self):
+        """
+        No-op for an external notification collection, as there are no properties to load.
+        """
+        return succeed(self)
+
+
     @inlineCallbacks
-    def remove(self):
-        yield self._txn.store().conduit.send_objectresource_remove(self)
+    def notificationObjectRecords(self):
+        results = yield self._txn.store().conduit.send_notification_all_records(self)
+        returnValue(map(NotificationObjectRecord.deserialize, results))
+
+
+    def setStatus(self, newStatus):
+        return self._txn.store().conduit.send_notification_set_status(self, newStatus)
+
+
+    def setLocalStatus(self, newStatus):
+        """
+        Set the status on the object in the local store, not the remote one.
+
+        @param newStatus: the new status to set
+        @type newStatus: L{int}
+        """
+        return super(NotificationCollectionExternal, self).setStatus(newStatus)
+
+NotificationCollection._externalClass = NotificationCollectionExternal

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_imip.py (from rev 14551, CalendarServer/trunk/txdav/common/datastore/sql_imip.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_imip.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_imip.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,102 @@
+# -*- test-case-name: twext.enterprise.dal.test.test_record -*-
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from twext.enterprise.dal.record import SerializableRecord, fromTable
+from twext.enterprise.dal.syntax import utcNowSQL
+from twext.python.log import Logger
+from twisted.internet.defer import inlineCallbacks, returnValue
+from txdav.common.datastore.sql_tables import schema
+from txdav.common.icommondatastore import InvalidIMIPTokenValues
+from uuid import uuid4
+
+log = Logger()
+
+"""
+Classes and methods that relate to iMIP objects in the SQL store.
+"""
+
+class iMIPTokensRecord(SerializableRecord, fromTable(schema.IMIP_TOKENS)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.IMIP_TOKENS}.
+    """
+    pass
+
+
+
+class imipAPIMixin(object):
+    """
+    A mixin for L{CommonStoreTransaction} that covers the iMIP API.
+    """
+
+    # Create IMIP token
+    @inlineCallbacks
+    def imipCreateToken(self, organizer, attendee, icaluid, token=None):
+        if not (organizer and attendee and icaluid):
+            raise InvalidIMIPTokenValues()
+
+        if token is None:
+            token = str(uuid4())
+
+        try:
+            record = yield iMIPTokensRecord.create(
+                self,
+                token=token,
+                organizer=organizer,
+                attendee=attendee,
+                icaluid=icaluid
+            )
+        except Exception:
+            # TODO: is it okay if someone else created the same row just now?
+            record = yield self.imipGetToken(organizer, attendee, icaluid)
+        returnValue(record)
+
+
+    # Lookup IMIP organizer+attendee+icaluid for token
+    def imipLookupByToken(self, token):
+        return iMIPTokensRecord.querysimple(self, token=token)
+
+
+    # Lookup IMIP token for organizer+attendee+icaluid
+    @inlineCallbacks
+    def imipGetToken(self, organizer, attendee, icaluid):
+        records = yield iMIPTokensRecord.querysimple(
+            self,
+            organizer=organizer,
+            attendee=attendee,
+            icaluid=icaluid,
+        )
+        if records:
+            # update the timestamp
+            record = records[0]
+            yield record.update(accessed=utcNowSQL)
+        else:
+            record = None
+        returnValue(record)
+
+
+    # Remove IMIP token
+    def imipRemoveToken(self, token):
+        return iMIPTokensRecord.deletesimple(self, token=token)
+
+
+    # Purge old IMIP tokens
+    def purgeOldIMIPTokens(self, olderThan):
+        """
+        @param olderThan: remove tokens last accessed before this time
+        @type olderThan: C{datetime.datetime}
+        """
+        return iMIPTokensRecord.delete(self, iMIPTokensRecord.accessed < olderThan)

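A brief usage sketch of the iMIP token API added above, assuming a store transaction txn whose class mixes in imipAPIMixin; the organizer, attendee, and icaluid values are placeholders:

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def mapTokenBothWays(txn):
        # Create (or, on a conflict, fetch) the token for this triple.
        record = yield txn.imipCreateToken(
            u"urn:x-uid:organizer01", u"mailto:attendee@example.com", u"event-uid-1"
        )
        # Look the organizer/attendee/icaluid triple back up from the token.
        rows = yield txn.imipLookupByToken(record.token)
        returnValue(rows)
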
Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_notification.py (from rev 14551, CalendarServer/trunk/txdav/common/datastore/sql_notification.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_notification.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_notification.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,892 @@
+# -*- test-case-name: twext.enterprise.dal.test.test_record -*-
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from twext.enterprise.dal.record import SerializableRecord, fromTable
+from twext.enterprise.dal.syntax import Select, Parameter, Insert, \
+    SavepointAction, Delete, Max, Len, Update
+from twext.enterprise.util import parseSQLTimestamp
+from twext.internet.decorate import memoizedKey
+from twext.python.clsprop import classproperty
+from twext.python.log import Logger
+from twisted.internet.defer import inlineCallbacks, returnValue
+from twisted.python.util import FancyEqMixin
+from twistedcaldav.dateops import datetimeMktime
+from txdav.base.propertystore.sql import PropertyStore
+from txdav.common.datastore.sql_tables import schema, _HOME_STATUS_NORMAL, \
+    _HOME_STATUS_EXTERNAL, _HOME_STATUS_DISABLED, _HOME_STATUS_MIGRATING
+from txdav.common.datastore.sql_util import _SharedSyncLogic
+from txdav.common.icommondatastore import RecordNotAllowedError
+from txdav.common.idirectoryservice import DirectoryRecordNotFoundError
+from txdav.common.inotifications import INotificationCollection, \
+    INotificationObject
+from txdav.idav import ChangeCategory
+from txweb2.dav.noneprops import NonePropertyStore
+from txweb2.http_headers import MimeType
+from zope.interface.declarations import implements
+import hashlib
+import json
+
+"""
+Classes and methods that relate to the Notification collection in the SQL store.
+"""
+class NotificationCollection(FancyEqMixin, _SharedSyncLogic):
+    log = Logger()
+
+    implements(INotificationCollection)
+
+    compareAttributes = (
+        "_ownerUID",
+        "_resourceID",
+    )
+
+    _revisionsSchema = schema.NOTIFICATION_OBJECT_REVISIONS
+    _homeSchema = schema.NOTIFICATION_HOME
+
+    _externalClass = None
+
+
+    @classmethod
+    def makeClass(cls, transaction, homeData):
+        """
+        Build the actual home class taking into account the possibility that we might need to
+        switch in the external version of the class.
+
+        @param transaction: transaction
+        @type transaction: L{CommonStoreTransaction}
+        @param homeData: home table column data
+        @type homeData: C{list}
+        """
+
+        status = homeData[cls.homeColumns().index(cls._homeSchema.STATUS)]
+        if status == _HOME_STATUS_EXTERNAL:
+            home = cls._externalClass(transaction, homeData)
+        else:
+            home = cls(transaction, homeData)
+        return home.initFromStore()
+
+
+    @classmethod
+    def homeColumns(cls):
+        """
+        Return a list of column names to retrieve when doing an ownerUID->home lookup.
+        """
+
+        # Common behavior is to have created and modified
+
+        return (
+            cls._homeSchema.RESOURCE_ID,
+            cls._homeSchema.OWNER_UID,
+            cls._homeSchema.STATUS,
+        )
+
+
+    @classmethod
+    def homeAttributes(cls):
+        """
+        Return a list of attribute names to map L{homeColumns} to.
+        """
+
+        # Common behavior is to have created and modified
+
+        return (
+            "_resourceID",
+            "_ownerUID",
+            "_status",
+        )
+
+
+    def __init__(self, txn, homeData):
+
+        self._txn = txn
+
+        for attr, value in zip(self.homeAttributes(), homeData):
+            setattr(self, attr, value)
+
+        self._dataVersion = None
+        self._notifications = {}
+        self._notificationNames = None
+        self._syncTokenRevision = None
+
+        # Make sure we have push notifications setup to push on this collection
+        # as well as the home it is in
+        self._notifiers = dict([(factory_name, factory.newNotifier(self),) for factory_name, factory in txn._notifierFactories.items()])
+
+
+    @inlineCallbacks
+    def initFromStore(self):
+        """
+        Initialize this object from the store.
+        """
+
+        yield self._loadPropertyStore()
+        returnValue(self)
+
+
+    @property
+    def _home(self):
+        """
+        L{NotificationCollection} serves as its own C{_home} for the purposes of
+        working with L{_SharedSyncLogic}.
+        """
+        return self
+
+
+    @classmethod
+    def notificationsWithUID(cls, txn, uid, status=None, create=False):
+        return cls.notificationsWith(txn, None, uid, status=status, create=create)
+
+
+    @classmethod
+    def notificationsWithResourceID(cls, txn, rid):
+        return cls.notificationsWith(txn, rid, None)
+
+
+    @classmethod
+    @inlineCallbacks
+    def notificationsWith(cls, txn, rid, uid, status=None, create=False):
+        """
+        @param uid: the owner UID, assumed to be UTF-8 encoded bytes
+        """
+        if rid is not None:
+            query = cls._homeSchema.RESOURCE_ID == rid
+        elif uid is not None:
+            query = cls._homeSchema.OWNER_UID == uid
+            if status is not None:
+                query = query.And(cls._homeSchema.STATUS == status)
+            else:
+                statusSet = (_HOME_STATUS_NORMAL, _HOME_STATUS_EXTERNAL,)
+                if txn._allowDisabled:
+                    statusSet += (_HOME_STATUS_DISABLED,)
+                query = query.And(cls._homeSchema.STATUS.In(statusSet))
+        else:
+            raise AssertionError("One of rid or uid must be set")
+
+        results = yield Select(
+            cls.homeColumns(),
+            From=cls._homeSchema,
+            Where=query,
+        ).on(txn)
+
+        if len(results) > 1:
+            # Pick the best one in order: normal, disabled and external
+            byStatus = dict([(result[cls.homeColumns().index(cls._homeSchema.STATUS)], result) for result in results])
+            result = byStatus.get(_HOME_STATUS_NORMAL)
+            if result is None:
+                result = byStatus.get(_HOME_STATUS_DISABLED)
+            if result is None:
+                result = byStatus.get(_HOME_STATUS_EXTERNAL)
+        elif results:
+            result = results[0]
+        else:
+            result = None
+
+        if result:
+            # Return object that already exists in the store
+            homeObject = yield cls.makeClass(txn, result)
+            returnValue(homeObject)
+        else:
+            # Can only create when uid is specified
+            if not create or uid is None:
+                returnValue(None)
+
+            # Determine if the user is local or external
+            record = yield txn.directoryService().recordWithUID(uid.decode("utf-8"))
+            if record is None:
+                raise DirectoryRecordNotFoundError("Cannot create home for UID since no directory record exists: {}".format(uid))
+
+            if status is None:
+                createStatus = _HOME_STATUS_NORMAL if record.thisServer() else _HOME_STATUS_EXTERNAL
+            elif status == _HOME_STATUS_MIGRATING:
+                if record.thisServer():
+                    raise RecordNotAllowedError("Cannot migrate data for a user already hosted on this server")
+                createStatus = status
+            elif status in (_HOME_STATUS_NORMAL, _HOME_STATUS_EXTERNAL,):
+                createStatus = status
+            else:
+                raise RecordNotAllowedError("Cannot create home with status {}: {}".format(status, uid))
+
+            # Use savepoint so we can do a partial rollback if there is a race
+            # condition where this row has already been inserted
+            savepoint = SavepointAction("notificationsWithUID")
+            yield savepoint.acquire(txn)
+
+            try:
+                resourceid = (yield Insert(
+                    {
+                        cls._homeSchema.OWNER_UID: uid,
+                        cls._homeSchema.STATUS: createStatus,
+                    },
+                    Return=cls._homeSchema.RESOURCE_ID
+                ).on(txn))[0][0]
+            except Exception:
+                # FIXME: Really want to trap the pg.DatabaseError but in a non-
+                # DB specific manner
+                yield savepoint.rollback(txn)
+
+                # Retry the query - row may exist now, if not re-raise
+                results = yield Select(
+                    cls.homeColumns(),
+                    From=cls._homeSchema,
+                    Where=query,
+                ).on(txn)
+                if results:
+                    homeObject = yield cls.makeClass(txn, results[0])
+                    returnValue(homeObject)
+                else:
+                    raise
+            else:
+                yield savepoint.release(txn)
+
+                # Note that we must not cache the owner_uid->resource_id
+                # mapping in the query cacher when creating as we don't want that to appear
+                # until AFTER the commit
+                results = yield Select(
+                    cls.homeColumns(),
+                    From=cls._homeSchema,
+                    Where=cls._homeSchema.RESOURCE_ID == resourceid,
+                ).on(txn)
+                homeObject = yield cls.makeClass(txn, results[0])
+                if homeObject.normal():
+                    yield homeObject._initSyncToken()
+                    yield homeObject.notifyChanged()
+                returnValue(homeObject)
+
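+    # Illustrative call path (txn and "user01" are placeholders): callers
+    # normally use
+    #
+    #     notifications = yield NotificationCollection.notificationsWithUID(
+    #         txn, "user01", create=True
+    #     )
+    #
+    # and makeClass() then returns either this class or the registered
+    # _externalClass, depending on the row's STATUS column.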
+
+    @inlineCallbacks
+    def _loadPropertyStore(self):
+        self._propertyStore = yield PropertyStore.load(
+            self._ownerUID,
+            self._ownerUID,
+            None,
+            self._txn,
+            self._resourceID,
+            notifyCallback=self.notifyChanged
+        )
+
+
+    def __repr__(self):
+        return "<%s: %s>" % (self.__class__.__name__, self._resourceID)
+
+
+    def id(self):
+        """
+        Retrieve the store identifier for this collection.
+
+        @return: store identifier.
+        @rtype: C{int}
+        """
+        return self._resourceID
+
+
+    @classproperty
+    def _dataVersionQuery(cls):
+        nh = cls._homeSchema
+        return Select(
+            [nh.DATAVERSION], From=nh,
+            Where=nh.RESOURCE_ID == Parameter("resourceID")
+        )
+
+
+    @inlineCallbacks
+    def dataVersion(self):
+        if self._dataVersion is None:
+            self._dataVersion = (yield self._dataVersionQuery.on(
+                self._txn, resourceID=self._resourceID))[0][0]
+        returnValue(self._dataVersion)
+
+
+    def name(self):
+        return "notification"
+
+
+    def uid(self):
+        return self._ownerUID
+
+
+    def status(self):
+        return self._status
+
+
+    @inlineCallbacks
+    def setStatus(self, newStatus):
+        """
+        Set the status of this notification home.
+        """
+        # Only if different
+        if self._status != newStatus:
+            yield Update(
+                {self._homeSchema.STATUS: newStatus},
+                Where=(self._homeSchema.RESOURCE_ID == self._resourceID),
+            ).on(self._txn)
+            self._status = newStatus
+
+
+    def normal(self):
+        """
+        Is this a normal (internal) home.
+
+        @return: a L{bool}.
+        """
+        return self._status == _HOME_STATUS_NORMAL
+
+
+    def external(self):
+        """
+        Is this an external home.
+
+        @return: a L{bool}.
+        """
+        return self._status == _HOME_STATUS_EXTERNAL
+
+
+    def owned(self):
+        return True
+
+
+    def ownerHome(self):
+        return self._home
+
+
+    def viewerHome(self):
+        return self._home
+
+
+    def notificationObjectRecords(self):
+        return NotificationObjectRecord.querysimple(self._txn, notificationHomeResourceID=self.id())
+
+
+    @inlineCallbacks
+    def notificationObjects(self):
+        results = (yield NotificationObject.loadAllObjects(self))
+        for result in results:
+            self._notifications[result.uid()] = result
+        self._notificationNames = sorted([result.name() for result in results])
+        returnValue(results)
+
+    _notificationUIDsForHomeQuery = Select(
+        [schema.NOTIFICATION.NOTIFICATION_UID], From=schema.NOTIFICATION,
+        Where=schema.NOTIFICATION.NOTIFICATION_HOME_RESOURCE_ID ==
+        Parameter("resourceID"))
+
+
+    @inlineCallbacks
+    def listNotificationObjects(self):
+        if self._notificationNames is None:
+            rows = yield self._notificationUIDsForHomeQuery.on(
+                self._txn, resourceID=self._resourceID)
+            self._notificationNames = sorted([row[0] for row in rows])
+        returnValue(self._notificationNames)
+
+
+    # used by _SharedSyncLogic.resourceNamesSinceRevision()
+    def listObjectResources(self):
+        return self.listNotificationObjects()
+
+
+    def _nameToUID(self, name):
+        """
+        Based on the file-backed implementation, the 'name' is just uid +
+        ".xml".
+        """
+        return name.rsplit(".", 1)[0]
+
+
+    def notificationObjectWithName(self, name):
+        return self.notificationObjectWithUID(self._nameToUID(name))
+
+
+    @memoizedKey("uid", "_notifications")
+    @inlineCallbacks
+    def notificationObjectWithUID(self, uid):
+        """
+        Create an empty notification object first then have it initialize itself
+        from the store.
+        """
+        no = NotificationObject(self, uid)
+        no = (yield no.initFromStore())
+        returnValue(no)
+
+
+    @inlineCallbacks
+    def writeNotificationObject(self, uid, notificationtype, notificationdata):
+
+        inserting = False
+        notificationObject = yield self.notificationObjectWithUID(uid)
+        if notificationObject is None:
+            notificationObject = NotificationObject(self, uid)
+            inserting = True
+        yield notificationObject.setData(uid, notificationtype, notificationdata, inserting=inserting)
+        if inserting:
+            yield self._insertRevision("%s.xml" % (uid,))
+            if self._notificationNames is not None:
+                self._notificationNames.append(notificationObject.uid())
+        else:
+            yield self._updateRevision("%s.xml" % (uid,))
+        yield self.notifyChanged()
+        returnValue(notificationObject)
+
+
+    def removeNotificationObjectWithName(self, name):
+        if self._notificationNames is not None:
+            self._notificationNames.remove(self._nameToUID(name))
+        return self.removeNotificationObjectWithUID(self._nameToUID(name))
+
+    _removeByUIDQuery = Delete(
+        From=schema.NOTIFICATION,
+        Where=(schema.NOTIFICATION.NOTIFICATION_UID == Parameter("uid")).And(
+            schema.NOTIFICATION.NOTIFICATION_HOME_RESOURCE_ID
+            == Parameter("resourceID")))
+
+
+    @inlineCallbacks
+    def removeNotificationObjectWithUID(self, uid):
+        yield self._removeByUIDQuery.on(
+            self._txn, uid=uid, resourceID=self._resourceID)
+        self._notifications.pop(uid, None)
+        yield self._deleteRevision("%s.xml" % (uid,))
+        yield self.notifyChanged()
+
+    _initSyncTokenQuery = Insert(
+        {
+            _revisionsSchema.HOME_RESOURCE_ID : Parameter("resourceID"),
+            _revisionsSchema.RESOURCE_NAME    : None,
+            _revisionsSchema.REVISION         : schema.REVISION_SEQ,
+            _revisionsSchema.DELETED          : False
+        }, Return=_revisionsSchema.REVISION
+    )
+
+
+    @inlineCallbacks
+    def _initSyncToken(self):
+        self._syncTokenRevision = (yield self._initSyncTokenQuery.on(
+            self._txn, resourceID=self._resourceID))[0][0]
+
+    _syncTokenQuery = Select(
+        [Max(_revisionsSchema.REVISION)], From=_revisionsSchema,
+        Where=_revisionsSchema.HOME_RESOURCE_ID == Parameter("resourceID")
+    )
+
+
+    @inlineCallbacks
+    def syncToken(self):
+        if self._syncTokenRevision is None:
+            self._syncTokenRevision = yield self.syncTokenRevision()
+        returnValue("%s_%s" % (self._resourceID, self._syncTokenRevision))
+
+
+    @inlineCallbacks
+    def syncTokenRevision(self):
+        revision = (yield self._syncTokenQuery.on(self._txn, resourceID=self._resourceID))[0][0]
+        if revision is None:
+            revision = int((yield self._txn.calendarserverValue("MIN-VALID-REVISION")))
+        returnValue(revision)
+
+
+    def properties(self):
+        return self._propertyStore
+
+
+    def addNotifier(self, factory_name, notifier):
+        if self._notifiers is None:
+            self._notifiers = {}
+        self._notifiers[factory_name] = notifier
+
+
+    def getNotifier(self, factory_name):
+        return self._notifiers.get(factory_name)
+
+
+    def notifierID(self):
+        return (self._txn._homeClass[self._txn._primaryHomeType]._notifierPrefix, "%s/notification" % (self.ownerHome().uid(),),)
+
+
+    def parentNotifierID(self):
+        return (self._txn._homeClass[self._txn._primaryHomeType]._notifierPrefix, "%s" % (self.ownerHome().uid(),),)
+
+
+    @inlineCallbacks
+    def notifyChanged(self, category=ChangeCategory.default):
+        """
+        Send notifications, change the sync token and bump last modified because
+        the resource has changed.  We ensure we only do this once per object
+        per transaction.
+        """
+        if self._txn.isNotifiedAlready(self):
+            returnValue(None)
+        self._txn.notificationAddedForObject(self)
+
+        # Send notifications
+        if self._notifiers:
+            # cache notifiers run in post commit
+            notifier = self._notifiers.get("cache", None)
+            if notifier:
+                self._txn.postCommit(notifier.notify)
+            # push notifiers add their work items immediately
+            notifier = self._notifiers.get("push", None)
+            if notifier:
+                yield notifier.notify(self._txn, priority=category.value)
+
+        returnValue(None)
+
+
+    @classproperty
+    def _completelyNewRevisionQuery(cls):
+        rev = cls._revisionsSchema
+        return Insert({rev.HOME_RESOURCE_ID: Parameter("homeID"),
+                       # rev.RESOURCE_ID: Parameter("resourceID"),
+                       rev.RESOURCE_NAME: Parameter("name"),
+                       rev.REVISION: schema.REVISION_SEQ,
+                       rev.DELETED: False},
+                      Return=rev.REVISION)
+
+
+    def _maybeNotify(self):
+        """
+        Emit a push notification after C{_changeRevision}.
+        """
+        return self.notifyChanged()
+
+
+    @inlineCallbacks
+    def remove(self):
+        """
+        Remove DB rows corresponding to this notification home.
+        """
+        # Delete NOTIFICATION rows
+        no = schema.NOTIFICATION
+        kwds = {"ResourceID": self._resourceID}
+        yield Delete(
+            From=no,
+            Where=(
+                no.NOTIFICATION_HOME_RESOURCE_ID == Parameter("ResourceID")
+            ),
+        ).on(self._txn, **kwds)
+
+        # Delete NOTIFICATION_HOME (will cascade to NOTIFICATION_OBJECT_REVISIONS)
+        nh = schema.NOTIFICATION_HOME
+        yield Delete(
+            From=nh,
+            Where=(
+                nh.RESOURCE_ID == Parameter("ResourceID")
+            ),
+        ).on(self._txn, **kwds)
+
+
+
+class NotificationObjectRecord(SerializableRecord, fromTable(schema.NOTIFICATION)):
+    """
+    @DynamicAttrs
+    L{Record} for L{schema.NOTIFICATION}.
+    """
+    pass
+
+
+
+class NotificationObject(FancyEqMixin, object):
+    """
+    This used to store XML data and an XML element for the type. But we are now switching it
+    to use JSON internally. The app layer will convert that to XML and fill in the "blanks" as
+    needed for the app.
+    """
+    log = Logger()
+
+    implements(INotificationObject)
+
+    compareAttributes = (
+        "_resourceID",
+        "_home",
+    )
+
+    _objectSchema = schema.NOTIFICATION
+
+    def __init__(self, home, uid):
+        self._home = home
+        self._resourceID = None
+        self._uid = uid
+        self._md5 = None
+        self._size = None
+        self._created = None
+        self._modified = None
+        self._notificationType = None
+        self._notificationData = None
+
+
+    def __repr__(self):
+        return "<%s: %s>" % (self.__class__.__name__, self._resourceID)
+
+
+    @classproperty
+    def _allColumnsByHomeIDQuery(cls):
+        """
+        DAL query to load all columns by home ID.
+        """
+        obj = cls._objectSchema
+        return Select(
+            [obj.RESOURCE_ID, obj.NOTIFICATION_UID, obj.MD5,
+             Len(obj.NOTIFICATION_DATA), obj.NOTIFICATION_TYPE, obj.CREATED, obj.MODIFIED],
+            From=obj,
+            Where=(obj.NOTIFICATION_HOME_RESOURCE_ID == Parameter("homeID"))
+        )
+
+
+    @classmethod
+    @inlineCallbacks
+    def loadAllObjects(cls, parent):
+        """
+        Load all child objects and return a list of them. This must create the
+        child classes and initialize them using "batched" SQL operations so that
+        the number of queries stays constant with respect to the number of
+        children. This is an optimization for Depth:1 operations on the collection.
+        """
+
+        results = []
+
+        # Load from the main table first
+        dataRows = (
+            yield cls._allColumnsByHomeIDQuery.on(parent._txn,
+                                                  homeID=parent._resourceID))
+
+        if dataRows:
+            # Get property stores for all these child resources (if any found)
+            propertyStores = (yield PropertyStore.forMultipleResources(
+                parent.uid(),
+                None,
+                None,
+                parent._txn,
+                schema.NOTIFICATION.RESOURCE_ID,
+                schema.NOTIFICATION.NOTIFICATION_HOME_RESOURCE_ID,
+                parent._resourceID,
+            ))
+
+        # Create the actual objects merging in properties
+        for row in dataRows:
+            child = cls(parent, None)
+            (child._resourceID,
+             child._uid,
+             child._md5,
+             child._size,
+             child._notificationType,
+             child._created,
+             child._modified,) = tuple(row)
+            child._created = parseSQLTimestamp(child._created)
+            child._modified = parseSQLTimestamp(child._modified)
+            try:
+                child._notificationType = json.loads(child._notificationType)
+            except ValueError:
+                pass
+            if isinstance(child._notificationType, unicode):
+                child._notificationType = child._notificationType.encode("utf-8")
+            child._loadPropertyStore(
+                props=propertyStores.get(child._resourceID, None)
+            )
+            results.append(child)
+
+        returnValue(results)
+
+
+    @classproperty
+    def _oneNotificationQuery(cls):
+        no = cls._objectSchema
+        return Select(
+            [
+                no.RESOURCE_ID,
+                no.MD5,
+                Len(no.NOTIFICATION_DATA),
+                no.NOTIFICATION_TYPE,
+                no.CREATED,
+                no.MODIFIED
+            ],
+            From=no,
+            Where=(no.NOTIFICATION_UID ==
+                   Parameter("uid")).And(no.NOTIFICATION_HOME_RESOURCE_ID ==
+                                         Parameter("homeID")))
+
+
+    @inlineCallbacks
+    def initFromStore(self):
+        """
+        Initialise this object from the store, based on its UID and home
+        resource ID. We read in and cache all the extra metadata from the DB to
+        avoid having to do DB queries for them individually later.
+
+        @return: L{self} if object exists in the DB, else C{None}
+        """
+        rows = (yield self._oneNotificationQuery.on(
+            self._txn, uid=self._uid, homeID=self._home._resourceID))
+        if rows:
+            (self._resourceID,
+             self._md5,
+             self._size,
+             self._notificationType,
+             self._created,
+             self._modified,) = tuple(rows[0])
+            self._created = parseSQLTimestamp(self._created)
+            self._modified = parseSQLTimestamp(self._modified)
+            try:
+                self._notificationType = json.loads(self._notificationType)
+            except ValueError:
+                pass
+            if isinstance(self._notificationType, unicode):
+                self._notificationType = self._notificationType.encode("utf-8")
+            self._loadPropertyStore()
+            returnValue(self)
+        else:
+            returnValue(None)
+
+
+    def _loadPropertyStore(self, props=None, created=False):
+        if props is None:
+            props = NonePropertyStore(self._home.uid())
+        self._propertyStore = props
+
+
+    def properties(self):
+        return self._propertyStore
+
+
+    def id(self):
+        """
+        Retrieve the store identifier for this object.
+
+        @return: store identifier.
+        @rtype: C{int}
+        """
+        return self._resourceID
+
+
+    @property
+    def _txn(self):
+        return self._home._txn
+
+
+    def notificationCollection(self):
+        return self._home
+
+
+    def uid(self):
+        return self._uid
+
+
+    def name(self):
+        return self.uid() + ".xml"
+
+
+    @classproperty
+    def _newNotificationQuery(cls):
+        no = cls._objectSchema
+        return Insert(
+            {
+                no.NOTIFICATION_HOME_RESOURCE_ID: Parameter("homeID"),
+                no.NOTIFICATION_UID: Parameter("uid"),
+                no.NOTIFICATION_TYPE: Parameter("notificationType"),
+                no.NOTIFICATION_DATA: Parameter("notificationData"),
+                no.MD5: Parameter("md5"),
+            },
+            Return=[no.RESOURCE_ID, no.CREATED, no.MODIFIED]
+        )
+
+
+    @classproperty
+    def _updateNotificationQuery(cls):
+        no = cls._objectSchema
+        return Update(
+            {
+                no.NOTIFICATION_TYPE: Parameter("notificationType"),
+                no.NOTIFICATION_DATA: Parameter("notificationData"),
+                no.MD5: Parameter("md5"),
+            },
+            Where=(no.NOTIFICATION_HOME_RESOURCE_ID == Parameter("homeID")).And(
+                no.NOTIFICATION_UID == Parameter("uid")),
+            Return=no.MODIFIED
+        )
+
+
+    @inlineCallbacks
+    def setData(self, uid, notificationtype, notificationdata, inserting=False):
+        """
+        Set the object resource data and update any cached metadata.
+        """
+
+        notificationtext = json.dumps(notificationdata)
+        self._notificationType = notificationtype
+        self._md5 = hashlib.md5(notificationtext).hexdigest()
+        self._size = len(notificationtext)
+        if inserting:
+            rows = yield self._newNotificationQuery.on(
+                self._txn, homeID=self._home._resourceID, uid=uid,
+                notificationType=json.dumps(self._notificationType),
+                notificationData=notificationtext, md5=self._md5
+            )
+            self._resourceID, self._created, self._modified = (
+                rows[0][0],
+                parseSQLTimestamp(rows[0][1]),
+                parseSQLTimestamp(rows[0][2]),
+            )
+            self._loadPropertyStore()
+        else:
+            rows = yield self._updateNotificationQuery.on(
+                self._txn, homeID=self._home._resourceID, uid=uid,
+                notificationType=json.dumps(self._notificationType),
+                notificationData=notificationtext, md5=self._md5
+            )
+            self._modified = parseSQLTimestamp(rows[0][0])
+        self._notificationData = notificationdata
+
+    _notificationDataFromID = Select(
+        [_objectSchema.NOTIFICATION_DATA], From=_objectSchema,
+        Where=_objectSchema.RESOURCE_ID == Parameter("resourceID"))
+
+
+    @inlineCallbacks
+    def notificationData(self):
+        if self._notificationData is None:
+            self._notificationData = (yield self._notificationDataFromID.on(self._txn, resourceID=self._resourceID))[0][0]
+            try:
+                self._notificationData = json.loads(self._notificationData)
+            except ValueError:
+                pass
+            if isinstance(self._notificationData, unicode):
+                self._notificationData = self._notificationData.encode("utf-8")
+        returnValue(self._notificationData)
+
+
+    def contentType(self):
+        """
+        The content type of NotificationObjects is text/xml.
+        """
+        return MimeType.fromString("text/xml")
+
+
+    def md5(self):
+        return self._md5
+
+
+    def size(self):
+        return self._size
+
+
+    def notificationType(self):
+        return self._notificationType
+
+
+    def created(self):
+        return datetimeMktime(self._created)
+
+
+    def modified(self):
+        return datetimeMktime(self._modified)

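For reviewers following the NotificationCollection/NotificationObject changes above, here is a minimal usage sketch of the API defined in that hunk. It is illustrative only: the home argument is assumed to be a notification collection already obtained from a store transaction (that lookup is not part of this hunk), and the type/data payloads are placeholder dictionaries rather than values taken from this changeset.

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def exercise_notifications(home):
        # Placeholder payloads; setData() serializes both to JSON before storage.
        notification_type = {"notification-type": "invite-notification"}
        notification_data = {"dtstamp": "20150310T000000Z", "uid": "uid1"}

        # Insert (or update) a notification; this bumps the revision and
        # queues change notifications via notifyChanged().
        obj = yield home.writeNotificationObject(
            "uid1", notification_type, notification_data)

        # Look the object up by UID or by its "<uid>.xml" resource name.
        same = yield home.notificationObjectWithName("uid1.xml")
        data = yield same.notificationData()   # JSON-decoded on read

        # The sync token is "<home resource id>_<revision>".
        token = yield home.syncToken()

        # Removal deletes the row, records a deleted revision and notifies again.
        yield home.removeNotificationObjectWithUID("uid1")

        returnValue((obj, data, token))
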
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/current-oracle-dialect.sql
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/current-oracle-dialect.sql	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/current-oracle-dialect.sql	2015-03-10 20:42:34 UTC (rev 14555)
@@ -29,9 +29,10 @@
 
 create table CALENDAR_HOME (
     "RESOURCE_ID" integer primary key,
-    "OWNER_UID" nvarchar2(255) unique,
+    "OWNER_UID" nvarchar2(255),
     "STATUS" integer default 0 not null,
-    "DATAVERSION" integer default 0 not null
+    "DATAVERSION" integer default 0 not null, 
+    unique ("OWNER_UID", "STATUS")
 );
 
 create table HOME_STATUS (
@@ -42,6 +43,8 @@
 insert into HOME_STATUS (DESCRIPTION, ID) values ('normal', 0);
 insert into HOME_STATUS (DESCRIPTION, ID) values ('external', 1);
 insert into HOME_STATUS (DESCRIPTION, ID) values ('purging', 2);
+insert into HOME_STATUS (DESCRIPTION, ID) values ('migrating', 3);
+insert into HOME_STATUS (DESCRIPTION, ID) values ('disabled', 4);
 create table CALENDAR (
     "RESOURCE_ID" integer primary key
 );
@@ -64,15 +67,35 @@
 create table CALENDAR_METADATA (
     "RESOURCE_ID" integer primary key references CALENDAR on delete cascade,
     "SUPPORTED_COMPONENTS" nvarchar2(255) default null,
+    "CHILD_TYPE" integer default 0 not null,
+    "TRASHED" timestamp default null,
+    "IS_IN_TRASH" integer default 0 not null,
     "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
     "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
 );
 
+create table CHILD_TYPE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CHILD_TYPE (DESCRIPTION, ID) values ('normal', 0);
+insert into CHILD_TYPE (DESCRIPTION, ID) values ('inbox', 1);
+insert into CHILD_TYPE (DESCRIPTION, ID) values ('trash', 2);
+create table CALENDAR_MIGRATION (
+    "CALENDAR_HOME_RESOURCE_ID" integer references CALENDAR_HOME on delete cascade,
+    "REMOTE_RESOURCE_ID" integer not null,
+    "LOCAL_RESOURCE_ID" integer references CALENDAR on delete cascade,
+    "LAST_SYNC_TOKEN" nvarchar2(255), 
+    primary key ("CALENDAR_HOME_RESOURCE_ID", "REMOTE_RESOURCE_ID")
+);
+
 create table NOTIFICATION_HOME (
     "RESOURCE_ID" integer primary key,
-    "OWNER_UID" nvarchar2(255) unique,
+    "OWNER_UID" nvarchar2(255),
     "STATUS" integer default 0 not null,
-    "DATAVERSION" integer default 0 not null
+    "DATAVERSION" integer default 0 not null, 
+    unique ("OWNER_UID", "STATUS")
 );
 
 create table NOTIFICATION (
@@ -90,11 +113,11 @@
 create table CALENDAR_BIND (
     "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
     "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
-    "EXTERNAL_ID" integer default null,
     "CALENDAR_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
+    "BIND_UID" nvarchar2(36) default null,
     "MESSAGE" nclob,
     "TRANSP" integer default 0 not null,
     "ALARM_VEVENT_TIMED" nclob default null,
@@ -154,6 +177,8 @@
     "SCHEDULE_ETAGS" nclob default null,
     "PRIVATE_COMMENTS" integer default 0 not null,
     "MD5" nchar(32),
+    "TRASHED" timestamp default null,
+    "ORIGINAL_COLLECTION" integer default null,
     "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
     "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
     "DATAVERSION" integer default 0 not null, 
@@ -208,6 +233,13 @@
     primary key ("TIME_RANGE_INSTANCE_ID", "USER_ID")
 );
 
+create table CALENDAR_OBJECT_MIGRATION (
+    "CALENDAR_HOME_RESOURCE_ID" integer references CALENDAR_HOME on delete cascade,
+    "REMOTE_RESOURCE_ID" integer not null,
+    "LOCAL_RESOURCE_ID" integer references CALENDAR_OBJECT on delete cascade, 
+    primary key ("CALENDAR_HOME_RESOURCE_ID", "REMOTE_RESOURCE_ID")
+);
+
 create table ATTACHMENT (
     "ATTACHMENT_ID" integer primary key,
     "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
@@ -228,6 +260,13 @@
     unique ("MANAGED_ID", "CALENDAR_OBJECT_RESOURCE_ID")
 );
 
+create table ATTACHMENT_MIGRATION (
+    "CALENDAR_HOME_RESOURCE_ID" integer references CALENDAR_HOME on delete cascade,
+    "REMOTE_RESOURCE_ID" integer not null,
+    "LOCAL_RESOURCE_ID" integer references ATTACHMENT on delete cascade, 
+    primary key ("CALENDAR_HOME_RESOURCE_ID", "REMOTE_RESOURCE_ID")
+);
+
 create table RESOURCE_PROPERTY (
     "RESOURCE_ID" integer not null,
     "NAME" nvarchar2(255),
@@ -239,9 +278,10 @@
 create table ADDRESSBOOK_HOME (
     "RESOURCE_ID" integer primary key,
     "ADDRESSBOOK_PROPERTY_STORE_ID" integer not null,
-    "OWNER_UID" nvarchar2(255) unique,
+    "OWNER_UID" nvarchar2(255),
     "STATUS" integer default 0 not null,
-    "DATAVERSION" integer default 0 not null
+    "DATAVERSION" integer default 0 not null, 
+    unique ("OWNER_UID", "STATUS")
 );
 
 create table ADDRESSBOOK_HOME_METADATA (
@@ -254,11 +294,11 @@
 create table SHARED_ADDRESSBOOK_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
-    "EXTERNAL_ID" integer default null,
     "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
+    "BIND_UID" nvarchar2(36) default null,
     "MESSAGE" nclob, 
     primary key ("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
     unique ("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
@@ -272,6 +312,8 @@
     "VCARD_UID" nvarchar2(255),
     "KIND" integer not null,
     "MD5" nchar(32),
+    "TRASHED" timestamp default null,
+    "IS_IN_TRASH" integer default 0 not null,
     "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
     "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
     "DATAVERSION" integer default 0 not null, 
@@ -308,11 +350,11 @@
 create table SHARED_GROUP_BIND (
     "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
     "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
-    "EXTERNAL_ID" integer default null,
     "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
     "BIND_MODE" integer not null,
     "BIND_STATUS" integer not null,
     "BIND_REVISION" integer default 0 not null,
+    "BIND_UID" nvarchar2(36) default null,
     "MESSAGE" nclob, 
     primary key ("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
     unique ("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
@@ -607,7 +649,7 @@
     "VALUE" nvarchar2(255)
 );
 
-insert into CALENDARSERVER (NAME, VALUE) values ('VERSION', '51');
+insert into CALENDARSERVER (NAME, VALUE) values ('VERSION', '53');
 insert into CALENDARSERVER (NAME, VALUE) values ('CALENDAR-DATAVERSION', '6');
 insert into CALENDARSERVER (NAME, VALUE) values ('ADDRESSBOOK-DATAVERSION', '2');
 insert into CALENDARSERVER (NAME, VALUE) values ('NOTIFICATION-DATAVERSION', '1');
@@ -624,6 +666,10 @@
     DEFAULT_POLLS
 );
 
+create index CALENDAR_MIGRATION_LO_0525c72b on CALENDAR_MIGRATION (
+    LOCAL_RESOURCE_ID
+);
+
 create index NOTIFICATION_NOTIFICA_f891f5f9 on NOTIFICATION (
     NOTIFICATION_HOME_RESOURCE_ID
 );
@@ -659,6 +705,15 @@
     CALENDAR_OBJECT_RESOURCE_ID
 );
 
+create index CALENDAR_OBJECT_MIGRA_0502cbef on CALENDAR_OBJECT_MIGRATION (
+    CALENDAR_HOME_RESOURCE_ID,
+    LOCAL_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_MIGRA_3577efd9 on CALENDAR_OBJECT_MIGRATION (
+    LOCAL_RESOURCE_ID
+);
+
 create index ATTACHMENT_CALENDAR_H_0078845c on ATTACHMENT (
     CALENDAR_HOME_RESOURCE_ID
 );
@@ -671,6 +726,15 @@
     CALENDAR_OBJECT_RESOURCE_ID
 );
 
+create index ATTACHMENT_MIGRATION__804bf85e on ATTACHMENT_MIGRATION (
+    CALENDAR_HOME_RESOURCE_ID,
+    LOCAL_RESOURCE_ID
+);
+
+create index ATTACHMENT_MIGRATION__816947fe on ATTACHMENT_MIGRATION (
+    LOCAL_RESOURCE_ID
+);
+
 create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
     OWNER_HOME_RESOURCE_ID
 );

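A schema change in the hunk above that is easy to miss: the plain unique constraint on OWNER_UID in CALENDAR_HOME, NOTIFICATION_HOME and ADDRESSBOOK_HOME becomes unique ("OWNER_UID", "STATUS"), and HOME_STATUS gains 'migrating' (3) and 'disabled' (4). The net effect is that one owner UID may now own several home rows as long as each carries a different status. The sketch below only restates the enumeration and the new uniqueness rule in Python for illustration; the constant names the store layer uses for the two new statuses do not appear in this excerpt and are assumed, not quoted.

    # HOME_STATUS values as inserted by the DDL above.
    HOME_STATUS = {
        "normal": 0,
        "external": 1,
        "purging": 2,
        "migrating": 3,   # new in this schema revision
        "disabled": 4,    # new in this schema revision
    }

    def home_row_allowed(existing_rows, owner_uid, status):
        # New uniqueness rule: at most one home row per (OWNER_UID, STATUS)
        # pair, instead of at most one row per OWNER_UID.
        taken = set((row["OWNER_UID"], row["STATUS"]) for row in existing_rows)
        return (owner_uid, status) not in taken
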
Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/current.sql
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/current.sql	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/current.sql	2015-03-10 20:42:34 UTC (rev 14555)
@@ -70,9 +70,11 @@
 
 create table CALENDAR_HOME (
   RESOURCE_ID      integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
-  OWNER_UID        varchar(255) not null unique,                                -- implicit index
+  OWNER_UID        varchar(255) not null,                		                -- implicit index
   STATUS           integer      default 0 not null,                             -- enum HOME_STATUS
-  DATAVERSION      integer      default 0 not null
+  DATAVERSION      integer      default 0 not null,
+  
+  unique (OWNER_UID, STATUS)	-- implicit index
 );
 
 -- Enumeration of statuses
@@ -85,6 +87,8 @@
 insert into HOME_STATUS values (0, 'normal' );
 insert into HOME_STATUS values (1, 'external');
 insert into HOME_STATUS values (2, 'purging');
+insert into HOME_STATUS values (3, 'migrating');
+insert into HOME_STATUS values (4, 'disabled');
 
 
 --------------
@@ -130,24 +134,53 @@
 create table CALENDAR_METADATA (
   RESOURCE_ID           integer      primary key references CALENDAR on delete cascade, -- implicit index
   SUPPORTED_COMPONENTS  varchar(255) default null,
+  CHILD_TYPE            integer      default 0 not null,                             	-- enum CHILD_TYPE
+  TRASHED               timestamp    default null,
+  IS_IN_TRASH           boolean      default false not null, -- collection is in the trash
   CREATED               timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
-  MODIFIED              timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
-  CHILD_TYPE            varchar(10)  default null, -- None, inbox, trash (FIXME: convert this to enumeration)
-  TRASHED               timestamp    default null,
-  IS_IN_TRASH           boolean      default false not null -- collection is in the trash
+  MODIFIED              timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
 
+-- Enumeration of child type
+
+create table CHILD_TYPE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
 );
 
+insert into CHILD_TYPE values (0, 'normal');
+insert into CHILD_TYPE values (1, 'inbox');
+insert into CHILD_TYPE values (2, 'trash');
 
+
+------------------------
+-- Calendar Migration --
+------------------------
+
+create table CALENDAR_MIGRATION (
+  CALENDAR_HOME_RESOURCE_ID		integer references CALENDAR_HOME on delete cascade,
+  REMOTE_RESOURCE_ID			integer not null,
+  LOCAL_RESOURCE_ID				integer	references CALENDAR on delete cascade,
+  LAST_SYNC_TOKEN				varchar(255),
+   
+  primary key (CALENDAR_HOME_RESOURCE_ID, REMOTE_RESOURCE_ID) -- implicit index
+);
+
+create index CALENDAR_MIGRATION_LOCAL_RESOURCE_ID on
+  CALENDAR_MIGRATION(LOCAL_RESOURCE_ID);
+
+
 ---------------------------
 -- Sharing Notifications --
 ---------------------------
 
 create table NOTIFICATION_HOME (
   RESOURCE_ID integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
-  OWNER_UID   varchar(255) not null unique,                                -- implicit index
+  OWNER_UID   varchar(255) not null,	                                   -- implicit index
   STATUS      integer      default 0 not null,                             -- enum HOME_STATUS
-  DATAVERSION integer      default 0 not null
+  DATAVERSION integer      default 0 not null,
+    
+  unique (OWNER_UID, STATUS)	-- implicit index
 );
 
 create table NOTIFICATION (
@@ -176,11 +209,11 @@
 create table CALENDAR_BIND (
   CALENDAR_HOME_RESOURCE_ID integer      not null references CALENDAR_HOME,
   CALENDAR_RESOURCE_ID      integer      not null references CALENDAR on delete cascade,
-  EXTERNAL_ID               integer      default null,
   CALENDAR_RESOURCE_NAME    varchar(255) not null,
   BIND_MODE                 integer      not null, -- enum CALENDAR_BIND_MODE
   BIND_STATUS               integer      not null, -- enum CALENDAR_BIND_STATUS
   BIND_REVISION             integer      default 0 not null,
+  BIND_UID                  varchar(36)  default null,
   MESSAGE                   text,
   TRANSP                    integer      default 0 not null, -- enum CALENDAR_TRANSP
   ALARM_VEVENT_TIMED        text         default null,
@@ -259,11 +292,11 @@
   SCHEDULE_ETAGS       text         default null,
   PRIVATE_COMMENTS     boolean      default false not null,
   MD5                  char(32)     not null,
+  TRASHED              timestamp    default null,
+  ORIGINAL_COLLECTION  integer      default null, -- calendar_resource_id prior to trash
   CREATED              timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
   MODIFIED             timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
   DATAVERSION          integer      default 0 not null,
-  TRASHED              timestamp    default null,
-  ORIGINAL_COLLECTION  integer      default null, -- calendar_resource_id prior to trash
 
   unique (CALENDAR_RESOURCE_ID, RESOURCE_NAME) -- implicit index
 
@@ -369,6 +402,24 @@
 );
 
 
+-------------------------------
+-- Calendar Object Migration --
+-------------------------------
+
+create table CALENDAR_OBJECT_MIGRATION (
+  CALENDAR_HOME_RESOURCE_ID		integer references CALENDAR_HOME on delete cascade,
+  REMOTE_RESOURCE_ID			integer not null,
+  LOCAL_RESOURCE_ID				integer	references CALENDAR_OBJECT on delete cascade,
+   
+  primary key (CALENDAR_HOME_RESOURCE_ID, REMOTE_RESOURCE_ID) -- implicit index
+);
+
+create index CALENDAR_OBJECT_MIGRATION_HOME_LOCAL on
+  CALENDAR_OBJECT_MIGRATION(CALENDAR_HOME_RESOURCE_ID, LOCAL_RESOURCE_ID);
+create index CALENDAR_OBJECT_MIGRATION_LOCAL_RESOURCE_ID on
+  CALENDAR_OBJECT_MIGRATION(LOCAL_RESOURCE_ID);
+
+
 ----------------
 -- Attachment --
 ----------------
@@ -406,6 +457,24 @@
 create index ATTACHMENT_CALENDAR_OBJECT_CALENDAR_OBJECT_RESOURCE_ID on
   ATTACHMENT_CALENDAR_OBJECT(CALENDAR_OBJECT_RESOURCE_ID);
 
+-----------------------------------
+-- Calendar Attachment Migration --
+-----------------------------------
+
+create table ATTACHMENT_MIGRATION (
+  CALENDAR_HOME_RESOURCE_ID		integer references CALENDAR_HOME on delete cascade,
+  REMOTE_RESOURCE_ID			integer not null,
+  LOCAL_RESOURCE_ID				integer	references ATTACHMENT on delete cascade,
+   
+  primary key (CALENDAR_HOME_RESOURCE_ID, REMOTE_RESOURCE_ID) -- implicit index
+);
+
+create index ATTACHMENT_MIGRATION_HOME_LOCAL on
+  ATTACHMENT_MIGRATION(CALENDAR_HOME_RESOURCE_ID, LOCAL_RESOURCE_ID);
+create index ATTACHMENT_MIGRATION_LOCAL_RESOURCE_ID on
+  ATTACHMENT_MIGRATION(LOCAL_RESOURCE_ID);
+
+
 -----------------------
 -- Resource Property --
 -----------------------
@@ -427,9 +496,11 @@
 create table ADDRESSBOOK_HOME (
   RESOURCE_ID                   integer         primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
   ADDRESSBOOK_PROPERTY_STORE_ID integer         default nextval('RESOURCE_ID_SEQ') not null,    -- implicit index
-  OWNER_UID                     varchar(255)    not null unique,                                -- implicit index
+  OWNER_UID                     varchar(255)    not null,
   STATUS                        integer         default 0 not null,                             -- enum HOME_STATUS
-  DATAVERSION                   integer         default 0 not null
+  DATAVERSION                   integer         default 0 not null,
+    
+  unique (OWNER_UID, STATUS)	-- implicit index
 );
 
 
@@ -454,11 +525,11 @@
 create table SHARED_ADDRESSBOOK_BIND (
   ADDRESSBOOK_HOME_RESOURCE_ID          integer         not null references ADDRESSBOOK_HOME,
   OWNER_HOME_RESOURCE_ID                integer         not null references ADDRESSBOOK_HOME on delete cascade,
-  EXTERNAL_ID                           integer         default null,
   ADDRESSBOOK_RESOURCE_NAME             varchar(255)    not null,
   BIND_MODE                             integer         not null, -- enum CALENDAR_BIND_MODE
   BIND_STATUS                           integer         not null, -- enum CALENDAR_BIND_STATUS
   BIND_REVISION                         integer         default 0 not null,
+  BIND_UID                              varchar(36)     default null,
   MESSAGE                               text,                     -- FIXME: xml?
 
   primary key (ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_HOME_RESOURCE_ID), -- implicit index
@@ -481,11 +552,11 @@
   VCARD_UID                     varchar(255)    not null,
   KIND                          integer         not null,  -- enum ADDRESSBOOK_OBJECT_KIND
   MD5                           char(32)        not null,
+  TRASHED                       timestamp       default null,
+  IS_IN_TRASH                   boolean         default false not null,
   CREATED                       timestamp       default timezone('UTC', CURRENT_TIMESTAMP),
   MODIFIED                      timestamp       default timezone('UTC', CURRENT_TIMESTAMP),
   DATAVERSION                   integer         default 0 not null,
-  TRASHED                       timestamp       default null,
-  IS_IN_TRASH                   boolean         default false not null,
 
   unique (ADDRESSBOOK_HOME_RESOURCE_ID, RESOURCE_NAME), -- implicit index
   unique (ADDRESSBOOK_HOME_RESOURCE_ID, VCARD_UID)      -- implicit index
@@ -557,11 +628,11 @@
 create table SHARED_GROUP_BIND (
   ADDRESSBOOK_HOME_RESOURCE_ID      integer      not null references ADDRESSBOOK_HOME,
   GROUP_RESOURCE_ID                 integer      not null references ADDRESSBOOK_OBJECT on delete cascade,
-  EXTERNAL_ID                       integer      default null,
   GROUP_ADDRESSBOOK_NAME            varchar(255) not null,
   BIND_MODE                         integer      not null, -- enum CALENDAR_BIND_MODE
   BIND_STATUS                       integer      not null, -- enum CALENDAR_BIND_STATUS
   BIND_REVISION                     integer      default 0 not null,
+  BIND_UID                          varchar(36)  default null,
   MESSAGE                           text,                  -- FIXME: xml?
 
   primary key (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_RESOURCE_ID), -- implicit index
@@ -881,7 +952,7 @@
   DELEGATOR                     varchar(255) not null,
   GROUP_ID                      integer      not null references GROUPS on delete cascade,
   READ_WRITE                    integer      not null, -- 1 = ReadWrite, 0 = ReadOnly
-  IS_EXTERNAL                   integer      not null, -- 1 = ReadWrite, 0 = ReadOnly
+  IS_EXTERNAL                   integer      not null, -- 1 = External, 0 = Internal
 
   primary key (DELEGATOR, READ_WRITE, GROUP_ID)
 );
@@ -1158,7 +1229,7 @@
   VALUE                         varchar(255)
 );
 
-insert into CALENDARSERVER values ('VERSION', '52');
+insert into CALENDARSERVER values ('VERSION', '53');
 insert into CALENDARSERVER values ('CALENDAR-DATAVERSION', '6');
 insert into CALENDARSERVER values ('ADDRESSBOOK-DATAVERSION', '2');
 insert into CALENDARSERVER values ('NOTIFICATION-DATAVERSION', '1');

Added: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/old/oracle-dialect/v52.sql
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/old/oracle-dialect/v52.sql	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/old/oracle-dialect/v52.sql	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,988 @@
+create sequence RESOURCE_ID_SEQ;
+create sequence JOB_SEQ;
+create sequence INSTANCE_ID_SEQ;
+create sequence ATTACHMENT_ID_SEQ;
+create sequence REVISION_SEQ;
+create sequence WORKITEM_SEQ;
+create table NODE_INFO (
+    "HOSTNAME" nvarchar2(255),
+    "PID" integer not null,
+    "PORT" integer not null,
+    "TIME" timestamp default CURRENT_TIMESTAMP at time zone 'UTC' not null, 
+    primary key ("HOSTNAME", "PORT")
+);
+
+create table NAMED_LOCK (
+    "LOCK_NAME" nvarchar2(255) primary key
+);
+
+create table JOB (
+    "JOB_ID" integer primary key,
+    "WORK_TYPE" nvarchar2(255),
+    "PRIORITY" integer default 0,
+    "WEIGHT" integer default 0,
+    "NOT_BEFORE" timestamp not null,
+    "ASSIGNED" timestamp default null,
+    "OVERDUE" timestamp default null,
+    "FAILED" integer default 0
+);
+
+create table CALENDAR_HOME (
+    "RESOURCE_ID" integer primary key,
+    "OWNER_UID" nvarchar2(255),
+    "STATUS" integer default 0 not null,
+    "DATAVERSION" integer default 0 not null, 
+    unique ("OWNER_UID", "STATUS")
+);
+
+create table HOME_STATUS (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into HOME_STATUS (DESCRIPTION, ID) values ('normal', 0);
+insert into HOME_STATUS (DESCRIPTION, ID) values ('external', 1);
+insert into HOME_STATUS (DESCRIPTION, ID) values ('purging', 2);
+insert into HOME_STATUS (DESCRIPTION, ID) values ('migrating', 3);
+insert into HOME_STATUS (DESCRIPTION, ID) values ('disabled', 4);
+create table CALENDAR (
+    "RESOURCE_ID" integer primary key
+);
+
+create table CALENDAR_HOME_METADATA (
+    "RESOURCE_ID" integer primary key references CALENDAR_HOME on delete cascade,
+    "QUOTA_USED_BYTES" integer default 0 not null,
+    "DEFAULT_EVENTS" integer default null references CALENDAR on delete set null,
+    "DEFAULT_TASKS" integer default null references CALENDAR on delete set null,
+    "DEFAULT_POLLS" integer default null references CALENDAR on delete set null,
+    "ALARM_VEVENT_TIMED" nclob default null,
+    "ALARM_VEVENT_ALLDAY" nclob default null,
+    "ALARM_VTODO_TIMED" nclob default null,
+    "ALARM_VTODO_ALLDAY" nclob default null,
+    "AVAILABILITY" nclob default null,
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table CALENDAR_METADATA (
+    "RESOURCE_ID" integer primary key references CALENDAR on delete cascade,
+    "SUPPORTED_COMPONENTS" nvarchar2(255) default null,
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table CALENDAR_MIGRATION (
+    "CALENDAR_HOME_RESOURCE_ID" integer references CALENDAR_HOME on delete cascade,
+    "REMOTE_RESOURCE_ID" integer not null,
+    "LOCAL_RESOURCE_ID" integer references CALENDAR on delete cascade,
+    "LAST_SYNC_TOKEN" nvarchar2(255), 
+    primary key ("CALENDAR_HOME_RESOURCE_ID", "REMOTE_RESOURCE_ID")
+);
+
+create table NOTIFICATION_HOME (
+    "RESOURCE_ID" integer primary key,
+    "OWNER_UID" nvarchar2(255),
+    "STATUS" integer default 0 not null,
+    "DATAVERSION" integer default 0 not null, 
+    unique ("OWNER_UID", "STATUS")
+);
+
+create table NOTIFICATION (
+    "RESOURCE_ID" integer primary key,
+    "NOTIFICATION_HOME_RESOURCE_ID" integer not null references NOTIFICATION_HOME,
+    "NOTIFICATION_UID" nvarchar2(255),
+    "NOTIFICATION_TYPE" nvarchar2(255),
+    "NOTIFICATION_DATA" nclob,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    unique ("NOTIFICATION_UID", "NOTIFICATION_HOME_RESOURCE_ID")
+);
+
+create table CALENDAR_BIND (
+    "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+    "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+    "CALENDAR_RESOURCE_NAME" nvarchar2(255),
+    "BIND_MODE" integer not null,
+    "BIND_STATUS" integer not null,
+    "BIND_REVISION" integer default 0 not null,
+    "BIND_UID" nvarchar2(36) default null,
+    "MESSAGE" nclob,
+    "TRANSP" integer default 0 not null,
+    "ALARM_VEVENT_TIMED" nclob default null,
+    "ALARM_VEVENT_ALLDAY" nclob default null,
+    "ALARM_VTODO_TIMED" nclob default null,
+    "ALARM_VTODO_ALLDAY" nclob default null,
+    "TIMEZONE" nclob default null, 
+    primary key ("CALENDAR_HOME_RESOURCE_ID", "CALENDAR_RESOURCE_ID"), 
+    unique ("CALENDAR_HOME_RESOURCE_ID", "CALENDAR_RESOURCE_NAME")
+);
+
+create table CALENDAR_BIND_MODE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('own', 0);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('read', 1);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('write', 2);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('direct', 3);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('indirect', 4);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('group', 5);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('group_read', 6);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('group_write', 7);
+create table CALENDAR_BIND_STATUS (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('invited', 0);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('accepted', 1);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('declined', 2);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('invalid', 3);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('deleted', 4);
+create table CALENDAR_TRANSP (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_TRANSP (DESCRIPTION, ID) values ('opaque', 0);
+insert into CALENDAR_TRANSP (DESCRIPTION, ID) values ('transparent', 1);
+create table CALENDAR_OBJECT (
+    "RESOURCE_ID" integer primary key,
+    "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+    "RESOURCE_NAME" nvarchar2(255),
+    "ICALENDAR_TEXT" nclob,
+    "ICALENDAR_UID" nvarchar2(255),
+    "ICALENDAR_TYPE" nvarchar2(255),
+    "ATTACHMENTS_MODE" integer default 0 not null,
+    "DROPBOX_ID" nvarchar2(255),
+    "ORGANIZER" nvarchar2(255),
+    "RECURRANCE_MIN" date,
+    "RECURRANCE_MAX" date,
+    "ACCESS" integer default 0 not null,
+    "SCHEDULE_OBJECT" integer default 0,
+    "SCHEDULE_TAG" nvarchar2(36) default null,
+    "SCHEDULE_ETAGS" nclob default null,
+    "PRIVATE_COMMENTS" integer default 0 not null,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "DATAVERSION" integer default 0 not null, 
+    unique ("CALENDAR_RESOURCE_ID", "RESOURCE_NAME")
+);
+
+create table CALENDAR_OBJ_ATTACHMENTS_MODE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_OBJ_ATTACHMENTS_MODE (DESCRIPTION, ID) values ('none', 0);
+insert into CALENDAR_OBJ_ATTACHMENTS_MODE (DESCRIPTION, ID) values ('read', 1);
+insert into CALENDAR_OBJ_ATTACHMENTS_MODE (DESCRIPTION, ID) values ('write', 2);
+create table CALENDAR_ACCESS_TYPE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(32) unique
+);
+
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('', 0);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('public', 1);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('private', 2);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('confidential', 3);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('restricted', 4);
+create table TIME_RANGE (
+    "INSTANCE_ID" integer primary key,
+    "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+    "CALENDAR_OBJECT_RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade,
+    "FLOATING" integer not null,
+    "START_DATE" timestamp not null,
+    "END_DATE" timestamp not null,
+    "FBTYPE" integer not null,
+    "TRANSPARENT" integer not null
+);
+
+create table FREE_BUSY_TYPE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('unknown', 0);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('free', 1);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy', 2);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy-unavailable', 3);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy-tentative', 4);
+create table PERUSER (
+    "TIME_RANGE_INSTANCE_ID" integer not null references TIME_RANGE on delete cascade,
+    "USER_ID" nvarchar2(255),
+    "TRANSPARENT" integer not null,
+    "ADJUSTED_START_DATE" timestamp default null,
+    "ADJUSTED_END_DATE" timestamp default null, 
+    primary key ("TIME_RANGE_INSTANCE_ID", "USER_ID")
+);
+
+create table CALENDAR_OBJECT_MIGRATION (
+    "CALENDAR_HOME_RESOURCE_ID" integer references CALENDAR_HOME on delete cascade,
+    "REMOTE_RESOURCE_ID" integer not null,
+    "LOCAL_RESOURCE_ID" integer references CALENDAR_OBJECT on delete cascade, 
+    primary key ("CALENDAR_HOME_RESOURCE_ID", "REMOTE_RESOURCE_ID")
+);
+
+create table ATTACHMENT (
+    "ATTACHMENT_ID" integer primary key,
+    "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+    "DROPBOX_ID" nvarchar2(255),
+    "CONTENT_TYPE" nvarchar2(255),
+    "SIZE" integer not null,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "PATH" nvarchar2(1024)
+);
+
+create table ATTACHMENT_CALENDAR_OBJECT (
+    "ATTACHMENT_ID" integer not null references ATTACHMENT on delete cascade,
+    "MANAGED_ID" nvarchar2(255),
+    "CALENDAR_OBJECT_RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade, 
+    primary key ("ATTACHMENT_ID", "CALENDAR_OBJECT_RESOURCE_ID"), 
+    unique ("MANAGED_ID", "CALENDAR_OBJECT_RESOURCE_ID")
+);
+
+create table ATTACHMENT_MIGRATION (
+    "CALENDAR_HOME_RESOURCE_ID" integer references CALENDAR_HOME on delete cascade,
+    "REMOTE_RESOURCE_ID" integer not null,
+    "LOCAL_RESOURCE_ID" integer references ATTACHMENT on delete cascade, 
+    primary key ("CALENDAR_HOME_RESOURCE_ID", "REMOTE_RESOURCE_ID")
+);
+
+create table RESOURCE_PROPERTY (
+    "RESOURCE_ID" integer not null,
+    "NAME" nvarchar2(255),
+    "VALUE" nclob,
+    "VIEWER_UID" nvarchar2(255), 
+    primary key ("RESOURCE_ID", "NAME", "VIEWER_UID")
+);
+
+create table ADDRESSBOOK_HOME (
+    "RESOURCE_ID" integer primary key,
+    "ADDRESSBOOK_PROPERTY_STORE_ID" integer not null,
+    "OWNER_UID" nvarchar2(255),
+    "STATUS" integer default 0 not null,
+    "DATAVERSION" integer default 0 not null, 
+    unique ("OWNER_UID", "STATUS")
+);
+
+create table ADDRESSBOOK_HOME_METADATA (
+    "RESOURCE_ID" integer primary key references ADDRESSBOOK_HOME on delete cascade,
+    "QUOTA_USED_BYTES" integer default 0 not null,
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table SHARED_ADDRESSBOOK_BIND (
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+    "BIND_MODE" integer not null,
+    "BIND_STATUS" integer not null,
+    "BIND_REVISION" integer default 0 not null,
+    "BIND_UID" nvarchar2(36) default null,
+    "MESSAGE" nclob, 
+    primary key ("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID"), 
+    unique ("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
+);
+
+create table ADDRESSBOOK_OBJECT (
+    "RESOURCE_ID" integer primary key,
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "RESOURCE_NAME" nvarchar2(255),
+    "VCARD_TEXT" nclob,
+    "VCARD_UID" nvarchar2(255),
+    "KIND" integer not null,
+    "MD5" nchar(32),
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "DATAVERSION" integer default 0 not null, 
+    unique ("ADDRESSBOOK_HOME_RESOURCE_ID", "RESOURCE_NAME"), 
+    unique ("ADDRESSBOOK_HOME_RESOURCE_ID", "VCARD_UID")
+);
+
+create table ADDRESSBOOK_OBJECT_KIND (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('person', 0);
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('group', 1);
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('resource', 2);
+insert into ADDRESSBOOK_OBJECT_KIND (DESCRIPTION, ID) values ('location', 3);
+create table ABO_MEMBERS (
+    "GROUP_ID" integer not null,
+    "ADDRESSBOOK_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "MEMBER_ID" integer not null,
+    "REVISION" integer not null,
+    "REMOVED" integer default 0 not null,
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    primary key ("GROUP_ID", "MEMBER_ID", "REVISION")
+);
+
+create table ABO_FOREIGN_MEMBERS (
+    "GROUP_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
+    "ADDRESSBOOK_ID" integer not null references ADDRESSBOOK_HOME on delete cascade,
+    "MEMBER_ADDRESS" nvarchar2(255), 
+    primary key ("GROUP_ID", "MEMBER_ADDRESS")
+);
+
+create table SHARED_GROUP_BIND (
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+    "GROUP_RESOURCE_ID" integer not null references ADDRESSBOOK_OBJECT on delete cascade,
+    "GROUP_ADDRESSBOOK_NAME" nvarchar2(255),
+    "BIND_MODE" integer not null,
+    "BIND_STATUS" integer not null,
+    "BIND_REVISION" integer default 0 not null,
+    "BIND_UID" nvarchar2(36) default null,
+    "MESSAGE" nclob, 
+    primary key ("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_RESOURCE_ID"), 
+    unique ("ADDRESSBOOK_HOME_RESOURCE_ID", "GROUP_ADDRESSBOOK_NAME")
+);
+
+create table CALENDAR_OBJECT_REVISIONS (
+    "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+    "CALENDAR_RESOURCE_ID" integer references CALENDAR,
+    "CALENDAR_NAME" nvarchar2(255) default null,
+    "RESOURCE_NAME" nvarchar2(255),
+    "REVISION" integer not null,
+    "DELETED" integer not null,
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    unique ("CALENDAR_HOME_RESOURCE_ID", "CALENDAR_RESOURCE_ID", "CALENDAR_NAME", "RESOURCE_NAME")
+);
+
+create table ADDRESSBOOK_OBJECT_REVISIONS (
+    "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+    "OWNER_HOME_RESOURCE_ID" integer references ADDRESSBOOK_HOME,
+    "ADDRESSBOOK_NAME" nvarchar2(255) default null,
+    "OBJECT_RESOURCE_ID" integer default 0,
+    "RESOURCE_NAME" nvarchar2(255),
+    "REVISION" integer not null,
+    "DELETED" integer not null,
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    unique ("ADDRESSBOOK_HOME_RESOURCE_ID", "OWNER_HOME_RESOURCE_ID", "ADDRESSBOOK_NAME", "RESOURCE_NAME")
+);
+
+create table NOTIFICATION_OBJECT_REVISIONS (
+    "NOTIFICATION_HOME_RESOURCE_ID" integer not null references NOTIFICATION_HOME on delete cascade,
+    "RESOURCE_NAME" nvarchar2(255),
+    "REVISION" integer not null,
+    "DELETED" integer not null,
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    unique ("NOTIFICATION_HOME_RESOURCE_ID", "RESOURCE_NAME")
+);
+
+create table APN_SUBSCRIPTIONS (
+    "TOKEN" nvarchar2(255),
+    "RESOURCE_KEY" nvarchar2(255),
+    "MODIFIED" integer not null,
+    "SUBSCRIBER_GUID" nvarchar2(255),
+    "USER_AGENT" nvarchar2(255) default null,
+    "IP_ADDR" nvarchar2(255) default null, 
+    primary key ("TOKEN", "RESOURCE_KEY")
+);
+
+create table IMIP_TOKENS (
+    "TOKEN" nvarchar2(255),
+    "ORGANIZER" nvarchar2(255),
+    "ATTENDEE" nvarchar2(255),
+    "ICALUID" nvarchar2(255),
+    "ACCESSED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC', 
+    primary key ("ORGANIZER", "ATTENDEE", "ICALUID")
+);
+
+create table IMIP_INVITATION_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "FROM_ADDR" nvarchar2(255),
+    "TO_ADDR" nvarchar2(255),
+    "ICALENDAR_TEXT" nclob
+);
+
+create table IMIP_POLLING_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB
+);
+
+create table IMIP_REPLY_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "ORGANIZER" nvarchar2(255),
+    "ATTENDEE" nvarchar2(255),
+    "ICALENDAR_TEXT" nclob
+);
+
+create table PUSH_NOTIFICATION_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "PUSH_ID" nvarchar2(255),
+    "PUSH_PRIORITY" integer not null
+);
+
+create table GROUP_CACHER_POLLING_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB
+);
+
+create table GROUP_REFRESH_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "GROUP_UID" nvarchar2(255)
+);
+
+create table GROUP_DELEGATE_CHANGES_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "DELEGATOR_UID" nvarchar2(255),
+    "READ_DELEGATE_UID" nvarchar2(255),
+    "WRITE_DELEGATE_UID" nvarchar2(255)
+);
+
+create table GROUPS (
+    "GROUP_ID" integer primary key,
+    "NAME" nvarchar2(255),
+    "GROUP_UID" nvarchar2(255) unique,
+    "MEMBERSHIP_HASH" nvarchar2(255),
+    "EXTANT" integer default 1,
+    "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+    "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table GROUP_MEMBERSHIP (
+    "GROUP_ID" integer not null references GROUPS on delete cascade,
+    "MEMBER_UID" nvarchar2(255), 
+    primary key ("GROUP_ID", "MEMBER_UID")
+);
+
+create table GROUP_ATTENDEE_RECONCILE_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade,
+    "GROUP_ID" integer not null references GROUPS on delete cascade
+);
+
+create table GROUP_ATTENDEE (
+    "GROUP_ID" integer not null references GROUPS on delete cascade,
+    "RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade,
+    "MEMBERSHIP_HASH" nvarchar2(255), 
+    primary key ("GROUP_ID", "RESOURCE_ID")
+);
+
+create table GROUP_SHAREE_RECONCILE_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "CALENDAR_ID" integer not null references CALENDAR on delete cascade,
+    "GROUP_ID" integer not null references GROUPS on delete cascade
+);
+
+create table GROUP_SHAREE (
+    "GROUP_ID" integer not null references GROUPS on delete cascade,
+    "CALENDAR_ID" integer not null references CALENDAR on delete cascade,
+    "GROUP_BIND_MODE" integer not null,
+    "MEMBERSHIP_HASH" nvarchar2(255), 
+    primary key ("GROUP_ID", "CALENDAR_ID")
+);
+
+create table DELEGATES (
+    "DELEGATOR" nvarchar2(255),
+    "DELEGATE" nvarchar2(255),
+    "READ_WRITE" integer not null, 
+    primary key ("DELEGATOR", "READ_WRITE", "DELEGATE")
+);
+
+create table DELEGATE_GROUPS (
+    "DELEGATOR" nvarchar2(255),
+    "GROUP_ID" integer not null references GROUPS on delete cascade,
+    "READ_WRITE" integer not null,
+    "IS_EXTERNAL" integer not null, 
+    primary key ("DELEGATOR", "READ_WRITE", "GROUP_ID")
+);
+
+create table EXTERNAL_DELEGATE_GROUPS (
+    "DELEGATOR" nvarchar2(255) primary key,
+    "GROUP_UID_READ" nvarchar2(255),
+    "GROUP_UID_WRITE" nvarchar2(255)
+);
+
+create table CALENDAR_OBJECT_SPLITTER_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade
+);
+
+create table CALENDAR_OBJECT_UPGRADE_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade
+);
+
+create table FIND_MIN_VALID_REVISION_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB
+);
+
+create table REVISION_CLEANUP_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB
+);
+
+create table INBOX_CLEANUP_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB
+);
+
+create table CLEANUP_ONE_INBOX_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "HOME_ID" integer not null unique references CALENDAR_HOME on delete cascade
+);
+
+create table SCHEDULE_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "ICALENDAR_UID" nvarchar2(255),
+    "WORK_TYPE" nvarchar2(255)
+);
+
+create table SCHEDULE_REFRESH_WORK (
+    "WORK_ID" integer primary key references SCHEDULE_WORK on delete cascade,
+    "HOME_RESOURCE_ID" integer not null references CALENDAR_HOME on delete cascade,
+    "RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade,
+    "ATTENDEE_COUNT" integer
+);
+
+create table SCHEDULE_REFRESH_ATTENDEES (
+    "RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade,
+    "ATTENDEE" nvarchar2(255), 
+    primary key ("RESOURCE_ID", "ATTENDEE")
+);
+
+create table SCHEDULE_AUTO_REPLY_WORK (
+    "WORK_ID" integer primary key references SCHEDULE_WORK on delete cascade,
+    "HOME_RESOURCE_ID" integer not null references CALENDAR_HOME on delete cascade,
+    "RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade,
+    "PARTSTAT" nvarchar2(255)
+);
+
+create table SCHEDULE_ORGANIZER_WORK (
+    "WORK_ID" integer primary key references SCHEDULE_WORK on delete cascade,
+    "SCHEDULE_ACTION" integer not null,
+    "HOME_RESOURCE_ID" integer not null references CALENDAR_HOME on delete cascade,
+    "RESOURCE_ID" integer,
+    "ICALENDAR_TEXT_OLD" nclob,
+    "ICALENDAR_TEXT_NEW" nclob,
+    "ATTENDEE_COUNT" integer,
+    "SMART_MERGE" integer
+);
+
+create table SCHEDULE_ACTION (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into SCHEDULE_ACTION (DESCRIPTION, ID) values ('create', 0);
+insert into SCHEDULE_ACTION (DESCRIPTION, ID) values ('modify', 1);
+insert into SCHEDULE_ACTION (DESCRIPTION, ID) values ('modify-cancelled', 2);
+insert into SCHEDULE_ACTION (DESCRIPTION, ID) values ('remove', 3);
+create table SCHEDULE_ORGANIZER_SEND_WORK (
+    "WORK_ID" integer primary key references SCHEDULE_WORK on delete cascade,
+    "SCHEDULE_ACTION" integer not null,
+    "HOME_RESOURCE_ID" integer not null references CALENDAR_HOME on delete cascade,
+    "RESOURCE_ID" integer,
+    "ATTENDEE" nvarchar2(255),
+    "ITIP_MSG" nclob,
+    "NO_REFRESH" integer
+);
+
+create table SCHEDULE_REPLY_WORK (
+    "WORK_ID" integer primary key references SCHEDULE_WORK on delete cascade,
+    "HOME_RESOURCE_ID" integer not null references CALENDAR_HOME on delete cascade,
+    "RESOURCE_ID" integer,
+    "ITIP_MSG" nclob
+);
+
+create table PRINCIPAL_PURGE_POLLING_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB
+);
+
+create table PRINCIPAL_PURGE_CHECK_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "UID" nvarchar2(255)
+);
+
+create table PRINCIPAL_PURGE_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "UID" nvarchar2(255)
+);
+
+create table PRINCIPAL_PURGE_HOME_WORK (
+    "WORK_ID" integer primary key,
+    "JOB_ID" integer not null references JOB,
+    "HOME_RESOURCE_ID" integer not null references CALENDAR_HOME on delete cascade
+);
+
+create table CALENDARSERVER (
+    "NAME" nvarchar2(255) primary key,
+    "VALUE" nvarchar2(255)
+);
+
+insert into CALENDARSERVER (NAME, VALUE) values ('VERSION', '52');
+insert into CALENDARSERVER (NAME, VALUE) values ('CALENDAR-DATAVERSION', '6');
+insert into CALENDARSERVER (NAME, VALUE) values ('ADDRESSBOOK-DATAVERSION', '2');
+insert into CALENDARSERVER (NAME, VALUE) values ('NOTIFICATION-DATAVERSION', '1');
+insert into CALENDARSERVER (NAME, VALUE) values ('MIN-VALID-REVISION', '1');
+create index CALENDAR_HOME_METADAT_3cb9049e on CALENDAR_HOME_METADATA (
+    DEFAULT_EVENTS
+);
+
+create index CALENDAR_HOME_METADAT_d55e5548 on CALENDAR_HOME_METADATA (
+    DEFAULT_TASKS
+);
+
+create index CALENDAR_HOME_METADAT_910264ce on CALENDAR_HOME_METADATA (
+    DEFAULT_POLLS
+);
+
+create index CALENDAR_MIGRATION_LO_0525c72b on CALENDAR_MIGRATION (
+    LOCAL_RESOURCE_ID
+);
+
+create index NOTIFICATION_NOTIFICA_f891f5f9 on NOTIFICATION (
+    NOTIFICATION_HOME_RESOURCE_ID
+);
+
+create index CALENDAR_BIND_RESOURC_e57964d4 on CALENDAR_BIND (
+    CALENDAR_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_CALEN_a9a453a9 on CALENDAR_OBJECT (
+    CALENDAR_RESOURCE_ID,
+    ICALENDAR_UID
+);
+
+create index CALENDAR_OBJECT_CALEN_c4dc619c on CALENDAR_OBJECT (
+    CALENDAR_RESOURCE_ID,
+    RECURRANCE_MAX,
+    RECURRANCE_MIN
+);
+
+create index CALENDAR_OBJECT_ICALE_82e731d5 on CALENDAR_OBJECT (
+    ICALENDAR_UID
+);
+
+create index CALENDAR_OBJECT_DROPB_de041d80 on CALENDAR_OBJECT (
+    DROPBOX_ID
+);
+
+create index TIME_RANGE_CALENDAR_R_beb6e7eb on TIME_RANGE (
+    CALENDAR_RESOURCE_ID
+);
+
+create index TIME_RANGE_CALENDAR_O_acf37bd1 on TIME_RANGE (
+    CALENDAR_OBJECT_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_MIGRA_0502cbef on CALENDAR_OBJECT_MIGRATION (
+    CALENDAR_HOME_RESOURCE_ID,
+    LOCAL_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_MIGRA_3577efd9 on CALENDAR_OBJECT_MIGRATION (
+    LOCAL_RESOURCE_ID
+);
+
+create index ATTACHMENT_CALENDAR_H_0078845c on ATTACHMENT (
+    CALENDAR_HOME_RESOURCE_ID
+);
+
+create index ATTACHMENT_DROPBOX_ID_5073cf23 on ATTACHMENT (
+    DROPBOX_ID
+);
+
+create index ATTACHMENT_CALENDAR_O_81508484 on ATTACHMENT_CALENDAR_OBJECT (
+    CALENDAR_OBJECT_RESOURCE_ID
+);
+
+create index ATTACHMENT_MIGRATION__804bf85e on ATTACHMENT_MIGRATION (
+    CALENDAR_HOME_RESOURCE_ID,
+    LOCAL_RESOURCE_ID
+);
+
+create index ATTACHMENT_MIGRATION__816947fe on ATTACHMENT_MIGRATION (
+    LOCAL_RESOURCE_ID
+);
+
+create index SHARED_ADDRESSBOOK_BI_e9a2e6d4 on SHARED_ADDRESSBOOK_BIND (
+    OWNER_HOME_RESOURCE_ID
+);
+
+create index ABO_MEMBERS_ADDRESSBO_4effa879 on ABO_MEMBERS (
+    ADDRESSBOOK_ID
+);
+
+create index ABO_MEMBERS_MEMBER_ID_8d66adcf on ABO_MEMBERS (
+    MEMBER_ID
+);
+
+create index ABO_FOREIGN_MEMBERS_A_1fd2c5e9 on ABO_FOREIGN_MEMBERS (
+    ADDRESSBOOK_ID
+);
+
+create index SHARED_GROUP_BIND_RES_cf52f95d on SHARED_GROUP_BIND (
+    GROUP_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_REVIS_6d9d929c on CALENDAR_OBJECT_REVISIONS (
+    CALENDAR_RESOURCE_ID,
+    RESOURCE_NAME,
+    DELETED,
+    REVISION
+);
+
+create index CALENDAR_OBJECT_REVIS_265c8acf on CALENDAR_OBJECT_REVISIONS (
+    CALENDAR_RESOURCE_ID,
+    REVISION
+);
+
+create index CALENDAR_OBJECT_REVIS_550b1c56 on CALENDAR_OBJECT_REVISIONS (
+    CALENDAR_HOME_RESOURCE_ID,
+    REVISION
+);
+
+create index ADDRESSBOOK_OBJECT_RE_00fe8288 on ADDRESSBOOK_OBJECT_REVISIONS (
+    OWNER_HOME_RESOURCE_ID,
+    RESOURCE_NAME,
+    DELETED,
+    REVISION
+);
+
+create index ADDRESSBOOK_OBJECT_RE_45004780 on ADDRESSBOOK_OBJECT_REVISIONS (
+    OWNER_HOME_RESOURCE_ID,
+    REVISION
+);
+
+create index NOTIFICATION_OBJECT_R_036a9cee on NOTIFICATION_OBJECT_REVISIONS (
+    NOTIFICATION_HOME_RESOURCE_ID,
+    REVISION
+);
+
+create index APN_SUBSCRIPTIONS_RES_9610d78e on APN_SUBSCRIPTIONS (
+    RESOURCE_KEY
+);
+
+create index IMIP_TOKENS_TOKEN_e94b918f on IMIP_TOKENS (
+    TOKEN
+);
+
+create index IMIP_INVITATION_WORK__586d064c on IMIP_INVITATION_WORK (
+    JOB_ID
+);
+
+create index IMIP_POLLING_WORK_JOB_d5535891 on IMIP_POLLING_WORK (
+    JOB_ID
+);
+
+create index IMIP_REPLY_WORK_JOB_I_bf4ae73e on IMIP_REPLY_WORK (
+    JOB_ID
+);
+
+create index PUSH_NOTIFICATION_WOR_8bbab117 on PUSH_NOTIFICATION_WORK (
+    JOB_ID
+);
+
+create index PUSH_NOTIFICATION_WOR_3a3ee588 on PUSH_NOTIFICATION_WORK (
+    PUSH_ID
+);
+
+create index GROUP_CACHER_POLLING__6eb3151c on GROUP_CACHER_POLLING_WORK (
+    JOB_ID
+);
+
+create index GROUP_REFRESH_WORK_JO_717ede20 on GROUP_REFRESH_WORK (
+    JOB_ID
+);
+
+create index GROUP_REFRESH_WORK_GR_0325f3a8 on GROUP_REFRESH_WORK (
+    GROUP_UID
+);
+
+create index GROUP_DELEGATE_CHANGE_8bf9e6d8 on GROUP_DELEGATE_CHANGES_WORK (
+    JOB_ID
+);
+
+create index GROUP_DELEGATE_CHANGE_d8f7af69 on GROUP_DELEGATE_CHANGES_WORK (
+    DELEGATOR_UID
+);
+
+create index GROUPS_GROUP_UID_b35cce23 on GROUPS (
+    GROUP_UID
+);
+
+create index GROUP_MEMBERSHIP_MEMB_0ca508e8 on GROUP_MEMBERSHIP (
+    MEMBER_UID
+);
+
+create index GROUP_ATTENDEE_RECONC_da73d3c2 on GROUP_ATTENDEE_RECONCILE_WORK (
+    JOB_ID
+);
+
+create index GROUP_ATTENDEE_RECONC_b894ee7a on GROUP_ATTENDEE_RECONCILE_WORK (
+    RESOURCE_ID
+);
+
+create index GROUP_ATTENDEE_RECONC_5eabc549 on GROUP_ATTENDEE_RECONCILE_WORK (
+    GROUP_ID
+);
+
+create index GROUP_ATTENDEE_RESOUR_855124dc on GROUP_ATTENDEE (
+    RESOURCE_ID
+);
+
+create index GROUP_SHAREE_RECONCIL_9aad0858 on GROUP_SHAREE_RECONCILE_WORK (
+    JOB_ID
+);
+
+create index GROUP_SHAREE_RECONCIL_4dc60f78 on GROUP_SHAREE_RECONCILE_WORK (
+    CALENDAR_ID
+);
+
+create index GROUP_SHAREE_RECONCIL_1d14c921 on GROUP_SHAREE_RECONCILE_WORK (
+    GROUP_ID
+);
+
+create index GROUP_SHAREE_CALENDAR_28a88850 on GROUP_SHAREE (
+    CALENDAR_ID
+);
+
+create index DELEGATE_TO_DELEGATOR_5e149b11 on DELEGATES (
+    DELEGATE,
+    READ_WRITE,
+    DELEGATOR
+);
+
+create index DELEGATE_GROUPS_GROUP_25117446 on DELEGATE_GROUPS (
+    GROUP_ID
+);
+
+create index CALENDAR_OBJECT_SPLIT_af71dcda on CALENDAR_OBJECT_SPLITTER_WORK (
+    RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_SPLIT_33603b72 on CALENDAR_OBJECT_SPLITTER_WORK (
+    JOB_ID
+);
+
+create index CALENDAR_OBJECT_UPGRA_a5c181eb on CALENDAR_OBJECT_UPGRADE_WORK (
+    RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_UPGRA_39d6f8f9 on CALENDAR_OBJECT_UPGRADE_WORK (
+    JOB_ID
+);
+
+create index FIND_MIN_VALID_REVISI_78d17400 on FIND_MIN_VALID_REVISION_WORK (
+    JOB_ID
+);
+
+create index REVISION_CLEANUP_WORK_eb062686 on REVISION_CLEANUP_WORK (
+    JOB_ID
+);
+
+create index INBOX_CLEANUP_WORK_JO_799132bd on INBOX_CLEANUP_WORK (
+    JOB_ID
+);
+
+create index CLEANUP_ONE_INBOX_WOR_375dac36 on CLEANUP_ONE_INBOX_WORK (
+    JOB_ID
+);
+
+create index SCHEDULE_WORK_JOB_ID_65e810ee on SCHEDULE_WORK (
+    JOB_ID
+);
+
+create index SCHEDULE_WORK_ICALEND_089f33dc on SCHEDULE_WORK (
+    ICALENDAR_UID
+);
+
+create index SCHEDULE_REFRESH_WORK_26084c7b on SCHEDULE_REFRESH_WORK (
+    HOME_RESOURCE_ID
+);
+
+create index SCHEDULE_REFRESH_WORK_989efe54 on SCHEDULE_REFRESH_WORK (
+    RESOURCE_ID
+);
+
+create index SCHEDULE_REFRESH_ATTE_83053b91 on SCHEDULE_REFRESH_ATTENDEES (
+    RESOURCE_ID,
+    ATTENDEE
+);
+
+create index SCHEDULE_AUTO_REPLY_W_0256478d on SCHEDULE_AUTO_REPLY_WORK (
+    HOME_RESOURCE_ID
+);
+
+create index SCHEDULE_AUTO_REPLY_W_0755e754 on SCHEDULE_AUTO_REPLY_WORK (
+    RESOURCE_ID
+);
+
+create index SCHEDULE_ORGANIZER_WO_18ce4edd on SCHEDULE_ORGANIZER_WORK (
+    HOME_RESOURCE_ID
+);
+
+create index SCHEDULE_ORGANIZER_WO_14702035 on SCHEDULE_ORGANIZER_WORK (
+    RESOURCE_ID
+);
+
+create index SCHEDULE_ORGANIZER_SE_9ec9f827 on SCHEDULE_ORGANIZER_SEND_WORK (
+    HOME_RESOURCE_ID
+);
+
+create index SCHEDULE_ORGANIZER_SE_699fefc4 on SCHEDULE_ORGANIZER_SEND_WORK (
+    RESOURCE_ID
+);
+
+create index SCHEDULE_REPLY_WORK_H_745af8cf on SCHEDULE_REPLY_WORK (
+    HOME_RESOURCE_ID
+);
+
+create index SCHEDULE_REPLY_WORK_R_11bd3fbb on SCHEDULE_REPLY_WORK (
+    RESOURCE_ID
+);
+
+create index PRINCIPAL_PURGE_POLLI_6383e68a on PRINCIPAL_PURGE_POLLING_WORK (
+    JOB_ID
+);
+
+create index PRINCIPAL_PURGE_CHECK_b0c024c1 on PRINCIPAL_PURGE_CHECK_WORK (
+    JOB_ID
+);
+
+create index PRINCIPAL_PURGE_CHECK_198388a5 on PRINCIPAL_PURGE_CHECK_WORK (
+    UID
+);
+
+create index PRINCIPAL_PURGE_WORK__7a8141a3 on PRINCIPAL_PURGE_WORK (
+    JOB_ID
+);
+
+create index PRINCIPAL_PURGE_WORK__db35cfdc on PRINCIPAL_PURGE_WORK (
+    UID
+);
+
+create index PRINCIPAL_PURGE_HOME__f35eea7a on PRINCIPAL_PURGE_HOME_WORK (
+    JOB_ID
+);
+
+create index PRINCIPAL_PURGE_HOME__967e4480 on PRINCIPAL_PURGE_HOME_WORK (
+    HOME_RESOURCE_ID
+);
+
+-- Extra schema to add to current-oracle-dialect.sql

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/old/postgres-dialect/v51.sql
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/old/postgres-dialect/v51.sql	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/old/postgres-dialect/v51.sql	2015-03-10 20:42:34 UTC (rev 14555)
@@ -1,7 +1,7 @@
 -- -*- test-case-name: txdav.caldav.datastore.test.test_sql,txdav.carddav.datastore.test.test_sql -*-
 
 ----
--- Copyright (c) 2010-2014 Apple Inc. All rights reserved.
+-- Copyright (c) 2010-2015 Apple Inc. All rights reserved.
 --
 -- Licensed under the Apache License, Version 2.0 (the "License");
 -- you may not use this file except in compliance with the License.

Added: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/old/postgres-dialect/v52.sql
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/old/postgres-dialect/v52.sql	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/old/postgres-dialect/v52.sql	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,1218 @@
+-- -*- test-case-name: txdav.caldav.datastore.test.test_sql,txdav.carddav.datastore.test.test_sql -*-
+
+----
+-- Copyright (c) 2010-2015 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+
+-----------------
+-- Resource ID --
+-----------------
+
+create sequence RESOURCE_ID_SEQ;
+
+
+-------------------------
+-- Cluster Bookkeeping --
+-------------------------
+
+-- Information about a process connected to this database.
+
+-- Note that this must match the node info schema in twext.enterprise.queue.
+create table NODE_INFO (
+  HOSTNAME  varchar(255) not null,
+  PID       integer      not null,
+  PORT      integer      not null,
+  TIME      timestamp    not null default timezone('UTC', CURRENT_TIMESTAMP),
+
+  primary key (HOSTNAME, PORT)
+);
+
+-- Unique named locks.  This table should always be empty, but rows are
+-- temporarily created in order to prevent undesirable concurrency.
+create table NAMED_LOCK (
+    LOCK_NAME varchar(255) primary key
+);
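
A named lock is presumably held only for the duration of a single transaction: the row is inserted, the serialized work is done, and the transaction ends, leaving the table empty again. A minimal sketch of that pattern, with a made-up lock name:

    begin;
    insert into NAMED_LOCK (LOCK_NAME) values ('example-lock');
    -- a concurrent transaction inserting the same LOCK_NAME now blocks on the
    -- primary key until this transaction finishes
    -- ... perform the work that must not run concurrently ...
    rollback;  -- release the lock and leave the table empty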
+
+
+--------------------
+-- Jobs           --
+--------------------
+
+create sequence JOB_SEQ;
+
+create table JOB (
+  JOB_ID      integer primary key default nextval('JOB_SEQ'), --implicit index
+  WORK_TYPE   varchar(255) not null,
+  PRIORITY    integer default 0,
+  WEIGHT      integer default 0,
+  NOT_BEFORE  timestamp not null,
+  ASSIGNED    timestamp default null,
+  OVERDUE     timestamp default null,
+  FAILED      integer default 0
+);
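
Each of the *_WORK tables defined further down references JOB via JOB_ID, so enqueueing work presumably amounts to a pair of inserts in one transaction. A rough sketch only (the WORK_TYPE string is an assumption, not taken from this changeset):

    insert into JOB (WORK_TYPE, NOT_BEFORE)
      values ('IMIP_POLLING_WORK', timezone('UTC', CURRENT_TIMESTAMP));
    insert into IMIP_POLLING_WORK (JOB_ID)
      values (currval('JOB_SEQ'));

JOB_ID and WORK_ID pick up their sequence defaults, and PRIORITY, WEIGHT, ASSIGNED, OVERDUE and FAILED keep theirs until the job is processed.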
+
+-------------------
+-- Calendar Home --
+-------------------
+
+create table CALENDAR_HOME (
+  RESOURCE_ID      integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  OWNER_UID        varchar(255) not null,                		                -- implicit index
+  STATUS           integer      default 0 not null,                             -- enum HOME_STATUS
+  DATAVERSION      integer      default 0 not null,
+  
+  unique (OWNER_UID, STATUS)	-- implicit index
+);
+
+-- Enumeration of statuses
+
+create table HOME_STATUS (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into HOME_STATUS values (0, 'normal' );
+insert into HOME_STATUS values (1, 'external');
+insert into HOME_STATUS values (2, 'purging');
+insert into HOME_STATUS values (3, 'migrating');
+insert into HOME_STATUS values (4, 'disabled');
+
+
+--------------
+-- Calendar --
+--------------
+
+create table CALENDAR (
+  RESOURCE_ID integer   primary key default nextval('RESOURCE_ID_SEQ') -- implicit index
+);
+
+
+----------------------------
+-- Calendar Home Metadata --
+----------------------------
+
+create table CALENDAR_HOME_METADATA (
+  RESOURCE_ID              integer     primary key references CALENDAR_HOME on delete cascade, -- implicit index
+  QUOTA_USED_BYTES         integer     default 0 not null,
+  DEFAULT_EVENTS           integer     default null references CALENDAR on delete set null,
+  DEFAULT_TASKS            integer     default null references CALENDAR on delete set null,
+  DEFAULT_POLLS            integer     default null references CALENDAR on delete set null,
+  ALARM_VEVENT_TIMED       text        default null,
+  ALARM_VEVENT_ALLDAY      text        default null,
+  ALARM_VTODO_TIMED        text        default null,
+  ALARM_VTODO_ALLDAY       text        default null,
+  AVAILABILITY             text        default null,
+  CREATED                  timestamp   default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                 timestamp   default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+create index CALENDAR_HOME_METADATA_DEFAULT_EVENTS on
+  CALENDAR_HOME_METADATA(DEFAULT_EVENTS);
+create index CALENDAR_HOME_METADATA_DEFAULT_TASKS on
+  CALENDAR_HOME_METADATA(DEFAULT_TASKS);
+create index CALENDAR_HOME_METADATA_DEFAULT_POLLS on
+  CALENDAR_HOME_METADATA(DEFAULT_POLLS);
+
+
+-----------------------
+-- Calendar Metadata --
+-----------------------
+
+create table CALENDAR_METADATA (
+  RESOURCE_ID           integer      primary key references CALENDAR on delete cascade, -- implicit index
+  SUPPORTED_COMPONENTS  varchar(255) default null,
+  CREATED               timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED              timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+------------------------
+-- Calendar Migration --
+------------------------
+
+create table CALENDAR_MIGRATION (
+  CALENDAR_HOME_RESOURCE_ID		integer references CALENDAR_HOME on delete cascade,
+  REMOTE_RESOURCE_ID			integer not null,
+  LOCAL_RESOURCE_ID				integer	references CALENDAR on delete cascade,
+  LAST_SYNC_TOKEN				varchar(255),
+   
+  primary key (CALENDAR_HOME_RESOURCE_ID, REMOTE_RESOURCE_ID) -- implicit index
+);
+
+create index CALENDAR_MIGRATION_LOCAL_RESOURCE_ID on
+  CALENDAR_MIGRATION(LOCAL_RESOURCE_ID);
+
+
+---------------------------
+-- Sharing Notifications --
+---------------------------
+
+create table NOTIFICATION_HOME (
+  RESOURCE_ID integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  OWNER_UID   varchar(255) not null,	                                   -- implicit index
+  STATUS      integer      default 0 not null,                             -- enum HOME_STATUS
+  DATAVERSION integer      default 0 not null,
+    
+  unique (OWNER_UID, STATUS)	-- implicit index
+);
+
+create table NOTIFICATION (
+  RESOURCE_ID                   integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  NOTIFICATION_HOME_RESOURCE_ID integer      not null references NOTIFICATION_HOME,
+  NOTIFICATION_UID              varchar(255) not null,
+  NOTIFICATION_TYPE             varchar(255) not null,
+  NOTIFICATION_DATA             text         not null,
+  MD5                           char(32)     not null,
+  CREATED                       timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                      timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+
+  unique (NOTIFICATION_UID, NOTIFICATION_HOME_RESOURCE_ID) -- implicit index
+);
+
+create index NOTIFICATION_NOTIFICATION_HOME_RESOURCE_ID on
+  NOTIFICATION(NOTIFICATION_HOME_RESOURCE_ID);
+
+
+-------------------
+-- Calendar Bind --
+-------------------
+
+-- Joins CALENDAR_HOME and CALENDAR
+
+create table CALENDAR_BIND (
+  CALENDAR_HOME_RESOURCE_ID integer      not null references CALENDAR_HOME,
+  CALENDAR_RESOURCE_ID      integer      not null references CALENDAR on delete cascade,
+  CALENDAR_RESOURCE_NAME    varchar(255) not null,
+  BIND_MODE                 integer      not null, -- enum CALENDAR_BIND_MODE
+  BIND_STATUS               integer      not null, -- enum CALENDAR_BIND_STATUS
+  BIND_REVISION             integer      default 0 not null,
+  BIND_UID                  varchar(36)  default null,
+  MESSAGE                   text,
+  TRANSP                    integer      default 0 not null, -- enum CALENDAR_TRANSP
+  ALARM_VEVENT_TIMED        text         default null,
+  ALARM_VEVENT_ALLDAY       text         default null,
+  ALARM_VTODO_TIMED         text         default null,
+  ALARM_VTODO_ALLDAY        text         default null,
+  TIMEZONE                  text         default null,
+
+  primary key (CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_ID), -- implicit index
+  unique (CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_NAME)     -- implicit index
+);
+
+create index CALENDAR_BIND_RESOURCE_ID on
+  CALENDAR_BIND(CALENDAR_RESOURCE_ID);
+
+-- Enumeration of calendar bind modes
+
+create table CALENDAR_BIND_MODE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_BIND_MODE values (0, 'own'  );
+insert into CALENDAR_BIND_MODE values (1, 'read' );
+insert into CALENDAR_BIND_MODE values (2, 'write');
+insert into CALENDAR_BIND_MODE values (3, 'direct');
+insert into CALENDAR_BIND_MODE values (4, 'indirect');
+insert into CALENDAR_BIND_MODE values (5, 'group');
+insert into CALENDAR_BIND_MODE values (6, 'group_read');
+insert into CALENDAR_BIND_MODE values (7, 'group_write');
+
+-- Enumeration of statuses
+
+create table CALENDAR_BIND_STATUS (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_BIND_STATUS values (0, 'invited' );
+insert into CALENDAR_BIND_STATUS values (1, 'accepted');
+insert into CALENDAR_BIND_STATUS values (2, 'declined');
+insert into CALENDAR_BIND_STATUS values (3, 'invalid');
+insert into CALENDAR_BIND_STATUS values (4, 'deleted');
+
+
+-- Enumeration of transparency
+
+create table CALENDAR_TRANSP (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_TRANSP values (0, 'opaque' );
+insert into CALENDAR_TRANSP values (1, 'transparent');
+
+
+---------------------
+-- Calendar Object --
+---------------------
+
+create table CALENDAR_OBJECT (
+  RESOURCE_ID          integer      primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  CALENDAR_RESOURCE_ID integer      not null references CALENDAR on delete cascade,
+  RESOURCE_NAME        varchar(255) not null,
+  ICALENDAR_TEXT       text         not null,
+  ICALENDAR_UID        varchar(255) not null,
+  ICALENDAR_TYPE       varchar(255) not null,
+  ATTACHMENTS_MODE     integer      default 0 not null, -- enum CALENDAR_OBJ_ATTACHMENTS_MODE
+  DROPBOX_ID           varchar(255),
+  ORGANIZER            varchar(255),
+  RECURRANCE_MIN       date,        -- minimum date that recurrences have been expanded to.
+  RECURRANCE_MAX       date,        -- maximum date that recurrences have been expanded to.
+  ACCESS               integer      default 0 not null,
+  SCHEDULE_OBJECT      boolean      default false,
+  SCHEDULE_TAG         varchar(36)  default null,
+  SCHEDULE_ETAGS       text         default null,
+  PRIVATE_COMMENTS     boolean      default false not null,
+  MD5                  char(32)     not null,
+  CREATED              timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED             timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  DATAVERSION          integer      default 0 not null,
+
+  unique (CALENDAR_RESOURCE_ID, RESOURCE_NAME) -- implicit index
+
+  -- since the 'inbox' is a 'calendar resource' for the purpose of storing
+  -- calendar objects, this constraint has to be selectively enforced by the
+  -- application layer.
+
+  -- unique (CALENDAR_RESOURCE_ID, ICALENDAR_UID)
+);
+
+create index CALENDAR_OBJECT_CALENDAR_RESOURCE_ID_AND_ICALENDAR_UID on
+  CALENDAR_OBJECT(CALENDAR_RESOURCE_ID, ICALENDAR_UID);
+
+create index CALENDAR_OBJECT_CALENDAR_RESOURCE_ID_RECURRANCE_MAX_MIN on
+  CALENDAR_OBJECT(CALENDAR_RESOURCE_ID, RECURRANCE_MAX, RECURRANCE_MIN);
+
+create index CALENDAR_OBJECT_ICALENDAR_UID on
+  CALENDAR_OBJECT(ICALENDAR_UID);
+
+create index CALENDAR_OBJECT_DROPBOX_ID on
+  CALENDAR_OBJECT(DROPBOX_ID);
+
+-- Enumeration of attachment modes
+
+create table CALENDAR_OBJ_ATTACHMENTS_MODE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_OBJ_ATTACHMENTS_MODE values (0, 'none' );
+insert into CALENDAR_OBJ_ATTACHMENTS_MODE values (1, 'read' );
+insert into CALENDAR_OBJ_ATTACHMENTS_MODE values (2, 'write');
+
+
+-- Enumeration of calendar access types
+
+create table CALENDAR_ACCESS_TYPE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(32) not null unique
+);
+
+insert into CALENDAR_ACCESS_TYPE values (0, ''             );
+insert into CALENDAR_ACCESS_TYPE values (1, 'public'       );
+insert into CALENDAR_ACCESS_TYPE values (2, 'private'      );
+insert into CALENDAR_ACCESS_TYPE values (3, 'confidential' );
+insert into CALENDAR_ACCESS_TYPE values (4, 'restricted'   );
+
+
+-----------------
+-- Instance ID --
+-----------------
+
+create sequence INSTANCE_ID_SEQ;
+
+
+----------------
+-- Time Range --
+----------------
+
+create table TIME_RANGE (
+  INSTANCE_ID                 integer        primary key default nextval('INSTANCE_ID_SEQ'), -- implicit index
+  CALENDAR_RESOURCE_ID        integer        not null references CALENDAR on delete cascade,
+  CALENDAR_OBJECT_RESOURCE_ID integer        not null references CALENDAR_OBJECT on delete cascade,
+  FLOATING                    boolean        not null,
+  START_DATE                  timestamp      not null,
+  END_DATE                    timestamp      not null,
+  FBTYPE                      integer        not null,
+  TRANSPARENT                 boolean        not null
+);
+
+create index TIME_RANGE_CALENDAR_RESOURCE_ID on
+  TIME_RANGE(CALENDAR_RESOURCE_ID);
+create index TIME_RANGE_CALENDAR_OBJECT_RESOURCE_ID on
+  TIME_RANGE(CALENDAR_OBJECT_RESOURCE_ID);
+
+
+-- Enumeration of free/busy types
+
+create table FREE_BUSY_TYPE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into FREE_BUSY_TYPE values (0, 'unknown'         );
+insert into FREE_BUSY_TYPE values (1, 'free'            );
+insert into FREE_BUSY_TYPE values (2, 'busy'            );
+insert into FREE_BUSY_TYPE values (3, 'busy-unavailable');
+insert into FREE_BUSY_TYPE values (4, 'busy-tentative'  );
+
+
+-------------------
+-- Per-user data --
+-------------------
+
+create table PERUSER (
+  TIME_RANGE_INSTANCE_ID      integer      not null references TIME_RANGE on delete cascade,
+  USER_ID                     varchar(255) not null,
+  TRANSPARENT                 boolean      not null,
+  ADJUSTED_START_DATE         timestamp    default null,
+  ADJUSTED_END_DATE           timestamp    default null,
+  
+  primary key (TIME_RANGE_INSTANCE_ID, USER_ID)    -- implicit index
+);
+
+
+-------------------------------
+-- Calendar Object Migration --
+-------------------------------
+
+create table CALENDAR_OBJECT_MIGRATION (
+  CALENDAR_HOME_RESOURCE_ID		integer references CALENDAR_HOME on delete cascade,
+  REMOTE_RESOURCE_ID			integer not null,
+  LOCAL_RESOURCE_ID				integer	references CALENDAR_OBJECT on delete cascade,
+   
+  primary key (CALENDAR_HOME_RESOURCE_ID, REMOTE_RESOURCE_ID) -- implicit index
+);
+
+create index CALENDAR_OBJECT_MIGRATION_HOME_LOCAL on
+  CALENDAR_OBJECT_MIGRATION(CALENDAR_HOME_RESOURCE_ID, LOCAL_RESOURCE_ID);
+create index CALENDAR_OBJECT_MIGRATION_LOCAL_RESOURCE_ID on
+  CALENDAR_OBJECT_MIGRATION(LOCAL_RESOURCE_ID);
+
+
+----------------
+-- Attachment --
+----------------
+
+create sequence ATTACHMENT_ID_SEQ;
+
+create table ATTACHMENT (
+  ATTACHMENT_ID               integer           primary key default nextval('ATTACHMENT_ID_SEQ'), -- implicit index
+  CALENDAR_HOME_RESOURCE_ID   integer           not null references CALENDAR_HOME,
+  DROPBOX_ID                  varchar(255),
+  CONTENT_TYPE                varchar(255)      not null,
+  SIZE                        integer           not null,
+  MD5                         char(32)          not null,
+  CREATED                     timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                    timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+  PATH                        varchar(1024)     not null
+);
+
+create index ATTACHMENT_CALENDAR_HOME_RESOURCE_ID on
+  ATTACHMENT(CALENDAR_HOME_RESOURCE_ID);
+
+create index ATTACHMENT_DROPBOX_ID on
+  ATTACHMENT(DROPBOX_ID);
+
+-- Many-to-many relationship between attachments and calendar objects
+create table ATTACHMENT_CALENDAR_OBJECT (
+  ATTACHMENT_ID                  integer      not null references ATTACHMENT on delete cascade,
+  MANAGED_ID                     varchar(255) not null,
+  CALENDAR_OBJECT_RESOURCE_ID    integer      not null references CALENDAR_OBJECT on delete cascade,
+
+  primary key (ATTACHMENT_ID, CALENDAR_OBJECT_RESOURCE_ID), -- implicit index
+  unique (MANAGED_ID, CALENDAR_OBJECT_RESOURCE_ID) --implicit index
+);
+
+create index ATTACHMENT_CALENDAR_OBJECT_CALENDAR_OBJECT_RESOURCE_ID on
+  ATTACHMENT_CALENDAR_OBJECT(CALENDAR_OBJECT_RESOURCE_ID);
+
+-----------------------------------
+-- Calendar Attachment Migration --
+-----------------------------------
+
+create table ATTACHMENT_MIGRATION (
+  CALENDAR_HOME_RESOURCE_ID		integer references CALENDAR_HOME on delete cascade,
+  REMOTE_RESOURCE_ID			integer not null,
+  LOCAL_RESOURCE_ID				integer	references ATTACHMENT on delete cascade,
+   
+  primary key (CALENDAR_HOME_RESOURCE_ID, REMOTE_RESOURCE_ID) -- implicit index
+);
+
+create index ATTACHMENT_MIGRATION_HOME_LOCAL on
+  ATTACHMENT_MIGRATION(CALENDAR_HOME_RESOURCE_ID, LOCAL_RESOURCE_ID);
+create index ATTACHMENT_MIGRATION_LOCAL_RESOURCE_ID on
+  ATTACHMENT_MIGRATION(LOCAL_RESOURCE_ID);
+
+
+-----------------------
+-- Resource Property --
+-----------------------
+
+create table RESOURCE_PROPERTY (
+  RESOURCE_ID integer      not null, -- foreign key: *.RESOURCE_ID
+  NAME        varchar(255) not null,
+  VALUE       text         not null, -- FIXME: xml?
+  VIEWER_UID  varchar(255),
+
+  primary key (RESOURCE_ID, NAME, VIEWER_UID) -- implicit index
+);
+
+
+----------------------
+-- AddressBook Home --
+----------------------
+
+create table ADDRESSBOOK_HOME (
+  RESOURCE_ID                   integer         primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+  ADDRESSBOOK_PROPERTY_STORE_ID integer         default nextval('RESOURCE_ID_SEQ') not null,    -- implicit index
+  OWNER_UID                     varchar(255)    not null,
+  STATUS                        integer         default 0 not null,                             -- enum HOME_STATUS
+  DATAVERSION                   integer         default 0 not null,
+    
+  unique (OWNER_UID, STATUS)	-- implicit index
+);
+
+
+-------------------------------
+-- AddressBook Home Metadata --
+-------------------------------
+
+create table ADDRESSBOOK_HOME_METADATA (
+  RESOURCE_ID      integer      primary key references ADDRESSBOOK_HOME on delete cascade, -- implicit index
+  QUOTA_USED_BYTES integer      default 0 not null,
+  CREATED          timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED         timestamp    default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+-----------------------------
+-- Shared AddressBook Bind --
+-----------------------------
+
+-- Joins sharee ADDRESSBOOK_HOME and owner ADDRESSBOOK_HOME
+
+create table SHARED_ADDRESSBOOK_BIND (
+  ADDRESSBOOK_HOME_RESOURCE_ID          integer         not null references ADDRESSBOOK_HOME,
+  OWNER_HOME_RESOURCE_ID                integer         not null references ADDRESSBOOK_HOME on delete cascade,
+  ADDRESSBOOK_RESOURCE_NAME             varchar(255)    not null,
+  BIND_MODE                             integer         not null, -- enum CALENDAR_BIND_MODE
+  BIND_STATUS                           integer         not null, -- enum CALENDAR_BIND_STATUS
+  BIND_REVISION                         integer         default 0 not null,
+  BIND_UID                              varchar(36)     default null,
+  MESSAGE                               text,                     -- FIXME: xml?
+
+  primary key (ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_HOME_RESOURCE_ID), -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, ADDRESSBOOK_RESOURCE_NAME)     -- implicit index
+);
+
+create index SHARED_ADDRESSBOOK_BIND_RESOURCE_ID on
+  SHARED_ADDRESSBOOK_BIND(OWNER_HOME_RESOURCE_ID);
+
+
+------------------------
+-- AddressBook Object --
+------------------------
+
+create table ADDRESSBOOK_OBJECT (
+  RESOURCE_ID                   integer         primary key default nextval('RESOURCE_ID_SEQ'),    -- implicit index
+  ADDRESSBOOK_HOME_RESOURCE_ID  integer         not null references ADDRESSBOOK_HOME on delete cascade,
+  RESOURCE_NAME                 varchar(255)    not null,
+  VCARD_TEXT                    text            not null,
+  VCARD_UID                     varchar(255)    not null,
+  KIND                          integer         not null,  -- enum ADDRESSBOOK_OBJECT_KIND
+  MD5                           char(32)        not null,
+  CREATED                       timestamp       default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                      timestamp       default timezone('UTC', CURRENT_TIMESTAMP),
+  DATAVERSION                   integer         default 0 not null,
+
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, RESOURCE_NAME), -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, VCARD_UID)      -- implicit index
+);
+
+
+-----------------------------
+-- AddressBook Object kind --
+-----------------------------
+
+create table ADDRESSBOOK_OBJECT_KIND (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into ADDRESSBOOK_OBJECT_KIND values (0, 'person');
+insert into ADDRESSBOOK_OBJECT_KIND values (1, 'group' );
+insert into ADDRESSBOOK_OBJECT_KIND values (2, 'resource');
+insert into ADDRESSBOOK_OBJECT_KIND values (3, 'location');
+
+
+----------------------------------
+-- Revisions, forward reference --
+----------------------------------
+
+create sequence REVISION_SEQ;
+
+---------------------------------
+-- Address Book Object Members --
+---------------------------------
+
+create table ABO_MEMBERS (
+  GROUP_ID        integer     not null, -- references ADDRESSBOOK_OBJECT on delete cascade,   -- AddressBook Object's (kind=='group') RESOURCE_ID
+  ADDRESSBOOK_ID  integer     not null references ADDRESSBOOK_HOME on delete cascade,
+  MEMBER_ID       integer     not null, -- references ADDRESSBOOK_OBJECT,                     -- member AddressBook Object's RESOURCE_ID
+  REVISION        integer     default nextval('REVISION_SEQ') not null,
+  REMOVED         boolean     default false not null,
+  MODIFIED        timestamp   default timezone('UTC', CURRENT_TIMESTAMP),
+
+  primary key (GROUP_ID, MEMBER_ID, REVISION) -- implicit index
+);
+
+create index ABO_MEMBERS_ADDRESSBOOK_ID on
+  ABO_MEMBERS(ADDRESSBOOK_ID);
+create index ABO_MEMBERS_MEMBER_ID on
+  ABO_MEMBERS(MEMBER_ID);
+
+------------------------------------------
+-- Address Book Object Foreign Members  --
+------------------------------------------
+
+create table ABO_FOREIGN_MEMBERS (
+  GROUP_ID           integer      not null references ADDRESSBOOK_OBJECT on delete cascade,  -- AddressBook Object's (kind=='group') RESOURCE_ID
+  ADDRESSBOOK_ID     integer      not null references ADDRESSBOOK_HOME on delete cascade,
+  MEMBER_ADDRESS     varchar(255) not null,                                                  -- member AddressBook Object's 'calendar' address
+
+  primary key (GROUP_ID, MEMBER_ADDRESS) -- implicit index
+);
+
+create index ABO_FOREIGN_MEMBERS_ADDRESSBOOK_ID on
+  ABO_FOREIGN_MEMBERS(ADDRESSBOOK_ID);
+
+-----------------------
+-- Shared Group Bind --
+-----------------------
+
+-- Joins ADDRESSBOOK_HOME and ADDRESSBOOK_OBJECT (kind == group)
+
+create table SHARED_GROUP_BIND (
+  ADDRESSBOOK_HOME_RESOURCE_ID      integer      not null references ADDRESSBOOK_HOME,
+  GROUP_RESOURCE_ID                 integer      not null references ADDRESSBOOK_OBJECT on delete cascade,
+  GROUP_ADDRESSBOOK_NAME            varchar(255) not null,
+  BIND_MODE                         integer      not null, -- enum CALENDAR_BIND_MODE
+  BIND_STATUS                       integer      not null, -- enum CALENDAR_BIND_STATUS
+  BIND_REVISION                     integer      default 0 not null,
+  BIND_UID                          varchar(36)  default null,
+  MESSAGE                           text,                  -- FIXME: xml?
+
+  primary key (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_RESOURCE_ID), -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, GROUP_ADDRESSBOOK_NAME)  -- implicit index
+);
+
+create index SHARED_GROUP_BIND_RESOURCE_ID on
+  SHARED_GROUP_BIND(GROUP_RESOURCE_ID);
+
+
+---------------
+-- Revisions --
+---------------
+
+-- create sequence REVISION_SEQ;
+
+
+-------------------------------
+-- Calendar Object Revisions --
+-------------------------------
+
+create table CALENDAR_OBJECT_REVISIONS (
+  CALENDAR_HOME_RESOURCE_ID integer      not null references CALENDAR_HOME,
+  CALENDAR_RESOURCE_ID      integer      references CALENDAR,
+  CALENDAR_NAME             varchar(255) default null,
+  RESOURCE_NAME             varchar(255),
+  REVISION                  integer      default nextval('REVISION_SEQ') not null,
+  DELETED                   boolean      not null,
+  MODIFIED                  timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  
+  unique(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_ID, CALENDAR_NAME, RESOURCE_NAME)    -- implicit index
+);
+
+create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_RESOURCE_NAME_DELETED_REVISION
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, RESOURCE_NAME, DELETED, REVISION);
+
+create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_REVISION
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, REVISION);
+
+create index CALENDAR_OBJECT_REVISIONS_HOME_RESOURCE_ID_REVISION
+  on CALENDAR_OBJECT_REVISIONS(CALENDAR_HOME_RESOURCE_ID, REVISION);
+
+
+----------------------------------
+-- AddressBook Object Revisions --
+----------------------------------
+
+create table ADDRESSBOOK_OBJECT_REVISIONS (
+  ADDRESSBOOK_HOME_RESOURCE_ID  integer      not null references ADDRESSBOOK_HOME,
+  OWNER_HOME_RESOURCE_ID        integer      references ADDRESSBOOK_HOME,
+  ADDRESSBOOK_NAME              varchar(255) default null,
+  OBJECT_RESOURCE_ID            integer      default 0,
+  RESOURCE_NAME                 varchar(255),
+  REVISION                      integer      default nextval('REVISION_SEQ') not null,
+  DELETED                       boolean      not null,
+  MODIFIED                      timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+  
+  unique(ADDRESSBOOK_HOME_RESOURCE_ID, OWNER_HOME_RESOURCE_ID, ADDRESSBOOK_NAME, RESOURCE_NAME)    -- implicit index
+);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_RESOURCE_NAME_DELETED_REVISION
+  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_HOME_RESOURCE_ID, RESOURCE_NAME, DELETED, REVISION);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_OWNER_HOME_RESOURCE_ID_REVISION
+  on ADDRESSBOOK_OBJECT_REVISIONS(OWNER_HOME_RESOURCE_ID, REVISION);
+
+
+-----------------------------------
+-- Notification Object Revisions --
+-----------------------------------
+
+create table NOTIFICATION_OBJECT_REVISIONS (
+  NOTIFICATION_HOME_RESOURCE_ID integer      not null references NOTIFICATION_HOME on delete cascade,
+  RESOURCE_NAME                 varchar(255),
+  REVISION                      integer      default nextval('REVISION_SEQ') not null,
+  DELETED                       boolean      not null,
+  MODIFIED                      timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+
+  unique (NOTIFICATION_HOME_RESOURCE_ID, RESOURCE_NAME) -- implicit index
+);
+
+create index NOTIFICATION_OBJECT_REVISIONS_RESOURCE_ID_REVISION
+  on NOTIFICATION_OBJECT_REVISIONS(NOTIFICATION_HOME_RESOURCE_ID, REVISION);
+
+
+-------------------------------------------
+-- Apple Push Notification Subscriptions --
+-------------------------------------------
+
+create table APN_SUBSCRIPTIONS (
+  TOKEN                         varchar(255) not null,
+  RESOURCE_KEY                  varchar(255) not null,
+  MODIFIED                      integer      not null,
+  SUBSCRIBER_GUID               varchar(255) not null,
+  USER_AGENT                    varchar(255) default null,
+  IP_ADDR                       varchar(255) default null,
+
+  primary key (TOKEN, RESOURCE_KEY) -- implicit index
+);
+
+create index APN_SUBSCRIPTIONS_RESOURCE_KEY
+  on APN_SUBSCRIPTIONS(RESOURCE_KEY);
+
+
+-----------------
+-- IMIP Tokens --
+-----------------
+
+create table IMIP_TOKENS (
+  TOKEN                         varchar(255) not null,
+  ORGANIZER                     varchar(255) not null,
+  ATTENDEE                      varchar(255) not null,
+  ICALUID                       varchar(255) not null,
+  ACCESSED                      timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
+
+  primary key (ORGANIZER, ATTENDEE, ICALUID) -- implicit index
+);
+
+create index IMIP_TOKENS_TOKEN
+  on IMIP_TOKENS(TOKEN);
+
+
+----------------
+-- Work Items --
+----------------
+
+create sequence WORKITEM_SEQ;
+
+
+---------------------------
+-- IMIP Invitation Work --
+---------------------------
+
+create table IMIP_INVITATION_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null,
+  FROM_ADDR                     varchar(255) not null,
+  TO_ADDR                       varchar(255) not null,
+  ICALENDAR_TEXT                text         not null
+);
+
+create index IMIP_INVITATION_WORK_JOB_ID on
+  IMIP_INVITATION_WORK(JOB_ID);
+
+-----------------------
+-- IMIP Polling Work --
+-----------------------
+
+create table IMIP_POLLING_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null
+);
+
+create index IMIP_POLLING_WORK_JOB_ID on
+  IMIP_POLLING_WORK(JOB_ID);
+
+
+---------------------
+-- IMIP Reply Work --
+---------------------
+
+create table IMIP_REPLY_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null,
+  ORGANIZER                     varchar(255) not null,
+  ATTENDEE                      varchar(255) not null,
+  ICALENDAR_TEXT                text         not null
+);
+
+create index IMIP_REPLY_WORK_JOB_ID on
+  IMIP_REPLY_WORK(JOB_ID);
+
+
+------------------------
+-- Push Notifications --
+------------------------
+
+create table PUSH_NOTIFICATION_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null,
+  PUSH_ID                       varchar(255) not null,
+  PUSH_PRIORITY                 integer      not null -- 1:low 5:medium 10:high
+);
+
+create index PUSH_NOTIFICATION_WORK_JOB_ID on
+  PUSH_NOTIFICATION_WORK(JOB_ID);
+create index PUSH_NOTIFICATION_WORK_PUSH_ID on
+  PUSH_NOTIFICATION_WORK(PUSH_ID);
+
+-----------------
+-- GroupCacher --
+-----------------
+
+create table GROUP_CACHER_POLLING_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null
+);
+
+create index GROUP_CACHER_POLLING_WORK_JOB_ID on
+  GROUP_CACHER_POLLING_WORK(JOB_ID);
+
+create table GROUP_REFRESH_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null,
+  GROUP_UID                     varchar(255) not null
+);
+
+create index GROUP_REFRESH_WORK_JOB_ID on
+  GROUP_REFRESH_WORK(JOB_ID);
+create index GROUP_REFRESH_WORK_GROUP_UID on
+  GROUP_REFRESH_WORK(GROUP_UID);
+
+create table GROUP_DELEGATE_CHANGES_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null,
+  DELEGATOR_UID                 varchar(255) not null,
+  READ_DELEGATE_UID             varchar(255) not null,
+  WRITE_DELEGATE_UID            varchar(255) not null
+);
+
+create index GROUP_DELEGATE_CHANGES_WORK_JOB_ID on
+  GROUP_DELEGATE_CHANGES_WORK(JOB_ID);
+create index GROUP_DELEGATE_CHANGES_WORK_DELEGATOR_UID on
+  GROUP_DELEGATE_CHANGES_WORK(DELEGATOR_UID);
+
+create table GROUPS (
+  GROUP_ID                      integer      primary key default nextval('RESOURCE_ID_SEQ'),    -- implicit index
+  NAME                          varchar(255) not null,
+  GROUP_UID                     varchar(255) not null unique,
+  MEMBERSHIP_HASH               varchar(255) not null,
+  EXTANT                        integer default 1,
+  CREATED                       timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+  MODIFIED                      timestamp default timezone('UTC', CURRENT_TIMESTAMP)
+);
+create index GROUPS_GROUP_UID on
+  GROUPS(GROUP_UID);
+
+create table GROUP_MEMBERSHIP (
+  GROUP_ID                     integer not null references GROUPS on delete cascade,
+  MEMBER_UID                   varchar(255) not null,
+
+  primary key (GROUP_ID, MEMBER_UID)
+);
+
+create index GROUP_MEMBERSHIP_MEMBER on
+  GROUP_MEMBERSHIP(MEMBER_UID);
+
+create table GROUP_ATTENDEE_RECONCILE_WORK (
+  WORK_ID                       integer primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer not null references JOB,
+  RESOURCE_ID                   integer not null references CALENDAR_OBJECT on delete cascade,
+  GROUP_ID                      integer not null references GROUPS on delete cascade
+);
+
+create index GROUP_ATTENDEE_RECONCILE_WORK_JOB_ID on
+  GROUP_ATTENDEE_RECONCILE_WORK(JOB_ID);
+create index GROUP_ATTENDEE_RECONCILE_WORK_RESOURCE_ID on
+  GROUP_ATTENDEE_RECONCILE_WORK(RESOURCE_ID);
+create index GROUP_ATTENDEE_RECONCILE_WORK_GROUP_ID on
+  GROUP_ATTENDEE_RECONCILE_WORK(GROUP_ID);
+
+
+create table GROUP_ATTENDEE (
+  GROUP_ID                      integer not null references GROUPS on delete cascade,
+  RESOURCE_ID                   integer not null references CALENDAR_OBJECT on delete cascade,
+  MEMBERSHIP_HASH               varchar(255) not null,
+
+  primary key (GROUP_ID, RESOURCE_ID)
+);
+
+create index GROUP_ATTENDEE_RESOURCE_ID on
+  GROUP_ATTENDEE(RESOURCE_ID);
+
+
+create table GROUP_SHAREE_RECONCILE_WORK (
+  WORK_ID                       integer primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer not null references JOB,
+  CALENDAR_ID                   integer	not null references CALENDAR on delete cascade,
+  GROUP_ID                      integer not null references GROUPS on delete cascade
+);
+
+create index GROUP_SHAREE_RECONCILE_WORK_JOB_ID on
+  GROUP_SHAREE_RECONCILE_WORK(JOB_ID);
+create index GROUP_SHAREE_RECONCILE_WORK_CALENDAR_ID on
+  GROUP_SHAREE_RECONCILE_WORK(CALENDAR_ID);
+create index GROUP_SHAREE_RECONCILE_WORK_GROUP_ID on
+  GROUP_SHAREE_RECONCILE_WORK(GROUP_ID);
+
+
+create table GROUP_SHAREE (
+  GROUP_ID                      integer not null references GROUPS on delete cascade,
+  CALENDAR_ID      				integer not null references CALENDAR on delete cascade,
+  GROUP_BIND_MODE               integer not null, -- enum CALENDAR_BIND_MODE
+  MEMBERSHIP_HASH               varchar(255) not null,
+
+  primary key (GROUP_ID, CALENDAR_ID)
+);
+
+create index GROUP_SHAREE_CALENDAR_ID on
+  GROUP_SHAREE(CALENDAR_ID);
+
+---------------
+-- Delegates --
+---------------
+
+create table DELEGATES (
+  DELEGATOR                     varchar(255) not null,
+  DELEGATE                      varchar(255) not null,
+  READ_WRITE                    integer      not null, -- 1 = ReadWrite, 0 = ReadOnly
+
+  primary key (DELEGATOR, READ_WRITE, DELEGATE)
+);
+create index DELEGATE_TO_DELEGATOR on
+  DELEGATES(DELEGATE, READ_WRITE, DELEGATOR);
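
With the composite primary key on (DELEGATOR, READ_WRITE, DELEGATE) plus the reverse index above, delegation lookups are cheap in both directions; for example (UIDs below are placeholders):

    -- who can act for user01 with read-write access?
    select DELEGATE  from DELEGATES where DELEGATOR = 'uid-user01' and READ_WRITE = 1;
    -- whose data can user02 act on?
    select DELEGATOR from DELEGATES where DELEGATE  = 'uid-user02' and READ_WRITE = 1;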
+
+create table DELEGATE_GROUPS (
+  DELEGATOR                     varchar(255) not null,
+  GROUP_ID                      integer      not null references GROUPS on delete cascade,
+  READ_WRITE                    integer      not null, -- 1 = ReadWrite, 0 = ReadOnly
+  IS_EXTERNAL                   integer      not null, -- 1 = External, 0 = Internal
+
+  primary key (DELEGATOR, READ_WRITE, GROUP_ID)
+);
+create index DELEGATE_GROUPS_GROUP_ID on
+  DELEGATE_GROUPS(GROUP_ID);
+
+create table EXTERNAL_DELEGATE_GROUPS (
+  DELEGATOR                     varchar(255) primary key,
+  GROUP_UID_READ                varchar(255),
+  GROUP_UID_WRITE               varchar(255)
+);
+
+--------------------------
+-- Object Splitter Work --
+--------------------------
+
+create table CALENDAR_OBJECT_SPLITTER_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null,
+  RESOURCE_ID                   integer      not null references CALENDAR_OBJECT on delete cascade
+);
+
+create index CALENDAR_OBJECT_SPLITTER_WORK_RESOURCE_ID on
+  CALENDAR_OBJECT_SPLITTER_WORK(RESOURCE_ID);
+create index CALENDAR_OBJECT_SPLITTER_WORK_JOB_ID on
+  CALENDAR_OBJECT_SPLITTER_WORK(JOB_ID);
+
+-------------------------
+-- Object Upgrade Work --
+-------------------------
+
+create table CALENDAR_OBJECT_UPGRADE_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null,
+  RESOURCE_ID                   integer      not null references CALENDAR_OBJECT on delete cascade
+);
+
+create index CALENDAR_OBJECT_UPGRADE_WORK_RESOURCE_ID on
+  CALENDAR_OBJECT_UPGRADE_WORK(RESOURCE_ID);
+create index CALENDAR_OBJECT_UPGRADE_WORK_JOB_ID on
+  CALENDAR_OBJECT_UPGRADE_WORK(JOB_ID);
+
+---------------------------
+-- Revision Cleanup Work --
+---------------------------
+
+create table FIND_MIN_VALID_REVISION_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null
+);
+
+create index FIND_MIN_VALID_REVISION_WORK_JOB_ID on
+  FIND_MIN_VALID_REVISION_WORK(JOB_ID);
+
+create table REVISION_CLEANUP_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null
+);
+
+create index REVISION_CLEANUP_WORK_JOB_ID on
+  REVISION_CLEANUP_WORK(JOB_ID);
+
+------------------------
+-- Inbox Cleanup Work --
+------------------------
+
+create table INBOX_CLEANUP_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null
+);
+
+create index INBOX_CLEANUP_WORK_JOB_ID on
+   INBOX_CLEANUP_WORK(JOB_ID);
+
+create table CLEANUP_ONE_INBOX_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null,
+  HOME_ID                       integer      not null unique references CALENDAR_HOME on delete cascade -- implicit index
+);
+
+create index CLEANUP_ONE_INBOX_WORK_JOB_ID on
+  CLEANUP_ONE_INBOX_WORK(JOB_ID);
+
+-------------------
+-- Schedule Work --
+-------------------
+
+create table SCHEDULE_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null,
+  ICALENDAR_UID                 varchar(255) not null,
+  WORK_TYPE                     varchar(255) not null
+);
+
+create index SCHEDULE_WORK_JOB_ID on
+  SCHEDULE_WORK(JOB_ID);
+create index SCHEDULE_WORK_ICALENDAR_UID on
+  SCHEDULE_WORK(ICALENDAR_UID);
+
+---------------------------
+-- Schedule Refresh Work --
+---------------------------
+
+create table SCHEDULE_REFRESH_WORK (
+  WORK_ID                       integer      primary key references SCHEDULE_WORK on delete cascade, -- implicit index
+  HOME_RESOURCE_ID              integer      not null references CALENDAR_HOME on delete cascade,
+  RESOURCE_ID                   integer      not null references CALENDAR_OBJECT on delete cascade,
+  ATTENDEE_COUNT                integer
+);
+
+create index SCHEDULE_REFRESH_WORK_HOME_RESOURCE_ID on
+  SCHEDULE_REFRESH_WORK(HOME_RESOURCE_ID);
+create index SCHEDULE_REFRESH_WORK_RESOURCE_ID on
+  SCHEDULE_REFRESH_WORK(RESOURCE_ID);
+
+create table SCHEDULE_REFRESH_ATTENDEES (
+  RESOURCE_ID                   integer      not null references CALENDAR_OBJECT on delete cascade,
+  ATTENDEE                      varchar(255) not null,
+
+  primary key (RESOURCE_ID, ATTENDEE)
+);
+
+create index SCHEDULE_REFRESH_ATTENDEES_RESOURCE_ID_ATTENDEE on
+  SCHEDULE_REFRESH_ATTENDEES(RESOURCE_ID, ATTENDEE);
+
+------------------------------
+-- Schedule Auto Reply Work --
+------------------------------
+
+create table SCHEDULE_AUTO_REPLY_WORK (
+  WORK_ID                       integer      primary key references SCHEDULE_WORK on delete cascade, -- implicit index
+  HOME_RESOURCE_ID              integer      not null references CALENDAR_HOME on delete cascade,
+  RESOURCE_ID                   integer      not null references CALENDAR_OBJECT on delete cascade,
+  PARTSTAT                      varchar(255) not null
+);
+
+create index SCHEDULE_AUTO_REPLY_WORK_HOME_RESOURCE_ID on
+  SCHEDULE_AUTO_REPLY_WORK(HOME_RESOURCE_ID);
+create index SCHEDULE_AUTO_REPLY_WORK_RESOURCE_ID on
+  SCHEDULE_AUTO_REPLY_WORK(RESOURCE_ID);
+
+-----------------------------
+-- Schedule Organizer Work --
+-----------------------------
+
+create table SCHEDULE_ORGANIZER_WORK (
+  WORK_ID                       integer      primary key references SCHEDULE_WORK on delete cascade, -- implicit index
+  SCHEDULE_ACTION               integer      not null, -- Enum SCHEDULE_ACTION
+  HOME_RESOURCE_ID              integer      not null references CALENDAR_HOME on delete cascade,
+  RESOURCE_ID                   integer,     -- this references a possibly non-existent CALENDAR_OBJECT
+  ICALENDAR_TEXT_OLD            text,
+  ICALENDAR_TEXT_NEW            text,
+  ATTENDEE_COUNT                integer,
+  SMART_MERGE                   boolean
+);
+
+create index SCHEDULE_ORGANIZER_WORK_HOME_RESOURCE_ID on
+  SCHEDULE_ORGANIZER_WORK(HOME_RESOURCE_ID);
+create index SCHEDULE_ORGANIZER_WORK_RESOURCE_ID on
+  SCHEDULE_ORGANIZER_WORK(RESOURCE_ID);
+
+-- Enumeration of schedule actions
+
+create table SCHEDULE_ACTION (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into SCHEDULE_ACTION values (0, 'create');
+insert into SCHEDULE_ACTION values (1, 'modify');
+insert into SCHEDULE_ACTION values (2, 'modify-cancelled');
+insert into SCHEDULE_ACTION values (3, 'remove');
+
+----------------------------------
+-- Schedule Organizer Send Work --
+----------------------------------
+
+create table SCHEDULE_ORGANIZER_SEND_WORK (
+  WORK_ID                       integer      primary key references SCHEDULE_WORK on delete cascade, -- implicit index
+  SCHEDULE_ACTION               integer      not null, -- Enum SCHEDULE_ACTION
+  HOME_RESOURCE_ID              integer      not null references CALENDAR_HOME on delete cascade,
+  RESOURCE_ID                   integer,     -- this references a possibly non-existent CALENDAR_OBJECT
+  ATTENDEE                      varchar(255) not null,
+  ITIP_MSG                      text,
+  NO_REFRESH                    boolean
+);
+
+create index SCHEDULE_ORGANIZER_SEND_WORK_HOME_RESOURCE_ID on
+  SCHEDULE_ORGANIZER_SEND_WORK(HOME_RESOURCE_ID);
+create index SCHEDULE_ORGANIZER_SEND_WORK_RESOURCE_ID on
+  SCHEDULE_ORGANIZER_SEND_WORK(RESOURCE_ID);
+
+-------------------------
+-- Schedule Reply Work --
+-------------------------
+
+create table SCHEDULE_REPLY_WORK (
+  WORK_ID                       integer      primary key references SCHEDULE_WORK on delete cascade, -- implicit index
+  HOME_RESOURCE_ID              integer      not null references CALENDAR_HOME on delete cascade,
+  RESOURCE_ID                   integer,     -- this references a possibly non-existent CALENDAR_OBJECT
+  ITIP_MSG                      text
+);
+
+create index SCHEDULE_REPLY_WORK_HOME_RESOURCE_ID on
+  SCHEDULE_REPLY_WORK(HOME_RESOURCE_ID);
+create index SCHEDULE_REPLY_WORK_RESOURCE_ID on
+  SCHEDULE_REPLY_WORK(RESOURCE_ID);
+
+----------------------------------
+-- Principal Purge Polling Work --
+----------------------------------
+
+create table PRINCIPAL_PURGE_POLLING_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null
+);
+
+create index PRINCIPAL_PURGE_POLLING_WORK_JOB_ID on
+  PRINCIPAL_PURGE_POLLING_WORK(JOB_ID);
+
+--------------------------------
+-- Principal Purge Check Work --
+--------------------------------
+
+create table PRINCIPAL_PURGE_CHECK_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null,
+  UID                           varchar(255) not null
+);
+
+create index PRINCIPAL_PURGE_CHECK_WORK_JOB_ID on
+  PRINCIPAL_PURGE_CHECK_WORK(JOB_ID);
+create index PRINCIPAL_PURGE_CHECK_WORK_UID on
+  PRINCIPAL_PURGE_CHECK_WORK(UID);
+
+--------------------------
+-- Principal Purge Work --
+--------------------------
+
+create table PRINCIPAL_PURGE_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null,
+  UID                           varchar(255) not null
+);
+
+create index PRINCIPAL_PURGE_WORK_JOB_ID on
+  PRINCIPAL_PURGE_WORK(JOB_ID);
+create index PRINCIPAL_PURGE_WORK_UID on
+  PRINCIPAL_PURGE_WORK(UID);
+
+
+--------------------------------
+-- Principal Home Remove Work --
+--------------------------------
+
+create table PRINCIPAL_PURGE_HOME_WORK (
+  WORK_ID                       integer      primary key default nextval('WORKITEM_SEQ'), -- implicit index
+  JOB_ID                        integer      references JOB not null,
+  HOME_RESOURCE_ID              integer      not null references CALENDAR_HOME on delete cascade
+);
+
+create index PRINCIPAL_PURGE_HOME_WORK_JOB_ID on
+  PRINCIPAL_PURGE_HOME_WORK(JOB_ID);
+create index PRINCIPAL_PURGE_HOME_HOME_RESOURCE_ID on
+  PRINCIPAL_PURGE_HOME_WORK(HOME_RESOURCE_ID);
+
+
+--------------------
+-- Schema Version --
+--------------------
+
+create table CALENDARSERVER (
+  NAME                          varchar(255) primary key, -- implicit index
+  VALUE                         varchar(255)
+);
+
+insert into CALENDARSERVER values ('VERSION', '52');
+insert into CALENDARSERVER values ('CALENDAR-DATAVERSION', '6');
+insert into CALENDARSERVER values ('ADDRESSBOOK-DATAVERSION', '2');
+insert into CALENDARSERVER values ('NOTIFICATION-DATAVERSION', '1');
+insert into CALENDARSERVER values ('MIN-VALID-REVISION', '1');
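(Editorial sketch, not part of this changeset: the CALENDARSERVER key/value table above is where the store records its schema version, so a deployment can be checked with a simple lookup. Illustrative only.)

  select VALUE from CALENDARSERVER where NAME = 'VERSION';  -- expected to return '52' for this schema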

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_51_to_52.sql (from rev 14551, CalendarServer/trunk/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_51_to_52.sql)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_51_to_52.sql	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_51_to_52.sql	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,102 @@
+----
+-- Copyright (c) 2012-2015 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+---------------------------------------------------
+-- Upgrade database schema from VERSION 51 to 52 --
+---------------------------------------------------
+
+-- New status values
+insert into HOME_STATUS (DESCRIPTION, ID) values ('migrating', 3);
+insert into HOME_STATUS (DESCRIPTION, ID) values ('disabled', 4);
+
+-- Home constraints
+alter table CALENDAR_HOME
+	drop unique (OWNER_UID);
+alter table CALENDAR_HOME
+	add unique (OWNER_UID, STATUS);
+
+alter table ADDRESSBOOK_HOME
+	drop unique (OWNER_UID);
+alter table ADDRESSBOOK_HOME
+	add unique (OWNER_UID, STATUS);
+
+alter table NOTIFICATION_HOME
+	drop unique (OWNER_UID);
+alter table NOTIFICATION_HOME
+	add unique (OWNER_UID, STATUS);
+
+-- Change columns
+alter table CALENDAR_BIND
+	drop column EXTERNAL_ID
+	add ("BIND_UID" nvarchar2(36) default null);
+
+alter table SHARED_ADDRESSBOOK_BIND
+	drop column EXTERNAL_ID
+	add ("BIND_UID" nvarchar2(36) default null);
+
+alter table SHARED_GROUP_BIND
+	drop column EXTERNAL_ID
+	add ("BIND_UID" nvarchar2(36) default null);
+
+
+-- New table
+create table CALENDAR_MIGRATION (
+    "CALENDAR_HOME_RESOURCE_ID" integer references CALENDAR_HOME on delete cascade,
+    "REMOTE_RESOURCE_ID" integer not null,
+    "LOCAL_RESOURCE_ID" integer references CALENDAR on delete cascade,
+    "LAST_SYNC_TOKEN" nvarchar2(255), 
+    primary key ("CALENDAR_HOME_RESOURCE_ID", "REMOTE_RESOURCE_ID")
+);
+
+create index CALENDAR_MIGRATION_LO_0525c72b on CALENDAR_MIGRATION (
+    LOCAL_RESOURCE_ID
+);
+
+-- New table
+create table CALENDAR_OBJECT_MIGRATION (
+    "CALENDAR_HOME_RESOURCE_ID" integer references CALENDAR_HOME on delete cascade,
+    "REMOTE_RESOURCE_ID" integer not null,
+    "LOCAL_RESOURCE_ID" integer references CALENDAR_OBJECT on delete cascade, 
+    primary key ("CALENDAR_HOME_RESOURCE_ID", "REMOTE_RESOURCE_ID")
+);
+
+create index CALENDAR_OBJECT_MIGRA_0502cbef on CALENDAR_OBJECT_MIGRATION (
+    CALENDAR_HOME_RESOURCE_ID,
+    LOCAL_RESOURCE_ID
+);
+create index CALENDAR_OBJECT_MIGRA_3577efd9 on CALENDAR_OBJECT_MIGRATION (
+    LOCAL_RESOURCE_ID
+);
+
+-- New table
+create table ATTACHMENT_MIGRATION (
+    "CALENDAR_HOME_RESOURCE_ID" integer references CALENDAR_HOME on delete cascade,
+    "REMOTE_RESOURCE_ID" integer not null,
+    "LOCAL_RESOURCE_ID" integer references ATTACHMENT on delete cascade, 
+    primary key ("CALENDAR_HOME_RESOURCE_ID", "REMOTE_RESOURCE_ID")
+);
+
+create index ATTACHMENT_MIGRATION__804bf85e on ATTACHMENT_MIGRATION (
+    CALENDAR_HOME_RESOURCE_ID,
+    LOCAL_RESOURCE_ID
+);
+create index ATTACHMENT_MIGRATION__816947fe on ATTACHMENT_MIGRATION (
+    LOCAL_RESOURCE_ID
+);
+
+
+-- update the version
+update CALENDARSERVER set VALUE = '52' where NAME = 'VERSION';

Added: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_52_to_53.sql
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_52_to_53.sql	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_52_to_53.sql	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,52 @@
+----
+-- Copyright (c) 2012-2015 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+---------------------------------------------------
+-- Upgrade database schema from VERSION 52 to 53 --
+---------------------------------------------------
+
+-- New columns
+alter table CALENDAR_METADATA
+  add ("CHILD_TYPE" integer default 0 not null)
+  add ("TRASHED" timestamp default null)
+  add ("IS_IN_TRASH" integer default 0 not null);
+
+-- Enumeration of child type
+
+create table CHILD_TYPE (
+    "ID" integer primary key,
+    "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CHILD_TYPE (DESCRIPTION, ID) values ('normal', 0);
+insert into CHILD_TYPE (DESCRIPTION, ID) values ('inbox', 1);
+insert into CHILD_TYPE (DESCRIPTION, ID) values ('trash', 2);
+
+
+-- New columns
+alter table CALENDAR_OBJECT
+  add ("TRASHED" timestamp default null)
+  add ("ORIGINAL_COLLECTION" integer default null);
+
+
+-- New columns
+alter table ADDRESSBOOK_OBJECT
+  add ("TRASHED" timestamp default null),
+  add ("IS_IN_TRASH" integer default 0 not null);
+
+
+-- update the version
+update CALENDARSERVER set VALUE = '53' where NAME = 'VERSION';

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_51_to_52.sql (from rev 14551, CalendarServer/trunk/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_51_to_52.sql)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_51_to_52.sql	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_51_to_52.sql	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,97 @@
+----
+-- Copyright (c) 2012-2015 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+---------------------------------------------------
+-- Upgrade database schema from VERSION 51 to 52 --
+---------------------------------------------------
+
+-- New status values
+insert into HOME_STATUS values (3, 'migrating');
+insert into HOME_STATUS values (4, 'disabled');
+
+-- Home constraints
+alter table CALENDAR_HOME
+	drop constraint CALENDAR_HOME_OWNER_UID_KEY,
+	add unique (OWNER_UID, STATUS);
+
+alter table ADDRESSBOOK_HOME
+	drop constraint ADDRESSBOOK_HOME_OWNER_UID_KEY,
+	add unique (OWNER_UID, STATUS);
+
+alter table NOTIFICATION_HOME
+	drop constraint NOTIFICATION_HOME_OWNER_UID_KEY,
+	add unique (OWNER_UID, STATUS);
+
+-- Change columns
+alter table CALENDAR_BIND
+	drop column EXTERNAL_ID,
+	add column BIND_UID varchar(36) default null;
+
+alter table SHARED_ADDRESSBOOK_BIND
+	drop column EXTERNAL_ID,
+	add column BIND_UID varchar(36) default null;
+
+alter table SHARED_GROUP_BIND
+	drop column EXTERNAL_ID,
+	add column BIND_UID varchar(36) default null;
+
+
+-- New table
+create table CALENDAR_MIGRATION (
+  CALENDAR_HOME_RESOURCE_ID		integer references CALENDAR_HOME on delete cascade,
+  REMOTE_RESOURCE_ID			integer not null,
+  LOCAL_RESOURCE_ID				integer	references CALENDAR on delete cascade,
+  LAST_SYNC_TOKEN				varchar(255),
+   
+  primary key (CALENDAR_HOME_RESOURCE_ID, REMOTE_RESOURCE_ID) -- implicit index
+);
+
+create index CALENDAR_MIGRATION_LOCAL_RESOURCE_ID on
+  CALENDAR_MIGRATION(LOCAL_RESOURCE_ID);
+
+  
+-- New table
+create table CALENDAR_OBJECT_MIGRATION (
+  CALENDAR_HOME_RESOURCE_ID		integer references CALENDAR_HOME on delete cascade,
+  REMOTE_RESOURCE_ID			integer not null,
+  LOCAL_RESOURCE_ID				integer	references CALENDAR_OBJECT on delete cascade,
+   
+  primary key (CALENDAR_HOME_RESOURCE_ID, REMOTE_RESOURCE_ID) -- implicit index
+);
+
+create index CALENDAR_OBJECT_MIGRATION_HOME_LOCAL on
+  CALENDAR_OBJECT_MIGRATION(CALENDAR_HOME_RESOURCE_ID, LOCAL_RESOURCE_ID);
+create index CALENDAR_OBJECT_MIGRATION_LOCAL_RESOURCE_ID on
+  CALENDAR_OBJECT_MIGRATION(LOCAL_RESOURCE_ID);
+
+  
+-- New table
+create table ATTACHMENT_MIGRATION (
+  CALENDAR_HOME_RESOURCE_ID		integer references CALENDAR_HOME on delete cascade,
+  REMOTE_RESOURCE_ID			integer not null,
+  LOCAL_RESOURCE_ID				integer	references ATTACHMENT on delete cascade,
+   
+  primary key (CALENDAR_HOME_RESOURCE_ID, REMOTE_RESOURCE_ID) -- implicit index
+);
+
+create index ATTACHMENT_MIGRATION_HOME_LOCAL on
+  ATTACHMENT_MIGRATION(CALENDAR_HOME_RESOURCE_ID, LOCAL_RESOURCE_ID);
+create index ATTACHMENT_MIGRATION_LOCAL_RESOURCE_ID on
+  ATTACHMENT_MIGRATION(LOCAL_RESOURCE_ID);
+
+
+-- update the version
+update CALENDARSERVER set VALUE = '52' where NAME = 'VERSION';
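(Editorial sketch, not part of this changeset: the CALENDAR_MIGRATION table added above maps a remote calendar id to its local counterpart during cross-pod migration, so a lookup during migration could read as follows. The literal ids are hypothetical.)

  select LOCAL_RESOURCE_ID, LAST_SYNC_TOKEN
    from CALENDAR_MIGRATION
   where CALENDAR_HOME_RESOURCE_ID = 42    -- hypothetical sharee home id
     and REMOTE_RESOURCE_ID = 99;          -- hypothetical remote calendar id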

Added: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_52_to_53.sql
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_52_to_53.sql	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_52_to_53.sql	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,53 @@
+----
+-- Copyright (c) 2012-2015 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+---------------------------------------------------
+-- Upgrade database schema from VERSION 52 to 53 --
+---------------------------------------------------
+
+-- New columns
+alter table CALENDAR_METADATA
+  add column CHILD_TYPE     integer      default 0 not null,
+  add column TRASHED        timestamp    default null,
+  add column IS_IN_TRASH    boolean      default false not null;
+
+-- Enumeration of child type
+
+create table CHILD_TYPE (
+  ID          integer     primary key,
+  DESCRIPTION varchar(16) not null unique
+);
+
+insert into CHILD_TYPE values (0, 'normal');
+insert into CHILD_TYPE values (1, 'inbox');
+insert into CHILD_TYPE values (2, 'trash');
+
+
+-- New columns
+alter table CALENDAR_OBJECT
+  add column TRASHED              timestamp    default null,
+  add column ORIGINAL_COLLECTION  integer      default null;
+
+
+-- New columns
+alter table ADDRESSBOOK_OBJECT
+  add column TRASHED       timestamp    default null,
+  add column IS_IN_TRASH   boolean      default false not null;
+
+
+
+-- update the version
+update CALENDARSERVER set VALUE = '53' where NAME = 'VERSION';
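(Editorial sketch, not part of this changeset: the TRASHED and ORIGINAL_COLLECTION columns added above are what allow trashed events to be listed and restored to their original calendar; a listing query might look like the following. Illustrative only.)

  select RESOURCE_ID, ORIGINAL_COLLECTION, TRASHED
    from CALENDAR_OBJECT
   where TRASHED is not null
   order by TRASHED desc;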

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_sharing.py (from rev 14551, CalendarServer/trunk/txdav/common/datastore/sql_sharing.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_sharing.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_sharing.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,1472 @@
+# -*- test-case-name: twext.enterprise.dal.test.test_record -*-
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from collections import namedtuple
+from pycalendar.datetime import DateTime
+
+from twext.enterprise.dal.syntax import Insert, Parameter, Update, Delete, \
+    Select, Max
+from twext.python.clsprop import classproperty
+from twext.python.log import Logger
+
+from twisted.internet.defer import inlineCallbacks, returnValue, succeed
+
+from txdav.base.propertystore.base import PropertyName
+from txdav.common.datastore.sql_tables import _BIND_MODE_OWN, _BIND_MODE_DIRECT, \
+    _BIND_MODE_INDIRECT, _BIND_STATUS_ACCEPTED, _BIND_STATUS_DECLINED, \
+    _BIND_STATUS_INVITED, _BIND_STATUS_INVALID, _BIND_STATUS_DELETED, \
+    _HOME_STATUS_EXTERNAL
+from txdav.common.icommondatastore import ExternalShareFailed, \
+    HomeChildNameAlreadyExistsError, AllRetriesFailed
+from txdav.xml import element
+
+from uuid import uuid4
+
+
+log = Logger()
+
+"""
+Classes and methods that relate to sharing in the SQL store.
+"""
+
+class SharingHomeMixIn(object):
+    """
+    Common class for CommonHome to implement sharing operations
+    """
+
+    @inlineCallbacks
+    def acceptShare(self, shareUID, summary=None):
+        """
+        This share is being accepted.
+        """
+
+        shareeView = yield self.anyObjectWithShareUID(shareUID)
+        if shareeView is not None:
+            yield shareeView.acceptShare(summary)
+
+        returnValue(shareeView)
+
+
+    @inlineCallbacks
+    def declineShare(self, shareUID):
+        """
+        This share is being declined.
+        """
+
+        shareeView = yield self.anyObjectWithShareUID(shareUID)
+        if shareeView is not None:
+            yield shareeView.declineShare()
+
+        returnValue(shareeView is not None)
+
+
+    #
+    # External (cross-pod) sharing - entry point is the sharee's home collection.
+    #
+    @inlineCallbacks
+    def processExternalInvite(
+        self, ownerUID, ownerName, shareUID, bindMode, bindUID, summary,
+        copy_invite_properties, supported_components=None
+    ):
+        """
+        External invite received.
+        """
+
+        # Get the owner home - create external one if not present
+        ownerHome = yield self._txn.homeWithUID(
+            self._homeType, ownerUID, status=_HOME_STATUS_EXTERNAL, create=True
+        )
+        if ownerHome is None or not ownerHome.external():
+            raise ExternalShareFailed("Invalid owner UID: {}".format(ownerUID))
+
+        # Try to find owner calendar via its external id
+        ownerView = yield ownerHome.childWithBindUID(bindUID)
+        if ownerView is None:
+            ownerView = yield ownerHome.createCollectionForExternalShare(ownerName, bindUID, supported_components)
+
+        # Now carry out the share operation
+        if bindMode == _BIND_MODE_DIRECT:
+            shareeView = yield ownerView.directShareWithUser(
+                self.uid(), shareName=shareUID
+            )
+        else:
+            shareeView = yield ownerView.inviteUIDToShare(
+                self.uid(), bindMode, summary, shareName=shareUID
+            )
+
+        shareeView.setInviteCopyProperties(copy_invite_properties)
+
+
+    @inlineCallbacks
+    def processExternalUninvite(self, ownerUID, bindUID, shareUID):
+        """
+        External uninvite received.
+        """
+
+        # Get the owner home
+        ownerHome = yield self._txn.homeWithUID(self._homeType, ownerUID, status=_HOME_STATUS_EXTERNAL)
+        if ownerHome is None or not ownerHome.external():
+            raise ExternalShareFailed("Invalid owner UID: {}".format(ownerUID))
+
+        # Try to find owner calendar via its external id
+        ownerView = yield ownerHome.childWithBindUID(bindUID)
+        if ownerView is None:
+            raise ExternalShareFailed("Invalid share ID: {}".format(shareUID))
+
+        # Now carry out the share operation
+        yield ownerView.uninviteUIDFromShare(self.uid())
+
+        # See if there are any references to the external share. If not,
+        # remove it
+        invites = yield ownerView.sharingInvites()
+        if len(invites) == 0:
+            yield ownerHome.removeExternalChild(ownerView)
+
+
+    @inlineCallbacks
+    def processExternalReply(
+        self, ownerUID, shareeUID, shareUID, bindStatus, summary=None
+    ):
+        """
+        External reply received.
+        """
+
+        # Make sure the shareeUID and shareUID match
+
+        # Get the owner home - create external one if not present
+        shareeHome = yield self._txn.homeWithUID(self._homeType, shareeUID, status=_HOME_STATUS_EXTERNAL)
+        if shareeHome is None or not shareeHome.external():
+            raise ExternalShareFailed(
+                "Invalid sharee UID: {}".format(shareeUID)
+            )
+
+        # Try to find owner calendar via its external id
+        shareeView = yield shareeHome.anyObjectWithShareUID(shareUID)
+        if shareeView is None:
+            raise ExternalShareFailed("Invalid share UID: {}".format(shareUID))
+
+        # Now carry out the share operation
+        if bindStatus == _BIND_STATUS_ACCEPTED:
+            yield shareeHome.acceptShare(shareUID, summary)
+        elif bindStatus == _BIND_STATUS_DECLINED:
+            if shareeView.direct():
+                yield shareeView.deleteShare()
+            else:
+                yield shareeHome.declineShare(shareUID)
+
+
+    @inlineCallbacks
+    def createCollectionForExternalShare(self, name, bindUID, supported_components):
+        """
+        Create the L{CommonHomeChild} object that is used as a "stub" to represent the
+        external object on the other pod for the sharer.
+
+        @param name: name of the collection
+        @type name: L{str}
+        @param bindUID: id on other pod
+        @type bindUID: L{str}
+        @param supported_components: optional set of supported components
+        @type supported_components: L{str}
+        """
+        try:
+            ownerView = yield self.createChildWithName(
+                name, bindUID=bindUID
+            )
+        except HomeChildNameAlreadyExistsError:
+            # This is odd - it means we possibly have a left over sharer
+            # collection which the sharer likely removed and re-created
+            # with the same name but now it has a different bindUID and
+            # is not found by the initial query. What we do is check to see
+            # whether any shares still reference the old ID - if they do we
+            # are hosed. If not, we can remove the old item and create a new one.
+            oldOwnerView = yield self.childWithName(name)
+            invites = yield oldOwnerView.sharingInvites()
+            if len(invites) != 0:
+                log.error(
+                    "External invite collection name is present with a "
+                    "different bindUID and still has shares"
+                )
+                raise
+            log.error(
+                "External invite collection name is present with a "
+                "different bindUID - trying to fix"
+            )
+            yield self.removeExternalChild(oldOwnerView)
+            ownerView = yield self.createChildWithName(
+                name, bindUID=bindUID
+            )
+
+        if (
+            supported_components is not None and
+            hasattr(ownerView, "setSupportedComponents")
+        ):
+            yield ownerView.setSupportedComponents(supported_components)
+
+        returnValue(ownerView)
+
+
+    @inlineCallbacks
+    def sharedToBindRecords(self):
+        """
+        Return a L{dict} that maps home/directory uid to a sharing bind record for collections shared to this user.
+        """
+
+        # Get shared to bind records
+        records = yield self._childClass._bindRecordClass.query(
+            self._txn,
+            (getattr(self._childClass._bindRecordClass, self._childClass._bindHomeIDAttributeName) == self.id()).And(
+                self._childClass._bindRecordClass.bindMode != _BIND_MODE_OWN
+            )
+        )
+        records = dict([(getattr(record, self._childClass._bindResourceIDAttributeName), record) for record in records])
+        if not records:
+            returnValue({})
+
+        # Look up the owner records for each of the shared to records
+        ownerRecords = yield self._childClass._bindRecordClass.query(
+            self._txn,
+            (getattr(self._childClass._bindRecordClass, self._childClass._bindResourceIDAttributeName).In(records.keys())).And(
+                self._childClass._bindRecordClass.bindMode == _BIND_MODE_OWN
+            )
+        )
+
+        # Important - this method is called when migrating shared-to records to some other pod. For that to work, all the
+        # owner records must have a bindUID assigned to them. Normally bindUIDs are assigned the first time an external
+        # share is created, but migration will implicitly create the external share.
+        for ownerRecord in ownerRecords:
+            if not ownerRecord.bindUID:
+                yield ownerRecord.update(bindUID=str(uuid4()))
+
+        ownerRecords = dict([(getattr(record, self._childClass._bindResourceIDAttributeName), record) for record in ownerRecords])
+
+        # Look up the metadata records for each of the shared to records
+        metadataRecords = yield self._childClass._metadataRecordClass.query(
+            self._txn,
+            self._childClass._metadataRecordClass.resourceID.In(records.keys()),
+        )
+        metadataRecords = dict([(record.resourceID, record) for record in metadataRecords])
+
+        # Map the owner records to home ownerUIDs
+        homeIDs = dict([(
+            getattr(record, self._childClass._bindHomeIDAttributeName), getattr(record, self._childClass._bindResourceIDAttributeName)
+        ) for record in ownerRecords.values()])
+        homes = yield self._childClass._homeRecordClass.query(
+            self._txn,
+            self._childClass._homeRecordClass.resourceID.In(homeIDs.keys()),
+        )
+        homeMap = dict((homeIDs[home.resourceID], home.ownerUID,) for home in homes)
+
+        returnValue(dict([(homeMap[calendarID], (records[calendarID], ownerRecords[calendarID], metadataRecords[calendarID],),) for calendarID in records]))
+
+
+
+SharingInvitation = namedtuple(
+    "SharingInvitation",
+    ["uid", "ownerUID", "ownerHomeID", "shareeUID", "shareeHomeID", "mode", "status", "summary"]
+)
+
+
+
+class SharingMixIn(object):
+    """
+    Common class for CommonHomeChild and AddressBookObject
+    """
+
+    @classproperty
+    def _bindInsertQuery(cls, **kw):
+        """
+        DAL statement to create a bind entry that connects a collection to its
+        home.
+        """
+        bind = cls._bindSchema
+        return Insert({
+            bind.HOME_RESOURCE_ID: Parameter("homeID"),
+            bind.RESOURCE_ID: Parameter("resourceID"),
+            bind.RESOURCE_NAME: Parameter("name"),
+            bind.BIND_MODE: Parameter("mode"),
+            bind.BIND_STATUS: Parameter("bindStatus"),
+            bind.BIND_UID: Parameter("bindUID"),
+            bind.MESSAGE: Parameter("message"),
+        })
+
+
+    @classmethod
+    def _updateBindColumnsQuery(cls, columnMap):
+        bind = cls._bindSchema
+        return Update(
+            columnMap,
+            Where=(bind.RESOURCE_ID == Parameter("resourceID")).And(
+                bind.HOME_RESOURCE_ID == Parameter("homeID")),
+        )
+
+
+    @classproperty
+    def _deleteBindForResourceIDAndHomeID(cls):
+        bind = cls._bindSchema
+        return Delete(
+            From=bind,
+            Where=(bind.RESOURCE_ID == Parameter("resourceID")).And(
+                bind.HOME_RESOURCE_ID == Parameter("homeID")),
+        )
+
+
+    @classmethod
+    def _bindFor(cls, condition):
+        bind = cls._bindSchema
+        columns = cls.bindColumns() + cls.additionalBindColumns()
+        return Select(
+            columns,
+            From=bind,
+            Where=condition
+        )
+
+
+    @classmethod
+    def _bindInviteFor(cls, condition):
+        home = cls._homeSchema
+        bind = cls._bindSchema
+        return Select(
+            [
+                home.OWNER_UID,
+                bind.HOME_RESOURCE_ID,
+                bind.RESOURCE_ID,
+                bind.RESOURCE_NAME,
+                bind.BIND_MODE,
+                bind.BIND_STATUS,
+                bind.MESSAGE,
+            ],
+            From=bind.join(home, on=(bind.HOME_RESOURCE_ID == home.RESOURCE_ID)),
+            Where=condition
+        )
+
+
+    @classproperty
+    def _sharedInvitationBindForResourceID(cls):
+        bind = cls._bindSchema
+        return cls._bindInviteFor(
+            (bind.RESOURCE_ID == Parameter("resourceID")).And
+            (bind.BIND_MODE != _BIND_MODE_OWN)
+        )
+
+
+    @classproperty
+    def _acceptedBindForHomeID(cls):
+        bind = cls._bindSchema
+        return cls._bindFor((bind.HOME_RESOURCE_ID == Parameter("homeID"))
+                            .And(bind.BIND_STATUS == _BIND_STATUS_ACCEPTED))
+
+
+    @classproperty
+    def _bindForResourceIDAndHomeID(cls):
+        """
+        DAL query that looks up home bind rows by home child
+        resource ID and home resource ID.
+        """
+        bind = cls._bindSchema
+        return cls._bindFor((bind.RESOURCE_ID == Parameter("resourceID"))
+                            .And(bind.HOME_RESOURCE_ID == Parameter("homeID")))
+
+
+    @classproperty
+    def _bindForBindUIDAndHomeID(cls):
+        """
+        DAL query that looks up home bind rows by home child
+        resource ID and home resource ID.
+        """
+        bind = cls._bindSchema
+        return cls._bindFor((bind.BIND_UID == Parameter("bindUID"))
+                            .And(bind.HOME_RESOURCE_ID == Parameter("homeID")))
+
+
+    @classproperty
+    def _bindForNameAndHomeID(cls):
+        """
+        DAL query that looks up any bind rows by home child
+        resource ID and home resource ID.
+        """
+        bind = cls._bindSchema
+        return cls._bindFor((bind.RESOURCE_NAME == Parameter("name"))
+                            .And(bind.HOME_RESOURCE_ID == Parameter("homeID")))
+
+
+    #
+    # Higher level API
+    #
+    @inlineCallbacks
+    def inviteUIDToShare(self, shareeUID, mode, summary=None, shareName=None):
+        """
+        Invite a user to share this collection - either create the share if it does not exist, or
+        update the existing share with new values. Make sure a notification is sent as well.
+
+        @param shareeUID: UID of the sharee
+        @type shareeUID: C{str}
+        @param mode: access mode
+        @type mode: C{int}
+        @param summary: share message
+        @type summary: C{str}
+        """
+
+        # Look for existing invite and update its fields or create new one
+        shareeView = yield self.shareeView(shareeUID)
+        if shareeView is not None:
+            status = _BIND_STATUS_INVITED if shareeView.shareStatus() in (_BIND_STATUS_DECLINED, _BIND_STATUS_INVALID) else None
+            yield self.updateShare(shareeView, mode=mode, status=status, summary=summary)
+        else:
+            shareeView = yield self.createShare(shareeUID=shareeUID, mode=mode, summary=summary, shareName=shareName)
+
+        # Check for external
+        if shareeView.viewerHome().external():
+            yield self._sendExternalInvite(shareeView)
+        else:
+            # Send invite notification
+            yield self._sendInviteNotification(shareeView)
+        returnValue(shareeView)
+
+
+    @inlineCallbacks
+    def directShareWithUser(self, shareeUID, shareName=None):
+        """
+        Create a direct share with the specified user. Note it is currently up to the app layer
+        to enforce access control - this is not ideal as we really should have control of that in
+        the store. Once we do, this API will need to verify that access is allowed for a direct share.
+
+        NB no invitations are used with direct sharing.
+
+        @param shareeUID: UID of the sharee
+        @type shareeUID: C{str}
+        """
+
+        # Ignore if it already exists
+        shareeView = yield self.shareeView(shareeUID)
+        if shareeView is None:
+            shareeView = yield self.createShare(shareeUID=shareeUID, mode=_BIND_MODE_DIRECT, shareName=shareName)
+            yield shareeView.newShare()
+
+            # Check for external
+            if shareeView.viewerHome().external():
+                yield self._sendExternalInvite(shareeView)
+
+        returnValue(shareeView)
+
+
+    @inlineCallbacks
+    def uninviteUIDFromShare(self, shareeUID):
+        """
+        Remove a user from a share. Make sure a notification is sent as well.
+
+        @param shareeUID: UID of the sharee
+        @type shareeUID: C{str}
+        """
+        # Cancel invites - we'll just use whatever userid we are given
+
+        shareeView = yield self.shareeView(shareeUID)
+        if shareeView is not None:
+            if shareeView.viewerHome().external():
+                yield self._sendExternalUninvite(shareeView)
+            else:
+                # If current user state is accepted then we send an invite with the new state, otherwise
+                # we cancel any existing invites for the user
+                if not shareeView.direct():
+                    if shareeView.shareStatus() != _BIND_STATUS_ACCEPTED:
+                        yield self._removeInviteNotification(shareeView)
+                    else:
+                        yield self._sendInviteNotification(shareeView, notificationState=_BIND_STATUS_DELETED)
+
+            # Remove the bind
+            yield self.removeShare(shareeView)
+
+
+    @inlineCallbacks
+    def acceptShare(self, summary=None):
+        """
+        This share is being accepted.
+        """
+
+        if not self.direct() and self.shareStatus() != _BIND_STATUS_ACCEPTED:
+            if self.external():
+                yield self._replyExternalInvite(_BIND_STATUS_ACCEPTED, summary)
+            ownerView = yield self.ownerView()
+            yield ownerView.updateShare(self, status=_BIND_STATUS_ACCEPTED)
+            yield self.newShare(displayname=summary)
+            if not ownerView.external():
+                yield self._sendReplyNotification(ownerView, summary)
+
+
+    @inlineCallbacks
+    def declineShare(self):
+        """
+        This share is being declined.
+        """
+
+        if not self.direct() and self.shareStatus() != _BIND_STATUS_DECLINED:
+            if self.external():
+                yield self._replyExternalInvite(_BIND_STATUS_DECLINED)
+            ownerView = yield self.ownerView()
+            yield ownerView.updateShare(self, status=_BIND_STATUS_DECLINED)
+            if not ownerView.external():
+                yield self._sendReplyNotification(ownerView)
+
+
+    @inlineCallbacks
+    def deleteShare(self):
+        """
+        This share is being deleted (by the sharee) - either decline or remove (for direct shares).
+        """
+
+        ownerView = yield self.ownerView()
+        if self.direct():
+            yield ownerView.removeShare(self)
+            if ownerView.external():
+                yield self._replyExternalInvite(_BIND_STATUS_DECLINED)
+        else:
+            yield self.declineShare()
+
+
+    @inlineCallbacks
+    def ownerDeleteShare(self):
+        """
+        This share is being deleted (by the owner) - either decline or remove (for direct shares).
+        """
+
+        # Change status on store object
+        yield self.setShared(False)
+
+        # Remove all sharees (direct and invited)
+        for invitation in (yield self.sharingInvites()):
+            yield self.uninviteUIDFromShare(invitation.shareeUID)
+
+
+    def newShare(self, displayname=None):
+        """
+        Override in derived classes to do any specific operations needed when a share
+        is first accepted.
+        """
+        return succeed(None)
+
+
+    @inlineCallbacks
+    def allInvitations(self):
+        """
+        Get list of all invitations (non-direct) to this object.
+        """
+        invitations = yield self.sharingInvites()
+
+        # remove direct shares as those are not "real" invitations
+        invitations = filter(lambda x: x.mode != _BIND_MODE_DIRECT, invitations)
+        invitations.sort(key=lambda invitation: invitation.shareeUID)
+        returnValue(invitations)
+
+
+    @inlineCallbacks
+    def _sendInviteNotification(self, shareeView, notificationState=None):
+        """
+        Called on the owner's resource.
+        """
+        # When deleting the message is the sharee's display name
+        displayname = shareeView.shareMessage()
+        if notificationState == _BIND_STATUS_DELETED:
+            displayname = str(shareeView.properties().get(PropertyName.fromElement(element.DisplayName), displayname))
+
+        notificationtype = {
+            "notification-type": "invite-notification",
+            "shared-type": shareeView.sharedResourceType(),
+        }
+        notificationdata = {
+            "notification-type": "invite-notification",
+            "shared-type": shareeView.sharedResourceType(),
+            "dtstamp": DateTime.getNowUTC().getText(),
+            "owner": shareeView.ownerHome().uid(),
+            "sharee": shareeView.viewerHome().uid(),
+            "uid": shareeView.shareUID(),
+            "status": shareeView.shareStatus() if notificationState is None else notificationState,
+            "access": (yield shareeView.effectiveShareMode()),
+            "ownerName": self.shareName(),
+            "summary": displayname,
+        }
+        if hasattr(self, "getSupportedComponents"):
+            notificationdata["supported-components"] = self.getSupportedComponents()
+
+        # Add to sharee's collection
+        notifications = yield self._txn.notificationsWithUID(shareeView.viewerHome().uid(), create=True)
+        yield notifications.writeNotificationObject(shareeView.shareUID(), notificationtype, notificationdata)
+
+
+    @inlineCallbacks
+    def _sendReplyNotification(self, ownerView, summary=None):
+        """
+        Create a reply notification based on the current state of this shared resource.
+        """
+
+        # Generate invite XML
+        notificationUID = "%s-reply" % (self.shareUID(),)
+
+        notificationtype = {
+            "notification-type": "invite-reply",
+            "shared-type": self.sharedResourceType(),
+        }
+
+        notificationdata = {
+            "notification-type": "invite-reply",
+            "shared-type": self.sharedResourceType(),
+            "dtstamp": DateTime.getNowUTC().getText(),
+            "owner": self.ownerHome().uid(),
+            "sharee": self.viewerHome().uid(),
+            "status": self.shareStatus(),
+            "ownerName": ownerView.shareName(),
+            "in-reply-to": self.shareUID(),
+            "summary": summary,
+        }
+
+        # Add to owner notification collection
+        notifications = yield self._txn.notificationsWithUID(self.ownerHome().uid(), create=True)
+        yield notifications.writeNotificationObject(notificationUID, notificationtype, notificationdata)
+
+
+    @inlineCallbacks
+    def _removeInviteNotification(self, shareeView):
+        """
+        Called on the owner's resource.
+        """
+
+        # Remove from sharee's collection
+        notifications = yield self._txn.notificationsWithUID(shareeView.viewerHome().uid())
+        yield notifications.removeNotificationObjectWithUID(shareeView.shareUID())
+
+
+    #
+    # External/cross-pod API
+    #
+    @inlineCallbacks
+    def _sendExternalInvite(self, shareeView):
+
+        # Must make sure this collection has a BIND_UID assigned
+        if not self._bindUID:
+            self._bindUID = str(uuid4())
+            yield self._updateBindColumnsQuery({self._bindSchema.BIND_UID: self._bindUID}).on(
+                self._txn,
+                resourceID=self.id(), homeID=self.ownerHome().id()
+            )
+
+        # Now send the invite
+        yield self._txn.store().conduit.send_shareinvite(
+            self._txn,
+            shareeView.ownerHome()._homeType,
+            shareeView.ownerHome().uid(),
+            self.shareName(),
+            shareeView.viewerHome().uid(),
+            shareeView.shareUID(),
+            shareeView.shareMode(),
+            self.bindUID(),
+            shareeView.shareMessage(),
+            self.getInviteCopyProperties(),
+            supported_components=self.getSupportedComponents() if hasattr(self, "getSupportedComponents") else None,
+        )
+
+
+    @inlineCallbacks
+    def _sendExternalUninvite(self, shareeView):
+
+        yield self._txn.store().conduit.send_shareuninvite(
+            self._txn,
+            shareeView.ownerHome()._homeType,
+            shareeView.ownerHome().uid(),
+            self.bindUID(),
+            shareeView.viewerHome().uid(),
+            shareeView.shareUID(),
+        )
+
+
+    @inlineCallbacks
+    def _replyExternalInvite(self, status, summary=None):
+
+        yield self._txn.store().conduit.send_sharereply(
+            self._txn,
+            self.viewerHome()._homeType,
+            self.ownerHome().uid(),
+            self.viewerHome().uid(),
+            self.shareUID(),
+            status,
+            summary,
+        )
+
+
+    #
+    # Lower level API
+    #
+    @inlineCallbacks
+    def ownerView(self):
+        """
+        Return the owner resource counterpart of this shared resource.
+
+        Note we have to play a trick with the property store to coerce it to match
+        the per-user properties for the owner.
+        """
+        # Get the child of the owner home that has the same resource id as the owned one
+        ownerView = yield self.ownerHome().childWithID(self.id())
+        returnValue(ownerView)
+
+
+    @inlineCallbacks
+    def shareeView(self, shareeUID):
+        """
+        Return the shared resource counterpart of this owned resource for the specified sharee.
+
+        Note we have to play a trick with the property store to coerce it to match
+        the per-user properties for the sharee.
+        """
+
+        # Never return the owner's own resource
+        if self._home.uid() == shareeUID:
+            returnValue(None)
+
+        # Get the child of the sharee home that has the same resource id as the owned one
+        shareeHome = yield self._txn.homeWithUID(self._home._homeType, shareeUID, authzUID=shareeUID)
+        shareeView = (yield shareeHome.allChildWithID(self.id())) if shareeHome is not None else None
+        returnValue(shareeView)
+
+
+    @inlineCallbacks
+    def shareWithUID(self, shareeUID, mode, status=None, summary=None, shareName=None):
+        """
+        Share this (owned) L{CommonHomeChild} with another principal.
+
+        @param shareeUID: The UID of the sharee.
+        @type: L{str}
+
+        @param mode: The sharing mode; L{_BIND_MODE_READ} or
+            L{_BIND_MODE_WRITE} or L{_BIND_MODE_DIRECT}
+        @type mode: L{str}
+
+        @param status: The sharing status; L{_BIND_STATUS_INVITED} or
+            L{_BIND_STATUS_ACCEPTED}
+        @type: L{str}
+
+        @param summary: The proposed message to go along with the share, which
+            will be used as the default display name.
+        @type: L{str}
+
+        @return: the name of the shared calendar in the new calendar home.
+        @rtype: L{str}
+        """
+        shareeHome = yield self._txn.homeWithUID(self._home._homeType, shareeUID, create=True)
+        returnValue(
+            (yield self.shareWith(shareeHome, mode, status, summary, shareName))
+        )
+
+
+    @inlineCallbacks
+    def shareWith(self, shareeHome, mode, status=None, summary=None, shareName=None):
+        """
+        Share this (owned) L{CommonHomeChild} with another home.
+
+        @param shareeHome: The home of the sharee.
+        @type: L{CommonHome}
+
+        @param mode: The sharing mode; L{_BIND_MODE_READ} or
+            L{_BIND_MODE_WRITE} or L{_BIND_MODE_DIRECT}
+        @type: L{str}
+
+        @param status: The sharing status; L{_BIND_STATUS_INVITED} or
+            L{_BIND_STATUS_ACCEPTED}
+        @type: L{str}
+
+        @param summary: The proposed message to go along with the share, which
+            will be used as the default display name.
+        @type: L{str}
+
+        @param shareName: The proposed name of the new share.
+        @type: L{str}
+
+        @return: the name of the shared calendar in the new calendar home.
+        @rtype: L{str}
+        """
+
+        if status is None:
+            status = _BIND_STATUS_ACCEPTED
+
+        @inlineCallbacks
+        def doInsert(subt):
+            newName = shareName if shareName is not None else self.newShareName()
+            yield self._bindInsertQuery.on(
+                subt,
+                homeID=shareeHome._resourceID,
+                resourceID=self._resourceID,
+                name=newName,
+                mode=mode,
+                bindStatus=status,
+                bindUID=None,
+                message=summary
+            )
+            returnValue(newName)
+        try:
+            bindName = yield self._txn.subtransaction(doInsert)
+        except AllRetriesFailed:
+            # FIXME: catch more specific exception
+            child = yield shareeHome.allChildWithID(self._resourceID)
+            yield self.updateShare(
+                child, mode=mode, status=status,
+                summary=summary
+            )
+            bindName = child._name
+        else:
+            if status == _BIND_STATUS_ACCEPTED:
+                shareeView = yield shareeHome.anyObjectWithShareUID(bindName)
+                yield shareeView._initSyncToken()
+                yield shareeView._initBindRevision()
+
+        # Mark this as shared
+        yield self.setShared(True)
+
+        # Must send notification to ensure cache invalidation occurs
+        yield self.notifyPropertyChanged()
+        yield shareeHome.notifyChanged()
+
+        returnValue(bindName)
+
+
+    @inlineCallbacks
+    def createShare(self, shareeUID, mode, summary=None, shareName=None):
+        """
+        Create a new shared resource. If the mode is direct, the share is created in accepted state,
+        otherwise the share is created in invited state.
+        """
+        shareeHome = yield self._txn.homeWithUID(self.ownerHome()._homeType, shareeUID, create=True)
+
+        yield self.shareWith(
+            shareeHome,
+            mode=mode,
+            status=_BIND_STATUS_INVITED if mode != _BIND_MODE_DIRECT else _BIND_STATUS_ACCEPTED,
+            summary=summary,
+            shareName=shareName,
+        )
+        shareeView = yield self.shareeView(shareeUID)
+        returnValue(shareeView)
+
+
+    @inlineCallbacks
+    def updateShare(self, shareeView, mode=None, status=None, summary=None):
+        """
+        Update share mode, status, and message for a home child shared with
+        this (owned) L{CommonHomeChild}.
+
+        @param shareeView: The sharee home child that shares this.
+        @type shareeView: L{CommonHomeChild}
+
+        @param mode: The sharing mode; L{_BIND_MODE_READ} or
+            L{_BIND_MODE_WRITE} or None to not update
+        @type mode: L{str}
+
+        @param status: The sharing status; L{_BIND_STATUS_INVITED} or
+            L{_BIND_STATUS_ACCEPTED} or L{_BIND_STATUS_DECLINED} or
+            L{_BIND_STATUS_INVALID}  or None to not update
+        @type status: L{str}
+
+        @param summary: The proposed message to go along with the share, which
+            will be used as the default display name, or None to not update
+        @type summary: L{str}
+        """
+        # TODO: raise a nice exception if shareeView is not, in fact, a shared
+        # version of this same L{CommonHomeChild}
+
+        # remove None parameters, and substitute None for empty string
+        bind = self._bindSchema
+        columnMap = {}
+        if mode != None and mode != shareeView._bindMode:
+            columnMap[bind.BIND_MODE] = mode
+        if status != None and status != shareeView._bindStatus:
+            columnMap[bind.BIND_STATUS] = status
+        if summary != None and summary != shareeView._bindMessage:
+            columnMap[bind.MESSAGE] = summary
+
+        if columnMap:
+
+            # Count accepted
+            if bind.BIND_STATUS in columnMap:
+                previouslyAcceptedCount = yield shareeView._previousAcceptCount()
+
+            yield self._updateBindColumnsQuery(columnMap).on(
+                self._txn,
+                resourceID=self._resourceID, homeID=shareeView._home._resourceID
+            )
+
+            # Update affected attributes
+            if bind.BIND_MODE in columnMap:
+                shareeView._bindMode = columnMap[bind.BIND_MODE]
+
+            if bind.BIND_STATUS in columnMap:
+                shareeView._bindStatus = columnMap[bind.BIND_STATUS]
+                yield shareeView._changedStatus(previouslyAcceptedCount)
+
+            if bind.MESSAGE in columnMap:
+                shareeView._bindMessage = columnMap[bind.MESSAGE]
+
+            yield shareeView.invalidateQueryCache()
+
+            # Must send notification to ensure cache invalidation occurs
+            yield self.notifyPropertyChanged()
+            yield shareeView.viewerHome().notifyChanged()
+
+
+    def _previousAcceptCount(self):
+        return succeed(1)
+
+
+    @inlineCallbacks
+    def _changedStatus(self, previouslyAcceptedCount):
+        key = self._home._childrenKey(self.isInTrash())
+        if self._bindStatus == _BIND_STATUS_ACCEPTED:
+            yield self._initSyncToken()
+            yield self._initBindRevision()
+            self._home._children[key][self.name()] = self
+            self._home._children[key][self.id()] = self
+        elif self._bindStatus in (_BIND_STATUS_INVITED, _BIND_STATUS_DECLINED):
+            yield self._deletedSyncToken(sharedRemoval=True)
+            self._home._children[key].pop(self.name(), None)
+            self._home._children[key].pop(self.id(), None)
+
+
+    @inlineCallbacks
+    def removeShare(self, shareeView):
+        """
+        Remove the shared version of this (owned) L{CommonHomeChild} from the
+        referenced L{CommonHome}.
+
+        @see: L{CommonHomeChild.shareWith}
+
+        @param shareeView: The shared resource being removed.
+
+        @return: a L{Deferred} which will fire with the previous shareUID
+        """
+        key = self._home._childrenKey(self.isInTrash())
+
+        # remove sync tokens
+        shareeHome = shareeView.viewerHome()
+        yield shareeView._deletedSyncToken(sharedRemoval=True)
+        shareeHome._children[key].pop(shareeView._name, None)
+        shareeHome._children[key].pop(shareeView._resourceID, None)
+
+        # Must send notification to ensure cache invalidation occurs
+        yield self.notifyPropertyChanged()
+        yield shareeHome.notifyChanged()
+
+        # delete binds including invites
+        yield self._deleteBindForResourceIDAndHomeID.on(
+            self._txn,
+            resourceID=self._resourceID,
+            homeID=shareeHome._resourceID,
+        )
+
+        yield shareeView.invalidateQueryCache()
+
+
+    @inlineCallbacks
+    def unshare(self):
+        """
+        Unshares a collection, regardless of which "direction" it was shared.
+        """
+        if self.owned():
+            # This collection may be shared to others
+            invites = yield self.sharingInvites()
+            for invite in invites:
+                shareeView = yield self.shareeView(invite.shareeUID)
+                yield self.removeShare(shareeView)
+        else:
+            # This collection is shared to me
+            ownerView = yield self.ownerView()
+            yield ownerView.removeShare(self)
+
+
+    @inlineCallbacks
+    def sharingInvites(self):
+        """
+        Retrieve the list of all L{SharingInvitation}s for this L{CommonHomeChild}, irrespective of mode.
+
+        @return: L{SharingInvitation} objects
+        @rtype: a L{Deferred} which fires with a L{list} of L{SharingInvitation}s.
+        """
+        if not self.owned():
+            returnValue([])
+
+        # get all accepted binds
+        invitedRows = yield self._sharedInvitationBindForResourceID.on(
+            self._txn, resourceID=self._resourceID, homeID=self._home._resourceID
+        )
+
+        result = []
+        for homeUID, homeRID, _ignore_resourceID, resourceName, bindMode, bindStatus, bindMessage in invitedRows:
+            invite = SharingInvitation(
+                resourceName,
+                self.ownerHome().name(),
+                self.ownerHome().id(),
+                homeUID,
+                homeRID,
+                bindMode,
+                bindStatus,
+                bindMessage,
+            )
+            result.append(invite)
+        returnValue(result)
+
+
+    @inlineCallbacks
+    def sharingBindRecords(self):
+        """
+        Return a L{dict} that maps home/directory uid to a sharing bind record.
+        """
+        if not self.owned():
+            returnValue({})
+
+        records = yield self._bindRecordClass.querysimple(
+            self._txn,
+            **{self._bindResourceIDAttributeName: self.id()}
+        )
+        homeIDs = [getattr(record, self._bindHomeIDAttributeName) for record in records]
+        homes = yield self._homeRecordClass.query(
+            self._txn,
+            self._homeRecordClass.resourceID.In(homeIDs),
+        )
+        homeMap = dict((home.resourceID, home.ownerUID,) for home in homes)
+
+        returnValue(dict([(homeMap[getattr(record, self._bindHomeIDAttributeName)], record,) for record in records if record.bindMode != _BIND_MODE_OWN]))
+
+
+    def migrateBindRecords(self, bindUID):
+        """
+        The user that owns this collection is being migrated to another pod. We need to switch over
+        the sharing details to point to the new external user.
+        """
+        if self.owned():
+            return self.migrateSharedByRecords(bindUID)
+        else:
+            return self.migrateSharedToRecords()
+
+
+    @inlineCallbacks
+    def migrateSharedByRecords(self, bindUID):
+        """
+        The user that owns this collection is being migrated to another pod. We need to switch over
+        the sharing details to point to the new external user. For sharees hosted on this pod, we
+        update their bind record to point to a new external home/calendar for the sharer. For sharees
+        hosted on other pods, we simply remove their bind entries.
+        """
+
+        # Get the external home and make sure there is a "fake" calendar associated with it
+        home = yield self.externalHome()
+        calendar = yield home.childWithBindUID(bindUID)
+        if calendar is None:
+            calendar = yield home.createCollectionForExternalShare(
+                self.name(),
+                bindUID,
+                self.getSupportedComponents() if hasattr(self, "getSupportedComponents") else None,
+            )
+
+        remaining = False
+        records = yield self._bindRecordClass.querysimple(self._txn, **{self._bindResourceIDAttributeName: self.id()})
+        for record in records:
+            if record.bindMode == _BIND_MODE_OWN:
+                continue
+            shareeHome = yield self._txn.homeWithResourceID(home._homeType, getattr(record, self._bindHomeIDAttributeName))
+            if shareeHome.normal():
+                remaining = True
+                yield record.update(**{
+                    self._bindResourceIDAttributeName: calendar.id(),
+                })
+            else:
+                # It is OK to just delete (as opposed to doing a full "unshare") without adjusting other things
+                # like sync revisions since those would not have been used for an external share anyway. Also,
+                # revisions are tied to the calendar id and the original calendar will be removed after migration
+                # is complete.
+                yield record.delete()
+
+        # If there are no external shares remaining, we can remove the external calendar
+        if not remaining:
+            yield calendar.remove()
+
+
+    @inlineCallbacks
+    def migrateSharedToRecords(self):
+        """
+        The user that owns this collection is being migrated to another pod. We need to switch over
+        the sharing details to point to the new external user.
+        """
+
+        # Update the bind record for this calendar to point to the external home
+        records = yield self._bindRecordClass.querysimple(
+            self._txn,
+            **{
+                self._bindHomeIDAttributeName: self.viewerHome().id(),
+                self._bindResourceIDAttributeName: self.id(),
+            }
+        )
+
+        if len(records) == 1:
+
+            # What we do depends on whether the sharer is local to this pod or not
+            if self.ownerHome().normal():
+                # Get the external home for the sharee
+                home = yield self.externalHome()
+
+                yield records[0].update(**{
+                    self._bindHomeIDAttributeName: home.id(),
+                })
+            else:
+                # It is OK to just delete (as opposed to doing a full "unshare") without adjusting other things
+                # like sync revisions since those would not have been used for an external share anyway. Also,
+                # revisions are tied to the sharee calendar home id and that will be removed after migration
+                # is complete.
+                yield records[0].delete()
+
+                # Clean up external calendar if no sharees left
+                calendar = yield self.ownerView()
+                invites = yield calendar.sharingInvites()
+                if len(invites) == 0:
+                    yield calendar.remove()
+        else:
+            raise AssertionError("We must have a bind record for this calendar.")
+
+
+    def externalHome(self):
+        """
+        Create and return an L{CommonHome} for the user being migrated. Note that when called, the user
+        directory record may still indicate that they are hosted on this pod, so we have to forcibly create
+        a home for the external user.
+        """
+        currentHome = self.viewerHome()
+        return self._txn.homeWithUID(currentHome._homeType, currentHome.uid(), status=_HOME_STATUS_EXTERNAL, create=True)
+
+
+    @inlineCallbacks
+    def _initBindRevision(self):
+        yield self.syncToken() # init self._syncTokenRevision if None
+        self._bindRevision = self._syncTokenRevision
+
+        bind = self._bindSchema
+        yield self._updateBindColumnsQuery(
+            {bind.BIND_REVISION : Parameter("revision"), }
+        ).on(
+            self._txn,
+            revision=self._bindRevision,
+            resourceID=self._resourceID,
+            homeID=self.viewerHome()._resourceID,
+        )
+        yield self.invalidateQueryCache()
+
+
+    def sharedResourceType(self):
+        """
+        The sharing resource type. Needs to be overridden by each type of resource that can be shared.
+
+        @return: an identifier for the type of the share.
+        @rtype: C{str}
+        """
+        return ""
+
+
+    def newShareName(self):
+        """
+        Name used when creating a new share. By default this is a UUID.
+        """
+        return str(uuid4())
+
+
+    def owned(self):
+        """
+        @see: L{ICalendar.owned}
+        """
+        return self._bindMode == _BIND_MODE_OWN
+
+
+    def isShared(self):
+        """
+        For an owned collection indicate whether it is shared.
+
+        @return: C{True} if shared, C{False} otherwise
+        @rtype: C{bool}
+        """
+        return self.owned() and self._bindMessage == "shared"
+
+
+    @inlineCallbacks
+    def setShared(self, shared):
+        """
+        Set an owned collection to shared or unshared state. Technically this is not useful as "shared"
+        really means it has invitees, but the current sharing spec supports a notion of a shared collection
+        that has not yet had invitees added. For the time being we will support that option by using a new
+        MESSAGE value to indicate an owned collection that is "shared".
+
+        @param shared: whether or not the owned collection is "shared"
+        @type shared: C{bool}
+        """
+        assert self.owned(), "Cannot change share mode on a shared collection"
+
+        # Only if change is needed
+        newMessage = "shared" if shared else None
+        if self._bindMessage == newMessage:
+            returnValue(None)
+
+        self._bindMessage = newMessage
+
+        bind = self._bindSchema
+        yield Update(
+            {bind.MESSAGE: self._bindMessage},
+            Where=(bind.RESOURCE_ID == Parameter("resourceID")).And(
+                bind.HOME_RESOURCE_ID == Parameter("homeID")),
+        ).on(self._txn, resourceID=self._resourceID, homeID=self.viewerHome()._resourceID)
+
+        yield self.invalidateQueryCache()
+        yield self.notifyPropertyChanged()
+
+
+    def direct(self):
+        """
+        Is this a "direct" share?
+
+        @return: a boolean indicating whether it's direct.
+        """
+        return self._bindMode == _BIND_MODE_DIRECT
+
+
+    def indirect(self):
+        """
+        Is this an "indirect" share?
+
+        @return: a boolean indicating whether it's indirect.
+        """
+        return self._bindMode == _BIND_MODE_INDIRECT
+
+
+    def shareUID(self):
+        """
+        @see: L{ICalendar.shareUID}
+        """
+        return self.name()
+
+
+    def shareMode(self):
+        """
+        @see: L{ICalendar.shareMode}
+        """
+        return self._bindMode
+
+
+    def _effectiveShareMode(self, bindMode, viewerUID, txn):
+        """
+        Get the effective share mode without a calendar object
+        """
+        return bindMode
+
+
+    def effectiveShareMode(self):
+        """
+        @see: L{ICalendar.shareMode}
+        """
+        return self._bindMode
+
+
+    def shareName(self):
+        """
+        This is a path-like name for the resource within the home being shared. For object resource
+        shares this will be a combination of the L{CommonHomeChild} name and the L{CommonObjectResource}
+        name. Otherwise it is just the L{CommonHomeChild} name. This is needed to expose a value to the
+        app-layer such that it can construct a URI for the actual WebDAV resource being shared.
+        """
+        name = self.name()
+        if self.sharedResourceType() == "group":
+            name = self.parentCollection().name() + "/" + name
+        return name
+
+
+    def shareStatus(self):
+        """
+        @see: L{ICalendar.shareStatus}
+        """
+        return self._bindStatus
+
+
+    def bindUID(self):
+        """
+        @see: L{ICalendar.bindUID}
+        """
+        return self._bindUID
+
+
+    def accepted(self):
+        """
+        @see: L{ICalendar.shareStatus}
+        """
+        return self._bindStatus == _BIND_STATUS_ACCEPTED
+
+
+    def shareMessage(self):
+        """
+        @see: L{ICalendar.shareMessage}
+        """
+        return self._bindMessage
+
+
+    def getInviteCopyProperties(self):
+        """
+        Get a dictionary of property name/values (as strings) for properties that are shadowable and
+        need to be copied to a sharee's collection when an external (cross-pod) share is created.
+        Sub-classes should override to expose the properties they care about.
+        """
+        return {}
+
+
+    def setInviteCopyProperties(self, props):
+        """
+        Copy a set of shadowable properties (as name/value strings) onto this shared resource when
+        a cross-pod invite is processed. Sub-classes should override to expose the properties they
+        care about.
+        """
+        pass
+
+
+    @classmethod
+    def metadataColumns(cls):
+        """
+        Return a list of column names for retrieval of metadata. This allows
+        different child classes to have their own type specific data, but still make use of the
+        common base logic.
+        """
+
+        # Common behavior is to have created and modified
+
+        return (
+            cls._homeChildMetaDataSchema.CREATED,
+            cls._homeChildMetaDataSchema.MODIFIED,
+        )
+
+
+    @classmethod
+    def metadataAttributes(cls):
+        """
+        Return a list of attribute names for retrieval of metadata. This allows
+        different child classes to have their own type specific data, but still make use of the
+        common base logic.
+        """
+
+        # Common behavior is to have created and modified
+
+        return (
+            "_created",
+            "_modified",
+        )
+
+
+    @classmethod
+    def bindColumns(cls):
+        """
+        Return a list of column names for retrieval during creation. This allows
+        different child classes to have their own type specific data, but still make use of the
+        common base logic.
+        """
+
+        return (
+            cls._bindSchema.HOME_RESOURCE_ID,
+            cls._bindSchema.RESOURCE_ID,
+            cls._bindSchema.RESOURCE_NAME,
+            cls._bindSchema.BIND_MODE,
+            cls._bindSchema.BIND_STATUS,
+            cls._bindSchema.BIND_REVISION,
+            cls._bindSchema.BIND_UID,
+            cls._bindSchema.MESSAGE
+        )
+
+
+    @classmethod
+    def bindAttributes(cls):
+        """
+        Return a list of attribute names for retrieval during creation. This allows
+        different child classes to have their own type specific data, but still make use of the
+        common base logic.
+        """
+
+        return (
+            "_homeResourceID",
+            "_resourceID",
+            "_name",
+            "_bindMode",
+            "_bindStatus",
+            "_bindRevision",
+            "_bindUID",
+            "_bindMessage",
+        )
+
+    bindColumnCount = 8
+
+    @classmethod
+    def additionalBindColumns(cls):
+        """
+        Return a list of column names for retrieval during creation. This allows
+        different child classes to have their own type specific data, but still make use of the
+        common base logic.
+        """
+
+        return ()
+
+
+    @classmethod
+    def additionalBindAttributes(cls):
+        """
+        Return a list of attribute names for retrieval during creation. This allows
+        different child classes to have their own type specific data, but still make use of the
+        common base logic.
+        """
+
+        return ()
+
+
+    @classproperty
+    def _childrenAndMetadataForHomeID(cls):
+        bind = cls._bindSchema
+        child = cls._homeChildSchema
+        childMetaData = cls._homeChildMetaDataSchema
+
+        columns = cls.bindColumns() + cls.additionalBindColumns() + cls.metadataColumns()
+        return Select(
+            columns,
+            From=child.join(
+                bind, child.RESOURCE_ID == bind.RESOURCE_ID,
+                'left outer').join(
+                    childMetaData, childMetaData.RESOURCE_ID == bind.RESOURCE_ID,
+                    'left outer'),
+            Where=(bind.HOME_RESOURCE_ID == Parameter("homeID")).And(
+                bind.BIND_STATUS == _BIND_STATUS_ACCEPTED)
+        )
+
+
+    @classmethod
+    def _revisionsForResourceIDs(cls, resourceIDs):
+        rev = cls._revisionsSchema
+        return Select(
+            [rev.RESOURCE_ID, Max(rev.REVISION)],
+            From=rev,
+            Where=rev.RESOURCE_ID.In(Parameter("resourceIDs", len(resourceIDs))).And(
+                (rev.RESOURCE_NAME != None).Or(rev.DELETED == False)),
+            GroupBy=rev.RESOURCE_ID
+        )
+
+
+    @inlineCallbacks
+    def invalidateQueryCache(self):
+        queryCacher = self._txn._queryCacher
+        if queryCacher is not None:
+            yield queryCacher.invalidateAfterCommit(self._txn, queryCacher.keyForHomeChildMetaData(self._resourceID))
+            yield queryCacher.invalidateAfterCommit(self._txn, queryCacher.keyForObjectWithName(self._home._resourceID, self._name))
+            yield queryCacher.invalidateAfterCommit(self._txn, queryCacher.keyForObjectWithResourceID(self._home._resourceID, self._resourceID))
+            yield queryCacher.invalidateAfterCommit(self._txn, queryCacher.keyForObjectWithBindUID(self._home._resourceID, self._bindUID))
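
For context, a minimal sketch (illustrative only, not part of this change) of how the bind-record
migration helpers above might be driven during a pod migration. The transaction methods
calendarHomeWithUID() and calendarWithName() appear elsewhere in this changeset; the wrapper
function and its arguments are assumptions:

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def migrateCalendarSharing(txn, homeUID, calendarName, bindUID):
        # Hypothetical driver: locate the calendar whose owner is moving to
        # another pod, then switch its sharing bind rows over.
        home = yield txn.calendarHomeWithUID(homeUID)
        calendar = yield home.calendarWithName(calendarName)
        # For an owned calendar this repoints local sharees at a new external
        # calendar (and removes binds for sharees hosted on other pods); for a
        # calendar shared to this user it repoints or removes the sharee bind.
        yield calendar.migrateBindRecords(bindUID)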

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_tables.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_tables.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_tables.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -23,13 +23,11 @@
 from twext.enterprise.dal.syntax import SchemaSyntax, QueryGenerator
 from twext.enterprise.dal.model import NO_DEFAULT
 from twext.enterprise.dal.model import Sequence, ProcedureCall
+from twext.enterprise.dal.parseschema import schemaFromPath
 from twext.enterprise.dal.syntax import FixedPlaceholder
 from twext.enterprise.ienterprise import ORACLE_DIALECT, POSTGRES_DIALECT
 from twext.enterprise.dal.syntax import Insert
 from twext.enterprise.ienterprise import ORACLE_TABLE_NAME_MAX
-from twext.enterprise.dal.parseschema import schemaFromPath, significant
-from sqlparse import parse
-from re import compile
 import hashlib
 import itertools
 
@@ -187,7 +185,20 @@
 _HOME_STATUS_NORMAL = _homeStatus('normal')
 _HOME_STATUS_EXTERNAL = _homeStatus('external')
 _HOME_STATUS_PURGING = _homeStatus('purging')
+_HOME_STATUS_MIGRATING = _homeStatus('migrating')
+_HOME_STATUS_DISABLED = _homeStatus('disabled')
 
+_childType = _schemaConstants(
+    schema.CHILD_TYPE.DESCRIPTION,
+    schema.CHILD_TYPE.ID
+)
+
+
+_CHILD_TYPE_NORMAL = _childType('normal')
+_CHILD_TYPE_INBOX = _childType('inbox')
+_CHILD_TYPE_TRASH = _childType('trash')
+
+
 _bindStatus = _schemaConstants(
     schema.CALENDAR_BIND_STATUS.DESCRIPTION,
     schema.CALENDAR_BIND_STATUS.ID
@@ -434,44 +445,6 @@
         out.write("-- Skipped Function {}\n".format(function.name))
 
 
-
-def splitSQLString(sqlString):
-    """
-    Strings which mix zero or more sql statements with zero or more pl/sql
-    statements need to be split into individual sql statements for execution.
-    This function was written to allow execution of pl/sql during Oracle schema
-    upgrades.
-    """
-    aggregated = ''
-    inPlSQL = None
-    parsed = parse(sqlString)
-    for stmt in parsed:
-        while stmt.tokens and not significant(stmt.tokens[0]):
-            stmt.tokens.pop(0)
-        if not stmt.tokens:
-            continue
-        if inPlSQL is not None:
-            agg = str(stmt).strip()
-            if "end;".lower() in agg.lower():
-                inPlSQL = None
-                aggregated += agg
-                rex = compile("\n +")
-                aggregated = rex.sub('\n', aggregated)
-                yield aggregated.strip()
-                continue
-            aggregated += agg
-            continue
-        if inPlSQL is None:
-            # if 'begin'.lower() in str(stmt).split()[0].lower():
-            if str(stmt).lower().strip().startswith('begin'):
-                inPlSQL = True
-                aggregated += str(stmt)
-                continue
-        else:
-            continue
-        yield str(stmt).rstrip().rstrip(";")
-
-
 if __name__ == '__main__':
     import sys
     version = sys.argv[1] if len(sys.argv) == 2 else None
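
The splitSQLString() helper removed above now lives in twext.enterprise.dal.parseschema
(test_sql_tables.py below switches to importing it from there). A minimal usage sketch, based on
the removed implementation, which yields one executable statement at a time and keeps a
begin/end PL/SQL block together as a single statement:

    from twext.enterprise.dal.parseschema import splitSQLString

    mixed = """
    create table EXAMPLE (ID integer primary key);
    begin
      null;
    end;
    """
    # Plain statements come back with their trailing semicolon stripped; the
    # begin/end block is returned whole rather than split on semicolons.
    for statement in splitSQLString(mixed):
        print(statement)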

Copied: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_util.py (from rev 14551, CalendarServer/trunk/txdav/common/datastore/sql_util.py)
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_util.py	                        (rev 0)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/sql_util.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -0,0 +1,837 @@
+# -*- test-case-name: twext.enterprise.dal.test.test_record -*-
+##
+# Copyright (c) 2015 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from twext.enterprise.dal.syntax import Max, Select, Parameter, Delete, Insert, \
+    Update, ColumnSyntax, TableSyntax, Upper
+from twext.python.clsprop import classproperty
+from twext.python.log import Logger
+from twisted.internet.defer import succeed, inlineCallbacks, returnValue
+from txdav.base.datastore.util import normalizeUUIDOrNot
+from txdav.common.datastore.sql_tables import schema
+from txdav.common.icommondatastore import SyncTokenValidException, \
+    ENOTIFICATIONTYPE, ECALENDARTYPE, EADDRESSBOOKTYPE
+import time
+from uuid import UUID
+
+log = Logger()
+
+
+"""
+Classes and methods for the SQL store.
+"""
+
+class _EmptyCacher(object):
+
+    def set(self, key, value):
+        return succeed(True)
+
+
+    def get(self, key, withIdentifier=False):
+        return succeed(None)
+
+
+    def delete(self, key):
+        return succeed(True)
+
+
+
+class _SharedSyncLogic(object):
+    """
+    Logic for maintaining sync-token shared between notification collections and
+    shared collections.
+    """
+
+    @classproperty
+    def _childSyncTokenQuery(cls):
+        """
+        DAL query for retrieving the sync token of a L{CommonHomeChild} based on
+        its resource ID.
+        """
+        rev = cls._revisionsSchema
+        return Select([Max(rev.REVISION)], From=rev,
+                      Where=rev.RESOURCE_ID == Parameter("resourceID"))
+
+
+    def revisionFromToken(self, token):
+        if token is None:
+            return 0
+        elif isinstance(token, str) or isinstance(token, unicode):
+            _ignore_uuid, revision = token.split("_", 1)
+            return int(revision)
+        else:
+            return token
+
+
+    @inlineCallbacks
+    def syncToken(self):
+        if self._syncTokenRevision is None:
+            self._syncTokenRevision = yield self.syncTokenRevision()
+        returnValue(("%s_%s" % (self._resourceID, self._syncTokenRevision,)))
+
+
+    @inlineCallbacks
+    def syncTokenRevision(self):
+        revision = (yield self._childSyncTokenQuery.on(self._txn, resourceID=self._resourceID))[0][0]
+        if revision is None:
+            revision = int((yield self._txn.calendarserverValue("MIN-VALID-REVISION")))
+        returnValue(revision)
+
+
+    def objectResourcesSinceToken(self, token):
+        raise NotImplementedError()
+
+
+    @classmethod
+    def _objectNamesSinceRevisionQuery(cls, deleted=True):
+        """
+        DAL query for (resource, deleted-flag)
+        """
+        rev = cls._revisionsSchema
+        where = (rev.REVISION > Parameter("revision")).And(rev.RESOURCE_ID == Parameter("resourceID"))
+        if not deleted:
+            where = where.And(rev.DELETED == False)
+        return Select(
+            [rev.RESOURCE_NAME, rev.DELETED],
+            From=rev,
+            Where=where,
+        )
+
+
+    def resourceNamesSinceToken(self, token):
+        """
+        Return the changed and deleted resources since a particular sync-token. This simply extracts
+        the revision from the token and then calls L{resourceNamesSinceRevision}.
+
+        @param token: the sync-token to determine changes since
+        @type token: C{str}
+        """
+
+        return self.resourceNamesSinceRevision(self.revisionFromToken(token))
+
+
+    @inlineCallbacks
+    def resourceNamesSinceRevision(self, revision):
+        """
+        Return the changed and deleted resources since a particular revision.
+
+        @param revision: the revision to determine changes since
+        @type revision: C{int}
+        """
+        changed = []
+        deleted = []
+        invalid = []
+        if revision:
+            minValidRevision = yield self._txn.calendarserverValue("MIN-VALID-REVISION")
+            if revision < int(minValidRevision):
+                raise SyncTokenValidException
+
+            results = [
+                (name if name else "", removed) for name, removed in (
+                    yield self._objectNamesSinceRevisionQuery().on(
+                        self._txn, revision=revision, resourceID=self._resourceID)
+                )
+            ]
+            results.sort(key=lambda x: x[1])
+
+            for name, wasdeleted in results:
+                if name:
+                    if wasdeleted:
+                        deleted.append(name)
+                    else:
+                        changed.append(name)
+        else:
+            changed = yield self.listObjectResources()
+
+        returnValue((changed, deleted, invalid))
+
+
+    @classproperty
+    def _removeDeletedRevision(cls):
+        rev = cls._revisionsSchema
+        return Delete(From=rev,
+                      Where=(rev.HOME_RESOURCE_ID == Parameter("homeID")).And(
+                          rev.COLLECTION_NAME == Parameter("collectionName")))
+
+
+    @classproperty
+    def _addNewRevision(cls):
+        rev = cls._revisionsSchema
+        return Insert(
+            {
+                rev.HOME_RESOURCE_ID: Parameter("homeID"),
+                rev.RESOURCE_ID: Parameter("resourceID"),
+                rev.COLLECTION_NAME: Parameter("collectionName"),
+                rev.RESOURCE_NAME: None,
+                # Always starts false; may be updated to be a tombstone
+                # later.
+                rev.DELETED: False
+            },
+            Return=[rev.REVISION]
+        )
+
+
+    @inlineCallbacks
+    def _initSyncToken(self):
+        yield self._removeDeletedRevision.on(
+            self._txn, homeID=self._home._resourceID, collectionName=self._name
+        )
+        self._syncTokenRevision = (yield (
+            self._addNewRevision.on(self._txn, homeID=self._home._resourceID,
+                                    resourceID=self._resourceID,
+                                    collectionName=self._name)))[0][0]
+        self._txn.bumpRevisionForObject(self)
+
+
+    @classproperty
+    def _renameSyncTokenQuery(cls):
+        """
+        DAL query to change sync token for a rename (increment and adjust
+        resource name).
+        """
+        rev = cls._revisionsSchema
+        return Update(
+            {
+                rev.REVISION: schema.REVISION_SEQ,
+                rev.COLLECTION_NAME: Parameter("name")
+            },
+            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And
+                  (rev.RESOURCE_NAME == None),
+            Return=rev.REVISION
+        )
+
+
+    @inlineCallbacks
+    def _renameSyncToken(self):
+        rows = yield self._renameSyncTokenQuery.on(
+            self._txn, name=self._name, resourceID=self._resourceID)
+        if rows:
+            self._syncTokenRevision = rows[0][0]
+            self._txn.bumpRevisionForObject(self)
+        else:
+            yield self._initSyncToken()
+
+
+    @classproperty
+    def _bumpSyncTokenQuery(cls):
+        """
+        DAL query to change collection sync token. Note this can impact multiple rows if the
+        collection is shared.
+        """
+        rev = cls._revisionsSchema
+        return Update(
+            {rev.REVISION: schema.REVISION_SEQ, },
+            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And
+                  (rev.RESOURCE_NAME == None)
+        )
+
+
+    @inlineCallbacks
+    def _bumpSyncToken(self):
+
+        if not self._txn.isRevisionBumpedAlready(self):
+            self._txn.bumpRevisionForObject(self)
+            yield self._bumpSyncTokenQuery.on(
+                self._txn,
+                resourceID=self._resourceID,
+            )
+            self._syncTokenRevision = None
+
+
+    @classproperty
+    def _deleteSyncTokenQuery(cls):
+        """
+        DAL query to remove all child revision information. The revision for the collection
+        itself is not touched.
+        """
+        rev = cls._revisionsSchema
+        return Delete(
+            From=rev,
+            Where=(rev.HOME_RESOURCE_ID == Parameter("homeID")).And
+                  (rev.RESOURCE_ID == Parameter("resourceID")).And
+                  (rev.COLLECTION_NAME == None)
+        )
+
+
+    @classproperty
+    def _sharedRemovalQuery(cls):
+        """
+        DAL query to indicate a shared collection has been deleted.
+        """
+        rev = cls._revisionsSchema
+        return Update(
+            {
+                rev.RESOURCE_ID: None,
+                rev.REVISION: schema.REVISION_SEQ,
+                rev.DELETED: True
+            },
+            Where=(rev.HOME_RESOURCE_ID == Parameter("homeID")).And(
+                rev.RESOURCE_ID == Parameter("resourceID")).And(
+                rev.RESOURCE_NAME == None)
+        )
+
+
+    @classproperty
+    def _unsharedRemovalQuery(cls):
+        """
+        DAL query to indicate an owned collection has been deleted.
+        """
+        rev = cls._revisionsSchema
+        return Update(
+            {
+                rev.RESOURCE_ID: None,
+                rev.REVISION: schema.REVISION_SEQ,
+                rev.DELETED: True
+            },
+            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And(
+                rev.RESOURCE_NAME == None),
+        )
+
+
+    @inlineCallbacks
+    def _deletedSyncToken(self, sharedRemoval=False):
+        """
+        When a collection is deleted we remove all the revision information for its child resources.
+        We update the collection's sync token to indicate it has been deleted - that way a sync on
+        the home collection can report the deletion of the collection.
+
+        @param sharedRemoval: indicates whether the collection being removed is shared
+        @type sharedRemoval: L{bool}
+        """
+        # Remove all child entries
+        yield self._deleteSyncTokenQuery.on(self._txn,
+                                            homeID=self._home._resourceID,
+                                            resourceID=self._resourceID)
+
+        # If this is a share being removed then we only mark this one specific
+        # home/resource-id as being deleted.  On the other hand, if it is a
+        # non-shared collection, then we need to mark all collections
+        # with the resource-id as being deleted to account for direct shares.
+        if sharedRemoval:
+            yield self._sharedRemovalQuery.on(self._txn,
+                                              homeID=self._home._resourceID,
+                                              resourceID=self._resourceID)
+        else:
+            yield self._unsharedRemovalQuery.on(self._txn,
+                                                resourceID=self._resourceID)
+        self._syncTokenRevision = None
+
+
+    def _insertRevision(self, name):
+        return self._changeRevision("insert", name)
+
+
+    def _updateRevision(self, name):
+        return self._changeRevision("update", name)
+
+
+    def _deleteRevision(self, name):
+        return self._changeRevision("delete", name)
+
+
+    @classproperty
+    def _deleteBumpTokenQuery(cls):
+        rev = cls._revisionsSchema
+        return Update(
+            {rev.REVISION: schema.REVISION_SEQ, rev.DELETED: True},
+            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And(
+                rev.RESOURCE_NAME == Parameter("name")),
+            Return=rev.REVISION
+        )
+
+
+    @classproperty
+    def _updateBumpTokenQuery(cls):
+        rev = cls._revisionsSchema
+        return Update(
+            {rev.REVISION: schema.REVISION_SEQ},
+            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And(
+                rev.RESOURCE_NAME == Parameter("name")),
+            Return=rev.REVISION
+        )
+
+
+    @classproperty
+    def _insertFindPreviouslyNamedQuery(cls):
+        rev = cls._revisionsSchema
+        return Select(
+            [rev.RESOURCE_ID],
+            From=rev,
+            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And(
+                rev.RESOURCE_NAME == Parameter("name"))
+        )
+
+
+    @classproperty
+    def _updatePreviouslyNamedQuery(cls):
+        rev = cls._revisionsSchema
+        return Update(
+            {rev.REVISION: schema.REVISION_SEQ, rev.DELETED: False},
+            Where=(rev.RESOURCE_ID == Parameter("resourceID")).And(
+                rev.RESOURCE_NAME == Parameter("name")),
+            Return=rev.REVISION
+        )
+
+
+    @classproperty
+    def _completelyNewRevisionQuery(cls):
+        rev = cls._revisionsSchema
+        return Insert(
+            {
+                rev.HOME_RESOURCE_ID: Parameter("homeID"),
+                rev.RESOURCE_ID: Parameter("resourceID"),
+                rev.RESOURCE_NAME: Parameter("name"),
+                rev.REVISION: schema.REVISION_SEQ,
+                rev.DELETED: False
+            },
+            Return=rev.REVISION
+        )
+
+
+    @classproperty
+    def _completelyNewDeletedRevisionQuery(cls):
+        rev = cls._revisionsSchema
+        return Insert(
+            {
+                rev.HOME_RESOURCE_ID: Parameter("homeID"),
+                rev.RESOURCE_ID: Parameter("resourceID"),
+                rev.RESOURCE_NAME: Parameter("name"),
+                rev.REVISION: schema.REVISION_SEQ,
+                rev.DELETED: True
+            },
+            Return=rev.REVISION
+        )
+
+
+    @inlineCallbacks
+    def _changeRevision(self, action, name):
+
+        # Need to handle the case where for some reason the revision entry is
+        # actually missing. For a "delete" we don't care, for an "update" we
+        # will turn it into an "insert".
+        if action == "delete":
+            rows = (
+                yield self._deleteBumpTokenQuery.on(
+                    self._txn, resourceID=self._resourceID, name=name))
+            if rows:
+                self._syncTokenRevision = rows[0][0]
+            else:
+                self._syncTokenRevision = (
+                    yield self._completelyNewDeletedRevisionQuery.on(
+                        self._txn, homeID=self.ownerHome()._resourceID,
+                        resourceID=self._resourceID, name=name)
+                )[0][0]
+
+        elif action == "update":
+            rows = (
+                yield self._updateBumpTokenQuery.on(
+                    self._txn, resourceID=self._resourceID, name=name))
+            if rows:
+                self._syncTokenRevision = rows[0][0]
+            else:
+                self._syncTokenRevision = (
+                    yield self._completelyNewRevisionQuery.on(
+                        self._txn, homeID=self.ownerHome()._resourceID,
+                        resourceID=self._resourceID, name=name)
+                )[0][0]
+
+        elif action == "insert":
+            # Note that an "insert" may happen for a resource that previously
+            # existed and then was deleted. In that case an entry in the
+            # REVISIONS table still exists so we have to detect that and do db
+            # INSERT or UPDATE as appropriate
+
+            found = bool((
+                yield self._insertFindPreviouslyNamedQuery.on(
+                    self._txn, resourceID=self._resourceID, name=name)))
+            if found:
+                self._syncTokenRevision = (
+                    yield self._updatePreviouslyNamedQuery.on(
+                        self._txn, resourceID=self._resourceID, name=name)
+                )[0][0]
+            else:
+                self._syncTokenRevision = (
+                    yield self._completelyNewRevisionQuery.on(
+                        self._txn, homeID=self.ownerHome()._resourceID,
+                        resourceID=self._resourceID, name=name)
+                )[0][0]
+        yield self._maybeNotify()
+        returnValue(self._syncTokenRevision)
+
+
+    def _maybeNotify(self):
+        """
+        Maybe notify changed.  (Overridden in NotificationCollection.)
+        """
+        return succeed(None)
+
+
+
+def determineNewest(uid, homeType):
+    """
+    Construct a query to determine the modification time of the newest object
+    in a given home.
+
+    @param uid: the UID of the home to scan.
+    @type uid: C{str}
+
+    @param homeType: The type of home to scan; C{ECALENDARTYPE},
+        C{ENOTIFICATIONTYPE}, or C{EADDRESSBOOKTYPE}.
+    @type homeType: C{int}
+
+    @return: A select query that will return a single row containing a single
+        column which is the maximum value.
+    @rtype: L{Select}
+    """
+    if homeType == ENOTIFICATIONTYPE:
+        return Select(
+            [Max(schema.NOTIFICATION.MODIFIED)],
+            From=schema.NOTIFICATION_HOME.join(
+                schema.NOTIFICATION,
+                on=schema.NOTIFICATION_HOME.RESOURCE_ID ==
+                schema.NOTIFICATION.NOTIFICATION_HOME_RESOURCE_ID),
+            Where=schema.NOTIFICATION_HOME.OWNER_UID == uid
+        )
+    homeTypeName = {ECALENDARTYPE: "CALENDAR",
+                    EADDRESSBOOKTYPE: "ADDRESSBOOK"}[homeType]
+    home = getattr(schema, homeTypeName + "_HOME")
+    bind = getattr(schema, homeTypeName + "_BIND")
+    child = getattr(schema, homeTypeName)
+    obj = getattr(schema, homeTypeName + "_OBJECT")
+    return Select(
+        [Max(obj.MODIFIED)],
+        From=home.join(bind, on=bind.HOME_RESOURCE_ID == home.RESOURCE_ID).join(
+            child, on=child.RESOURCE_ID == bind.RESOURCE_ID).join(
+            obj, on=obj.PARENT_RESOURCE_ID == child.RESOURCE_ID),
+        Where=(bind.BIND_MODE == 0).And(home.OWNER_UID == uid)
+    )
+
+
+
+@inlineCallbacks
+def mergeHomes(sqlTxn, one, other, homeType):
+    """
+    Merge two homes together.  This determines which of C{one} or C{other} is
+    newer - that is, has been modified more recently - and pulls all the data
+    from the older into the newer home.  Then it renames the older home to its
+    normalized UID prefixed with "old.", and renames the newer home to its
+    normalized UID.
+
+    Because the UIDs of both homes have changed, B{both C{one} and C{other}
+    will be invalid to all other callers from the start of the invocation of
+    this function}.
+
+    @param sqlTxn: the transaction to use
+    @type sqlTxn: A L{CommonTransaction}
+
+    @param one: A calendar home.
+    @type one: L{ICalendarHome}
+
+    @param other: Another, different calendar home.
+    @type other: L{ICalendarHome}
+
+    @param homeType: The type of home to scan; L{ECALENDARTYPE} or
+        L{EADDRESSBOOKTYPE}.
+    @type homeType: C{int}
+
+    @return: a L{Deferred} which fires with the newer of C{one} or C{other},
+        into which the data from the other home has been merged, when the merge
+        is complete.
+    """
+    from txdav.caldav.datastore.util import migrateHome as migrateCalendarHome
+    from txdav.carddav.datastore.util import migrateHome as migrateABHome
+    migrateHome = {EADDRESSBOOKTYPE: migrateABHome,
+                   ECALENDARTYPE: migrateCalendarHome,
+                   ENOTIFICATIONTYPE: _dontBotherWithNotifications}[homeType]
+    homeTable = {EADDRESSBOOKTYPE: schema.ADDRESSBOOK_HOME,
+                 ECALENDARTYPE: schema.CALENDAR_HOME,
+                 ENOTIFICATIONTYPE: schema.NOTIFICATION_HOME}[homeType]
+    both = []
+    both.append([one,
+                 (yield determineNewest(one.uid(), homeType).on(sqlTxn))])
+    both.append([other,
+                 (yield determineNewest(other.uid(), homeType).on(sqlTxn))])
+    both.sort(key=lambda x: x[1])
+
+    older = both[0][0]
+    newer = both[1][0]
+    yield migrateHome(older, newer, merge=True)
+    # Rename the old one to 'old.<correct-guid>'
+    newNormalized = normalizeUUIDOrNot(newer.uid())
+    oldNormalized = normalizeUUIDOrNot(older.uid())
+    yield _renameHome(sqlTxn, homeTable, older.uid(), "old." + oldNormalized)
+    # Rename the new one to '<correct-guid>'
+    if newer.uid() != newNormalized:
+        yield _renameHome(sqlTxn, homeTable, newer.uid(), newNormalized)
+    yield returnValue(newer)
+
+
+
+def _renameHome(txn, table, oldUID, newUID):
+    """
+    Rename a calendar, addressbook, or notification home.  Note that this
+    function is only safe in transactions that have had caching disabled, and
+    more specifically should only ever be used during upgrades.  Running this
+    in a normal transaction will have unpredictable consequences, especially
+    with respect to memcache.
+
+    @param txn: an SQL transaction to use for this update
+    @type txn: L{twext.enterprise.ienterprise.IAsyncTransaction}
+
+    @param table: the storage table of the desired home type
+    @type table: L{TableSyntax}
+
+    @param oldUID: the old UID, the existing home's UID
+    @type oldUID: L{str}
+
+    @param newUID: the new UID, to change the UID to
+    @type newUID: L{str}
+
+    @return: a L{Deferred} which fires when the home is renamed.
+    """
+    return Update({table.OWNER_UID: newUID},
+                  Where=table.OWNER_UID == oldUID).on(txn)
+
+
+
+def _dontBotherWithNotifications(older, newer, merge):
+    """
+    Notifications are more transient and can be easily worked around; don't
+    bother to migrate all of them when there is a UUID case mismatch.
+    """
+    pass
+
+
+
+@inlineCallbacks
+def _normalizeHomeUUIDsIn(t, homeType):
+    """
+    Normalize the UUIDs of all homes of the given type in the store.
+
+    This changes the case of each home's owner UID to the canonical upper-case form.
+
+    @param t: the transaction to normalize all the UUIDs in.
+    @type t: L{CommonStoreTransaction}
+
+    @param homeType: The type of home to scan, L{ECALENDARTYPE},
+        L{EADDRESSBOOKTYPE}, or L{ENOTIFICATIONTYPE}.
+    @type homeType: C{int}
+
+    @return: a L{Deferred} which fires with C{None} when the UUID normalization
+        is complete.
+    """
+    from txdav.caldav.datastore.util import fixOneCalendarHome
+    homeTable = {EADDRESSBOOKTYPE: schema.ADDRESSBOOK_HOME,
+                 ECALENDARTYPE: schema.CALENDAR_HOME,
+                 ENOTIFICATIONTYPE: schema.NOTIFICATION_HOME}[homeType]
+    homeTypeName = homeTable.model.name.split("_")[0]
+
+    allUIDs = yield Select([homeTable.OWNER_UID],
+                           From=homeTable,
+                           OrderBy=homeTable.OWNER_UID).on(t)
+    total = len(allUIDs)
+    allElapsed = []
+    for n, [UID] in enumerate(allUIDs):
+        start = time.time()
+        if allElapsed:
+            estimate = "%0.3d" % ((sum(allElapsed) / len(allElapsed)) *
+                                  (total - n))
+        else:
+            estimate = "unknown"
+        log.info(
+            "Scanning UID {uid} [{homeType}] "
+            "({pct!0.2d}%, {estimate} seconds remaining)...",
+            uid=UID, pct=(n / float(total)) * 100, estimate=estimate,
+            homeType=homeTypeName
+        )
+        other = None
+        this = yield _getHome(t, homeType, UID)
+        if homeType == ECALENDARTYPE:
+            fixedThisHome = yield fixOneCalendarHome(this)
+        else:
+            fixedThisHome = 0
+        fixedOtherHome = 0
+        if this is None:
+            log.info(
+                "{uid!r} appears to be missing, already processed", uid=UID
+            )
+        try:
+            uuidobj = UUID(UID)
+        except ValueError:
+            pass
+        else:
+            newname = str(uuidobj).upper()
+            if UID != newname:
+                log.info(
+                    "Detected case variance: {uid} {newuid}[{homeType}]",
+                    uid=UID, newuid=newname, homeType=homeTypeName
+                )
+                other = yield _getHome(t, homeType, newname)
+                if other is None:
+                    # No duplicate: just fix the name.
+                    yield _renameHome(t, homeTable, UID, newname)
+                else:
+                    if homeType == ECALENDARTYPE:
+                        fixedOtherHome = yield fixOneCalendarHome(other)
+                    this = yield mergeHomes(t, this, other, homeType)
+                # NOTE: WE MUST NOT TOUCH EITHER HOME OBJECT AFTER THIS POINT.
+                # THE UIDS HAVE CHANGED AND ALL OPERATIONS WILL FAIL.
+
+        end = time.time()
+        elapsed = end - start
+        allElapsed.append(elapsed)
+        log.info(
+            "Scanned UID {uid}; {elapsed} seconds elapsed,"
+            " {fixes} properties fixed ({duplicate} fixes in duplicate).",
+            uid=UID, elapsed=elapsed, fixes=fixedThisHome,
+            duplicate=fixedOtherHome
+        )
+    returnValue(None)
+
+
+
+def _getHome(txn, homeType, uid):
+    """
+    Like L{CommonHome.homeWithUID} but also honoring ENOTIFICATIONTYPE which
+    isn't I{really} a type of home.
+
+    @param txn: the transaction to retrieve the home from
+    @type txn: L{CommonStoreTransaction}
+
+    @param homeType: L{ENOTIFICATIONTYPE}, L{ECALENDARTYPE}, or
+        L{EADDRESSBOOKTYPE}.
+
+    @param uid: the UID of the home to retrieve.
+    @type uid: L{str}
+
+    @return: a L{Deferred} that fires with the L{CommonHome} or
+        L{NotificationHome} when it has been retrieved.
+    """
+    if homeType == ENOTIFICATIONTYPE:
+        return txn.notificationsWithUID(uid)
+    else:
+        return txn.homeWithUID(homeType, uid)
+
+
+
+@inlineCallbacks
+def _normalizeColumnUUIDs(txn, column):
+    """
+    Upper-case the UUIDs in the given SQL DAL column.
+
+    @param txn: The transaction.
+    @type txn: L{CommonStoreTransaction}
+
+    @param column: the column, which may contain UIDs, to normalize.
+    @type column: L{ColumnSyntax}
+
+    @return: A L{Deferred} that will fire when the UUID normalization of the
+        given column has completed.
+    """
+    tableModel = column.model.table
+    # Get a primary key made of column syntax objects for querying and
+    # comparison later.
+    pkey = [ColumnSyntax(columnModel)
+            for columnModel in tableModel.primaryKey]
+    for row in (yield Select([column] + pkey,
+                             From=TableSyntax(tableModel)).on(txn)):
+        before = row[0]
+        pkeyparts = row[1:]
+        after = normalizeUUIDOrNot(before)
+        if after != before:
+            where = _AndNothing
+            # Build a where clause out of the primary key and the parts of the
+            # primary key that were found.
+            for pkeycol, pkeypart in zip(pkeyparts, pkey):
+                where = where.And(pkeycol == pkeypart)
+            yield Update({column: after}, Where=where).on(txn)
+
+
+
+class _AndNothing(object):
+    """
+    Simple placeholder for iteratively generating a 'Where' clause; the 'And'
+    just returns its argument, so it can be used at the start of the loop.
+    """
+    @staticmethod
+    def And(self):
+        """
+        Return the argument.
+        """
+        return self
+
+
+
+@inlineCallbacks
+def _needsNormalizationUpgrade(txn):
+    """
+    Determine whether a given store requires a UUID normalization data upgrade.
+
+    @param txn: the transaction to use
+    @type txn: L{CommonStoreTransaction}
+
+    @return: a L{Deferred} that fires with C{True} or C{False} depending on
+        whether we need the normalization upgrade or not.
+    """
+    for x in [schema.CALENDAR_HOME, schema.ADDRESSBOOK_HOME,
+              schema.NOTIFICATION_HOME]:
+        slct = Select([x.OWNER_UID], From=x,
+                      Where=x.OWNER_UID != Upper(x.OWNER_UID))
+        rows = yield slct.on(txn)
+        if rows:
+            for [uid] in rows:
+                if normalizeUUIDOrNot(uid) != uid:
+                    returnValue(True)
+    returnValue(False)
+
+
+
+@inlineCallbacks
+def fixUUIDNormalization(store):
+    """
+    Fix all UUIDs in the given SQL store to be in a canonical form;
+    00000000-0000-0000-0000-000000000000 format and upper-case.
+    """
+    t = store.newTransaction(disableCache=True)
+
+    # First, let's see if there are any calendar, addressbook, or notification
+    # homes that have a de-normalized OWNER_UID.  If there are none, then we can
+    # early-out and avoid the tedious and potentially expensive inspection of
+    # oodles of calendar data.
+    if not (yield _needsNormalizationUpgrade(t)):
+        log.info("No potentially denormalized UUIDs detected, "
+                 "skipping normalization upgrade.")
+        yield t.abort()
+        returnValue(None)
+    try:
+        yield _normalizeHomeUUIDsIn(t, ECALENDARTYPE)
+        yield _normalizeHomeUUIDsIn(t, EADDRESSBOOKTYPE)
+        yield _normalizeHomeUUIDsIn(t, ENOTIFICATIONTYPE)
+        yield _normalizeColumnUUIDs(t, schema.RESOURCE_PROPERTY.VIEWER_UID)
+        yield _normalizeColumnUUIDs(t, schema.APN_SUBSCRIPTIONS.SUBSCRIBER_GUID)
+    except:
+        log.failure("Unable to normalize UUIDs")
+        yield t.abort()
+        # There's a lot of possible problems here which are very hard to test
+        # for individually; unexpected data that might cause constraint
+        # violations under one of the manipulations done by
+        # normalizeHomeUUIDsIn. Since this upgrade does not come along with a
+        # schema version bump and may be re-attempted at any time, just raise
+        # the exception and log it so that we can try again later, and the
+        # service will survive for everyone _not_ affected by this somewhat
+        # obscure bug.
+    else:
+        yield t.commit()
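
As a usage sketch for the _SharedSyncLogic mix-in defined above (the calendar variable here
stands in for any collection that inherits it; the wrapper function is illustrative only): a
sync token is just "<resourceID>_<revision>", and resourceNamesSinceToken() reports changes
relative to that point:

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def changesSince(calendar, token=None):
        # With token=None everything is reported as changed; otherwise the
        # revision is parsed out of the "<resourceID>_<revision>" string.
        changed, deleted, invalid = yield calendar.resourceNamesSinceToken(token)
        newToken = yield calendar.syncToken()  # e.g. "42_1379"
        returnValue((changed, deleted, invalid, newToken))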

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/test_sql.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/test_sql.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -31,8 +31,9 @@
     log, CommonStoreTransactionMonitor,
     CommonHome, CommonHomeChild, ECALENDARTYPE
 )
-from txdav.common.datastore.sql import fixUUIDNormalization
 from txdav.common.datastore.sql_tables import schema
+from txdav.common.datastore.sql_util import _normalizeColumnUUIDs, \
+    fixUUIDNormalization
 from txdav.common.datastore.test.util import CommonCommonTests
 from txdav.common.icommondatastore import AllRetriesFailed
 from txdav.xml import element as davxml
@@ -74,9 +75,9 @@
         txn = self.transactionUnderTest()
         cs = schema.CALENDARSERVER
         version = (yield Select(
-            [cs.VALUE, ],
+            [cs.VALUE],
             From=cs,
-            Where=cs.NAME == 'VERSION',
+            Where=cs.NAME == "VERSION",
         ).on(txn))
         self.assertNotEqual(version, None)
         self.assertEqual(len(version), 1)
@@ -349,7 +350,7 @@
         token = yield homeChild.syncToken()
         yield homeChild._changeRevision("delete", "E")
         changed = yield homeChild.resourceNamesSinceToken(token)
-        self.assertEqual(changed, ([], [], [],))
+        self.assertEqual(changed, ([], ["E"], [],))
 
         yield txn.abort()
 
@@ -374,7 +375,6 @@
             rp.VIEWER_UID: "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"}
         ).on(txn)
         # test
-        from txdav.common.datastore.sql import _normalizeColumnUUIDs
         yield _normalizeColumnUUIDs(txn, rp.VIEWER_UID)
         self.assertEqual(
             map(

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/test_sql_tables.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/test_sql_tables.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/test_sql_tables.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -31,8 +31,9 @@
 from twext.enterprise.dal.syntax import SchemaSyntax
 
 from txdav.common.datastore.sql_tables import schema, _translateSchema
-from txdav.common.datastore.sql_tables import SchemaBroken, splitSQLString
+from txdav.common.datastore.sql_tables import SchemaBroken
 
+from twext.enterprise.dal.parseschema import splitSQLString
 from twext.enterprise.dal.test.test_parseschema import SchemaTestHelper
 
 from textwrap import dedent

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/test_trash.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/test_trash.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/test_trash.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -34,6 +34,7 @@
     def _homeForUser(self, txn, userName):
         return txn.calendarHomeWithUID(userName, create=True)
 
+
     @inlineCallbacks
     def _collectionForUser(self, txn, userName, collectionName, create=False, onlyInTrash=False):
         home = yield txn.calendarHomeWithUID(userName, create=True)
@@ -210,7 +211,6 @@
         yield txn.commit()
 
 
-
     @inlineCallbacks
     def test_trashScheduledFullyInFuture(self):
 
@@ -359,7 +359,6 @@
         yield txn.commit()
 
 
-
     @inlineCallbacks
     def test_trashScheduledFullyInFutureAttendeeTrashedThenOrganizerChanged(self):
 
@@ -831,7 +830,6 @@
         yield txn.commit()
 
 
-
     @inlineCallbacks
     def test_trashScheduledFullyInFutureAttendeeTrashedThenPutBack(self):
 
@@ -999,7 +997,6 @@
         yield txn.commit()
 
 
-
     @inlineCallbacks
     def test_trashScheduledFullyInPast(self):
 
@@ -1301,7 +1298,6 @@
         yield txn.commit()
 
 
-
     @inlineCallbacks
     def test_trashScheduledSpanningNow(self):
 
@@ -1788,7 +1784,6 @@
         yield txn.commit()
 
 
-
     @inlineCallbacks
     def test_shareeDelete(self):
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/util.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/util.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/test/util.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -30,7 +30,7 @@
 
 from pycalendar.datetime import DateTime
 
-from random import Random
+from random import Random, randint
 
 from twext.python.log import Logger
 from twext.python.filepath import CachingFilePath as FilePath
@@ -108,10 +108,13 @@
     def __init__(self, count=0):
         self.sharedService = None
         self.currentTestID = None
-        self.sharedDBPath = "_test_sql_db" + str(os.getpid()) + ("-{}".format(count) if count else "")
         self.ampPort = config.WorkQueue.ampPort + count
 
+        self.sharedDBPath = "_test_sql_db-{}-{}".format(
+            os.getpid(), count
+        )
 
+
     def createService(self, serviceFactory):
         """
         Create a L{PostgresService} to use for building a store.
@@ -157,7 +160,10 @@
         return cds
 
 
-    def buildStore(self, testCase, notifierFactory, directoryService=None, homes=None, enableJobProcessing=True):
+    def buildStore(
+        self, testCase, notifierFactory,
+        directoryService=None, homes=None, enableJobProcessing=True,
+    ):
         """
         Do the necessary work to build a store for a particular test case.
 
@@ -169,34 +175,45 @@
         # The directory will be given to us later via setDirectoryService
         if self.sharedService is None:
             ready = Deferred()
+
             def getReady(connectionFactory, storageService):
                 self.makeAndCleanStore(
-                    testCase, notifierFactory, directoryService, attachmentRoot, enableJobProcessing
+                    testCase, notifierFactory, directoryService,
+                    attachmentRoot, enableJobProcessing
                 ).chainDeferred(ready)
                 return Service()
+
             self.sharedService = self.createService(getReady)
             self.sharedService.startService()
+
             def startStopping():
                 log.info("Starting stopping.")
                 self.sharedService.unpauseMonitor()
                 return self.sharedService.stopService()
-            reactor.addSystemEventTrigger(#@UndefinedVariable
+
+            reactor.addSystemEventTrigger(
                 "before", "shutdown", startStopping)
             result = ready
         else:
             result = self.makeAndCleanStore(
-                testCase, notifierFactory, directoryService, attachmentRoot, enableJobProcessing
+                testCase, notifierFactory, directoryService,
+                attachmentRoot, enableJobProcessing
             )
+
         def cleanUp():
             def stopit():
                 self.sharedService.pauseMonitor()
             return deferLater(reactor, 0.1, stopit)
+
         testCase.addCleanup(cleanUp)
         return result
 
 
     @inlineCallbacks
-    def makeAndCleanStore(self, testCase, notifierFactory, directoryService, attachmentRoot, enableJobProcessing=True):
+    def makeAndCleanStore(
+        self, testCase, notifierFactory, directoryService,
+        attachmentRoot, enableJobProcessing=True
+    ):
         """
         Create a L{CommonDataStore} specific to the given L{TestCase}.
 
@@ -213,14 +230,19 @@
         attachmentRoot.createDirectory()
 
         currentTestID = testCase.id()
-        cp = ConnectionPool(self.sharedService.produceConnection, maxConnections=4)
+        cp = ConnectionPool(
+            self.sharedService.produceConnection, maxConnections=4
+        )
         quota = deriveQuota(testCase)
         store = CommonDataStore(
             cp.connection,
             {"push": notifierFactory} if notifierFactory is not None else {},
             directoryService,
             attachmentRoot,
-            "https://example.com/calendars/__uids__/%(home)s/attachments/%(name)s",
+            (
+                "https://example.com/calendars/__uids__/"
+                "%(home)s/attachments/%(name)s"
+            ),
             quota=quota
         )
         store.label = currentTestID
@@ -279,24 +301,33 @@
         # table' statements are issued, so it's not possible to reference a
         # later table.  Therefore it's OK to drop them in the (reverse) order
         # that they happen to be in.
-        tables = [t.name for t in schema.model.tables #@UndefinedVariable
-                  # All tables with rows _in_ the schema are populated
-                  # exclusively _by_ the schema and shouldn't be manipulated
-                  # while the server is running, so we leave those populated.
-                  if not t.schemaRows][::-1]
+        tables = [
+            t.name for t in schema.model.tables #@UndefinedVariable
+            # All tables with rows _in_ the schema are populated
+            # exclusively _by_ the schema and shouldn't be manipulated
+            # while the server is running, so we leave those populated.
+            if not t.schemaRows
+        ][::-1]
 
         for table in tables:
             try:
                 yield cleanupTxn.execSQL("delete from " + table, [])
             except:
                 log.failure("delete table {table} failed", table=table)
+
+        # Change the starting values of sequences to random values
+        for sequence in schema.model.sequences: #@UndefinedVariable
+            try:
+                curval = (yield cleanupTxn.execSQL("select nextval('{}')".format(sequence.name), []))[0][0]
+                yield cleanupTxn.execSQL("select setval('{}', {})".format(sequence.name, curval + randint(1, 10000)), [])
+            except:
+                log.failure("setval sequence '{}' failed", sequence=sequence.name)
+        yield cleanupTxn.execSQL("update CALENDARSERVER set VALUE = '1' where NAME = 'MIN-VALID-REVISION'", [])
+
         yield cleanupTxn.commit()
 
         # Deal with memcached items that must be cleared
-        from txdav.caldav.datastore.sql import CalendarHome
-        CalendarHome._cacher.flushAll()
-        from txdav.carddav.datastore.sql import AddressBookHome
-        AddressBookHome._cacher.flushAll()
+        storeToClean.queryCacher.flushAll()
         from txdav.base.propertystore.sql import PropertyStore
         PropertyStore._cacher.flushAll()
 
@@ -439,32 +470,40 @@
         populateTxn._migrating = True
     for homeUID in requirements:
         calendars = requirements[homeUID]
-        home = yield populateTxn.calendarHomeWithUID(homeUID, True)
+        home = yield populateTxn.calendarHomeWithUID(homeUID, create=True)
         if calendars is not None:
             # We don't want the default calendar or inbox to appear unless it's
             # explicitly listed.
             try:
                 if config.RestrictCalendarsToOneComponentType:
                     for name in ical.allowedStoreComponents:
-                        yield home.removeCalendarWithName(home._componentCalendarName[name])
+                        yield home.removeCalendarWithName(
+                            home._componentCalendarName[name]
+                        )
                 else:
                     yield home.removeCalendarWithName("calendar")
                 yield home.removeCalendarWithName("inbox")
                 yield home.removeCalendarWithName("trash")
             except NoSuchHomeChildError:
                 pass
+
             for calendarName in calendars:
                 calendarObjNames = calendars[calendarName]
+
                 if calendarObjNames is not None:
                     # XXX should not be yielding!  this SQL will be executed
                     # first!
                     yield home.createCalendarWithName(calendarName)
                     calendar = yield home.calendarWithName(calendarName)
+
                     for objectName in calendarObjNames:
                         objData, metadata = calendarObjNames[objectName]
+
                         yield calendar._createCalendarObjectWithNameInternal(
                             objectName,
-                            VComponent.fromString(updateToCurrentYear(objData)),
+                            VComponent.fromString(
+                                updateToCurrentYear(objData)
+                            ),
                             internal_state=ComponentUpdateState.RAW,
                             options=metadata,
                         )
@@ -475,7 +514,8 @@
 
 def updateToCurrentYear(data):
     """
-    Update the supplied iCalendar data so that all dates are updated to the current year.
+    Update the supplied iCalendar data so that all dates are updated to the
+    current year.
     """
 
     nowYear = DateTime.getToday().getYear()
@@ -487,7 +527,8 @@
 
 def componentUpdate(data):
     """
-    Update the supplied iCalendar data so that all dates are updated to the current year.
+    Update the supplied iCalendar data so that all dates are updated to the
+    current year.
     """
 
     if len(relativeDateSubstitutions) == 0:
@@ -525,7 +566,7 @@
     for homeUID in md5s:
         calendars = md5s[homeUID]
         if calendars is not None:
-            home = yield populateTxn.calendarHomeWithUID(homeUID, True)
+            home = yield populateTxn.calendarHomeWithUID(homeUID, create=True)
             for calendarName in calendars:
                 calendarObjNames = calendars[calendarName]
                 if calendarObjNames is not None:
@@ -534,10 +575,12 @@
                     calendar = yield home.calendarWithName(calendarName)
                     for objectName in calendarObjNames:
                         md5 = calendarObjNames[objectName]
-                        obj = yield calendar.calendarObjectWithName(
-                            objectName,
+                        obj = (
+                            yield calendar.calendarObjectWithName(objectName)
                         )
-                        obj.properties()[md5key] = TwistedGETContentMD5.fromString(md5)
+                        obj.properties()[md5key] = (
+                            TwistedGETContentMD5.fromString(md5)
+                        )
     yield populateTxn.commit()
 
 
@@ -556,7 +599,7 @@
     for homeUID in requirements:
         addressbooks = requirements[homeUID]
         if addressbooks is not None:
-            home = yield populateTxn.addressbookHomeWithUID(homeUID, True)
+            home = yield populateTxn.addressbookHomeWithUID(homeUID, create=True)
             # We don't want the default addressbook
             try:
                 yield home.removeAddressBookWithName("addressbook")
@@ -568,7 +611,9 @@
                     # XXX should not be yielding!  this SQL will be executed
                     # first!
                     yield home.createAddressBookWithName(addressbookName)
-                    addressbook = yield home.addressbookWithName(addressbookName)
+                    addressbook = (
+                        yield home.addressbookWithName(addressbookName)
+                    )
                     for objectName in addressbookObjNames:
                         objData = addressbookObjNames[objectName]
                         yield addressbook.createAddressBookObjectWithName(
@@ -593,19 +638,23 @@
     for homeUID in md5s:
         addressbooks = md5s[homeUID]
         if addressbooks is not None:
-            home = yield populateTxn.addressbookHomeWithUID(homeUID, True)
+            home = yield populateTxn.addressbookHomeWithUID(homeUID, create=True)
             for addressbookName in addressbooks:
                 addressbookObjNames = addressbooks[addressbookName]
                 if addressbookObjNames is not None:
                     # XXX should not be yielding!  this SQL will be executed
                     # first!
-                    addressbook = yield home.addressbookWithName(addressbookName)
+                    addressbook = (
+                        yield home.addressbookWithName(addressbookName)
+                    )
                     for objectName in addressbookObjNames:
                         md5 = addressbookObjNames[objectName]
                         obj = yield addressbook.addressbookObjectWithName(
                             objectName,
                         )
-                        obj.properties()[md5key] = TwistedGETContentMD5.fromString(md5)
+                        obj.properties()[md5key] = (
+                            TwistedGETContentMD5.fromString(md5)
+                        )
     yield populateTxn.commit()
 
 
@@ -628,8 +677,8 @@
 
 
 def buildTestDirectory(
-    store, dataRoot, accounts=None, resources=None, augments=None, proxies=None,
-    serversDB=None, cacheSeconds=0
+    store, dataRoot, accounts=None, resources=None, augments=None,
+    proxies=None, serversDB=None, cacheSeconds=0,
 ):
     """
     @param store: the store for the directory to use
@@ -739,7 +788,8 @@
     @inlineCallbacks
     def buildStoreAndDirectory(
         self, accounts=None, resources=None, augments=None, proxies=None,
-        extraUids=None, serversDB=None, cacheSeconds=0, storeBuilder=theStoreBuilder
+        extraUids=None, serversDB=None, cacheSeconds=0,
+        storeBuilder=theStoreBuilder,
     ):
 
         self.serverRoot = self.mktemp()
@@ -792,10 +842,12 @@
         if not os.path.exists(config.LogRoot):
             os.makedirs(config.LogRoot)
 
-        # Work queues for implicit scheduling slow down tests a lot and require them all to add
-        # "waits" for work to complete. Rewriting all the current tests to do that is not practical
-        # right now, so we will turn this off by default. Instead we will have a set of tests dedicated
-        # to work queue-based scheduling which will patch this option to True.
+        # Work queues for implicit scheduling slow down tests a lot and require
+        # them all to add "waits" for work to complete.
+        # Rewriting all the current tests to do that is not practical right
+        # now, so we will turn this off by default.
+        # Instead we will have a set of tests dedicated to work queue-based
+        # scheduling which will patch this option to True.
         config.Scheduling.Options.WorkQueues.Enabled = False
 
         self.config = config
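
The comment above notes that queue-based implicit scheduling stays off for the bulk of the tests and is re-enabled only by tests written specifically for it. A hedged sketch of how such a test could opt back in, assuming trial's TestCase.patch() helper and the same config object used in this module (the test class name is hypothetical):

    from twisted.trial import unittest
    from twistedcaldav.config import config

    class WorkQueueSchedulingTests(unittest.TestCase):
        def setUp(self):
            # Re-enable queue-based scheduling for this test case only;
            # trial restores the patched value automatically on tearDown.
            self.patch(
                config.Scheduling.Options.WorkQueues, "Enabled", True
            )
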
@@ -829,20 +881,25 @@
         """
         if self.savedStore is None:
             self.savedStore = self.storeUnderTest()
+
         self.counter += 1
+
         if txn is None:
             txn = self.savedStore.newTransaction(
                 self.id() + " #" + str(self.counter)
             )
         else:
             txn._label = self.id() + " #" + str(self.counter)
+
         @inlineCallbacks
         def maybeCommitThis():
             try:
                 yield txn.commit()
             except AlreadyFinishedError:
                 pass
+
         self.addCleanup(maybeCommitThis)
+
         return txn
 
 
@@ -873,34 +930,38 @@
         return self.store
 
 
-    @inlineCallbacks
-    def homeUnderTest(self, txn=None, name="home1", create=False):
+    def homeUnderTest(self, txn=None, name="home1", status=None, create=False):
         """
         Get the calendar home detailed by C{requirements['home1']}.
         """
         if txn is None:
             txn = self.transactionUnderTest()
-        returnValue((yield txn.calendarHomeWithUID(name, create=create)))
+        return txn.calendarHomeWithUID(name, status=status, create=create)
 
 
     @inlineCallbacks
-    def calendarUnderTest(self, txn=None, name="calendar_1", home="home1"):
+    def calendarUnderTest(self, txn=None, name="calendar_1", home="home1", status=None):
         """
         Get the calendar detailed by C{requirements['home1']['calendar_1']}.
         """
-        returnValue((
-            yield (yield self.homeUnderTest(txn, home)).calendarWithName(name)
-        ))
+        home = yield self.homeUnderTest(txn, home, status=status)
+        calendar = yield home.calendarWithName(name)
+        returnValue(calendar)
 
 
     @inlineCallbacks
-    def calendarObjectUnderTest(self, txn=None, name="1.ics", calendar_name="calendar_1", home="home1"):
+    def calendarObjectUnderTest(
+        self, txn=None, name="1.ics", calendar_name="calendar_1", home="home1", status=None
+    ):
         """
         Get the calendar detailed by
         C{requirements[home][calendar_name][name]}.
         """
-        returnValue((yield (yield self.calendarUnderTest(txn, name=calendar_name, home=home))
-                     .calendarObjectWithName(name)))
+        calendar = yield self.calendarUnderTest(
+            txn, name=calendar_name, home=home, status=status
+        )
+        object = yield calendar.calendarObjectWithName(name)
+        returnValue(object)
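
The reworked fixture helpers above now return the underlying Deferreds (with an optional status argument) instead of double-yielding through inlineCallbacks. A short sketch of chaining them from a test method, using only the default fixture names defined in this file:

    @inlineCallbacks
    def test_fixtureLookup(self):
        # Method body of a CommonCommonTests subclass (sketch); the helpers
        # default to the "home1" / "calendar_1" / "1.ics" fixtures.
        home = yield self.homeUnderTest(name="home1")
        calendar = yield self.calendarUnderTest(home="home1", name="calendar_1")
        obj = yield self.calendarObjectUnderTest(name="1.ics")
        self.assertNotEqual(obj, None)
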
 
 
     def addressbookHomeUnderTest(self, txn=None, name="home1"):
@@ -915,52 +976,66 @@
     @inlineCallbacks
     def addressbookUnderTest(self, txn=None, name="addressbook", home="home1"):
         """
-        Get the addressbook detailed by C{requirements['home1']['addressbook']}.
+        Get the addressbook detailed by
+        C{requirements['home1']['addressbook']}.
         """
-        returnValue((
-            yield (yield self.addressbookHomeUnderTest(txn=txn, name=home)).addressbookWithName(name)
-        ))
+        home = yield self.addressbookHomeUnderTest(txn=txn, name=home)
+        addressbook = yield home.addressbookWithName(name)
+        returnValue(addressbook)
 
 
     @inlineCallbacks
-    def addressbookObjectUnderTest(self, txn=None, name="1.vcf", addressbook_name="addressbook", home="home1"):
+    def addressbookObjectUnderTest(
+        self, txn=None, name="1.vcf",
+        addressbook_name="addressbook", home="home1",
+    ):
         """
         Get the addressbook detailed by
         C{requirements['home1']['addressbook']['1.vcf']}.
         """
-        returnValue((yield (yield self.addressbookUnderTest(txn=txn, name=addressbook_name, home=home))
-                    .addressbookObjectWithName(name)))
+        addressBook = yield self.addressbookUnderTest(
+            txn=txn, name=addressbook_name, home=home
+        )
+        object = yield addressBook.addressbookObjectWithName(name)
+        returnValue(object)
 
 
-    @inlineCallbacks
+    def notificationCollectionUnderTest(self, txn=None, name="home1", status=None, create=False):
+        if txn is None:
+            txn = self.transactionUnderTest()
+        return txn.notificationsWithUID(name, status=status, create=create)
+
+
     def userRecordWithShortName(self, shortname):
-        record = yield self.directory.recordWithShortName(self.directory.recordType.user, shortname)
-        returnValue(record)
+        return self.directory.recordWithShortName(
+            self.directory.recordType.user, shortname
+        )
 
 
     @inlineCallbacks
     def userUIDFromShortName(self, shortname):
-        record = yield self.directory.recordWithShortName(self.directory.recordType.user, shortname)
+        record = yield self.directory.recordWithShortName(
+            self.directory.recordType.user, shortname
+        )
         returnValue(record.uid if record is not None else None)
 
 
-    @inlineCallbacks
     def addRecordFromFields(self, fields):
         updatedRecord = DirectoryRecord(self.directory, fields)
-        yield self.directory.updateRecords((updatedRecord,), create=True)
+        return self.directory.updateRecords((updatedRecord,), create=True)
 
 
-    @inlineCallbacks
     def removeRecord(self, uid):
-        yield self.directory.removeRecords([uid])
+        return self.directory.removeRecords([uid])
 
 
-    @inlineCallbacks
-    def changeRecord(self, record, fieldname, value):
+    def changeRecord(self, record, fieldname, value, directory=None):
+        if directory is None:
+            directory = self.directory
         fields = record.fields.copy()
         fields[fieldname] = value
-        updatedRecord = DirectoryRecord(self.directory, fields)
-        yield self.directory.updateRecords((updatedRecord,))
+        updatedRecord = DirectoryRecord(directory, fields)
+        return directory.updateRecords((updatedRecord,))
 
 
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/test/test_upgrade.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/test/test_upgrade.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/test/test_upgrade.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -163,7 +163,7 @@
             startTxn = store.newTransaction("test_dbUpgrades")
             yield startTxn.execSQL("create schema test_dbUpgrades;")
             yield startTxn.execSQL("set search_path to test_dbUpgrades;")
-            yield startTxn.execSQL(path.getContent())
+            yield startTxn.execSQLBlock(path.getContent())
             yield startTxn.commit()
 
         @inlineCallbacks
@@ -269,7 +269,7 @@
             startTxn = store.newTransaction("test_dbUpgrades")
             yield startTxn.execSQL("create schema test_dbUpgrades;")
             yield startTxn.execSQL("set search_path to test_dbUpgrades;")
-            yield startTxn.execSQL(path.getContent())
+            yield startTxn.execSQLBlock(path.getContent())
             yield startTxn.execSQL("update CALENDARSERVER set VALUE = '%s' where NAME = '%s';" % (oldVersion, versionKey,))
             yield startTxn.commit()
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/test/test_upgrade_with_data.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/test/test_upgrade_with_data.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/test/test_upgrade_with_data.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -82,7 +82,7 @@
         startTxn = self.store.newTransaction("test_dbUpgrades")
         yield startTxn.execSQL("create schema test_dbUpgrades;")
         yield startTxn.execSQL("set search_path to test_dbUpgrades;")
-        yield startTxn.execSQL(path.getContent())
+        yield startTxn.execSQLBlock(path.getContent())
         yield startTxn.commit()
 
         self.addCleanup(self.cleanUp)
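
The upgrade-test changes above switch from execSQL() to execSQLBlock() when loading the dumped schema, presumably because the dump is a block of many statements rather than a single one. A hedged sketch of the distinction, assuming the same transaction and FilePath APIs used in these tests:

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def loadSchema(store, path):
        txn = store.newTransaction("loadSchema")
        # Single statements still go through execSQL() ...
        yield txn.execSQL("create schema test_dbUpgrades;")
        yield txn.execSQL("set search_path to test_dbUpgrades;")
        # ... but a whole schema dump is executed as a block.
        yield txn.execSQLBlock(path.getContent())
        yield txn.commit()
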

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_2_to_3.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_2_to_3.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/upgrades/calendar_upgrade_from_2_to_3.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -21,9 +21,9 @@
 as in calendar data and properties.
 """
 
-from txdav.common.datastore.sql import fixUUIDNormalization
 from twisted.internet.defer import inlineCallbacks
 from txdav.common.datastore.upgrade.sql.upgrades.util import updateCalendarDataVersion
+from txdav.common.datastore.sql_util import fixUUIDNormalization
 
 UPGRADE_TO_VERSION = 3
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/upgrades/test/test_notification_upgrade_from_0_to_1.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/upgrades/test/test_notification_upgrade_from_0_to_1.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/upgrade/sql/upgrades/test/test_notification_upgrade_from_0_to_1.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -169,7 +169,7 @@
         )
 
         for uid, notificationtype, _ignore_jtype, notificationdata, _ignore_jdata in data:
-            notifications = yield self.transactionUnderTest().notificationsWithUID("user01")
+            notifications = yield self.transactionUnderTest().notificationsWithUID("user01", create=True)
             yield notifications.writeNotificationObject(uid, notificationtype, notificationdata)
 
         # Force data version to previous

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/work/test/test_revision_cleanup.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/work/test/test_revision_cleanup.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/datastore/work/test/test_revision_cleanup.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -55,7 +55,7 @@
         for homeUID in addressookRequirements:
             addressbooks = addressookRequirements[homeUID]
             if addressbooks is not None:
-                home = yield populateTxn.addressbookHomeWithUID(homeUID, True)
+                home = yield populateTxn.addressbookHomeWithUID(homeUID, create=True)
                 addressbook = home.addressbook()
 
                 addressbookObjNames = addressbooks[addressbook.name()]

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/common/icommondatastore.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/common/icommondatastore.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/common/icommondatastore.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -41,6 +41,12 @@
     "AlreadyInTrashError",
 ]
 
+# Constants for top-level store types
+ECALENDARTYPE = 0
+EADDRESSBOOKTYPE = 1
+ENOTIFICATIONTYPE = 2
+
+
 #
 # Exceptions
 #
@@ -236,6 +242,7 @@
     """
 
 
+
 class AlreadyInTrashError(CommonStoreError):
     """
     An object resource being removed is already in the trash.
@@ -355,7 +362,7 @@
         @param token: The device token of the subscriber
         @type token: C{str}
 
-        @return: tuples of (key, timestamp, guid)
+        @return: list of L{Record}
         """
 
     def apnSubscriptionsByKey(key): #@NoSelf
@@ -365,7 +372,7 @@
         @param key: The push key
         @type key: C{str}
 
-        @return: tuples of (token, guid)
+        @return: list of L{Record}
         """
 
     def apnSubscriptionsBySubscriber(guid): #@NoSelf
@@ -375,7 +382,7 @@
         @param guid: The GUID of the subscribed principal
         @type guid: C{str}
 
-        @return: tuples of (token, key, timestamp, userAgent, ipAddr)
+        @return: list of L{Record}
         """
 
     def imipCreateToken(organizer, attendee, icaluid, token=None): #@NoSelf
@@ -397,8 +404,8 @@
         """
         Returns the organizer, attendee, and icaluid corresponding to the token
 
-        @param token: the token to look up
-        @type token: C{str}
+        @param token: the token record
+        @type token: L{Record}
         """
 
     def imipGetToken(organizer, attendee, icaluid): #@NoSelf

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/who/delegates.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/who/delegates.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/who/delegates.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -353,13 +353,8 @@
 
         if delegate.recordType == BaseRecordType.group:
             # find the groupID
-            (
-                groupID, _ignore_name, _ignore_membershipHash, _ignore_modified,
-                _ignore_extant
-            ) = yield txn.groupByUID(
-                delegate.uid
-            )
-            yield txn.addDelegateGroup(delegator.uid, groupID, readWrite)
+            group = yield txn.groupByUID(delegate.uid)
+            yield txn.addDelegateGroup(delegator.uid, group.groupID, readWrite)
         else:
             yield txn.addDelegate(delegator.uid, delegate.uid, readWrite)
 
@@ -393,13 +388,8 @@
 
         if delegate.recordType == BaseRecordType.group:
             # find the groupID
-            (
-                groupID, _ignore_name, _ignore_membershipHash, _ignore_modified,
-                _ignore_extant
-            ) = yield txn.groupByUID(
-                delegate.uid
-            )
-            yield txn.removeDelegateGroup(delegator.uid, groupID, readWrite)
+            group = yield txn.groupByUID(delegate.uid)
+            yield txn.removeDelegateGroup(delegator.uid, group.groupID, readWrite)
         else:
             yield txn.removeDelegate(delegator.uid, delegate.uid, readWrite)
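
The delegates changes collapse the old five-element tuple unpacking of txn.groupByUID() into a single group record whose groupID attribute is used directly. A small sketch of the new calling pattern, using only names that appear in this diff:

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def delegateGroupID(txn, delegate):
        # Before: (groupID, name, membershipHash, modified, extant) = ...
        # After: one record, read by attribute; None if the group is unknown.
        group = yield txn.groupByUID(delegate.uid)
        returnValue(None if group is None else group.groupID)
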
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/who/groups.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/who/groups.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/who/groups.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -20,13 +20,15 @@
 """
 
 from twext.enterprise.dal.record import fromTable
-from twext.enterprise.dal.syntax import Delete, Select, Parameter
+from twext.enterprise.dal.syntax import Select
 from twext.enterprise.jobqueue import AggregatedWorkItem, RegeneratingWorkItem
 from twext.python.log import Logger
 from twisted.internet.defer import inlineCallbacks, returnValue, succeed, \
     DeferredList
 from twistedcaldav.config import config
 from txdav.caldav.datastore.sql import CalendarStoreFeatures
+from txdav.caldav.datastore.sql_directory import GroupAttendeeRecord
+from txdav.common.datastore.sql_directory import GroupsRecord
 from txdav.common.datastore.sql_tables import schema, _BIND_MODE_OWN
 import datetime
 import itertools
@@ -85,7 +87,7 @@
 
 class GroupRefreshWork(AggregatedWorkItem, fromTable(schema.GROUP_REFRESH_WORK)):
 
-    group = property(lambda self: (self.table.GROUP_UID == self.groupUid))
+    group = property(lambda self: (self.table.GROUP_UID == self.groupUID))
 
     @inlineCallbacks
     def doWork(self):
@@ -94,27 +96,27 @@
 
             try:
                 yield groupCacher.refreshGroup(
-                    self.transaction, self.groupUid.decode("utf-8")
+                    self.transaction, self.groupUID.decode("utf-8")
                 )
             except Exception, e:
                 log.error(
                     "Failed to refresh group {group} {err}",
-                    group=self.groupUid, err=e
+                    group=self.groupUID, err=e
                 )
 
         else:
             log.debug(
                 "Rescheduling group refresh for {group}: {when}",
-                group=self.groupUid,
+                group=self.groupUID,
                 when=datetime.datetime.utcnow() + datetime.timedelta(seconds=10)
             )
-            yield self.reschedule(self.transaction, 10, groupUID=self.groupUid)
+            yield self.reschedule(self.transaction, 10, groupUID=self.groupUID)
 
 
 
 class GroupDelegateChangesWork(AggregatedWorkItem, fromTable(schema.GROUP_DELEGATE_CHANGES_WORK)):
 
-    delegator = property(lambda self: (self.table.DELEGATOR_UID == self.delegatorUid))
+    delegator = property(lambda self: (self.table.DELEGATOR_UID == self.delegatorUID))
 
     @inlineCallbacks
     def doWork(self):
@@ -124,14 +126,14 @@
             try:
                 yield groupCacher.applyExternalAssignments(
                     self.transaction,
-                    self.delegatorUid.decode("utf-8"),
-                    self.readDelegateUid.decode("utf-8"),
-                    self.writeDelegateUid.decode("utf-8")
+                    self.delegatorUID.decode("utf-8"),
+                    self.readDelegateUID.decode("utf-8"),
+                    self.writeDelegateUID.decode("utf-8")
                 )
             except Exception, e:
                 log.error(
                     "Failed to apply external delegates for {uid} {err}",
-                    uid=self.delegatorUid, err=e
+                    uid=self.delegatorUID, err=e
                 )
 
 
@@ -182,8 +184,8 @@
             homeID = rows[0][0]
             home = yield self.transaction.calendarHomeWithResourceID(homeID)
             calendar = yield home.childWithID(self.calendarID)
-            groupUID = ((yield self.transaction.groupByID(self.groupID)))[0]
-            yield calendar.reconcileGroupSharee(groupUID)
+            group = (yield self.transaction.groupByID(self.groupID))
+            yield calendar.reconcileGroupSharee(group.groupUID)
 
 
 
@@ -268,33 +270,28 @@
         #     "Groups to refresh: {g}", g=groupUIDs
         # )
 
-        gr = schema.GROUPS
         if config.AutomaticPurging.Enabled and groupUIDs:
             # remove unused groups and groups that have not been seen in a while
             dateLimit = (
                 datetime.datetime.utcnow() -
                 datetime.timedelta(seconds=float(config.AutomaticPurging.GroupPurgeIntervalSeconds))
             )
-            rows = yield Delete(
-                From=gr,
-                Where=(
-                    (gr.EXTANT == 0).And(gr.MODIFIED < dateLimit)
+            rows = yield GroupsRecord.deletesome(
+                txn,
+                (
+                    (GroupsRecord.extant == 0).And(GroupsRecord.modified < dateLimit)
                 ).Or(
-                    gr.GROUP_UID.NotIn(
-                        Parameter("groupUIDs", len(groupUIDs))
-                    )
-                ) if groupUIDs else None,
-                Return=[gr.GROUP_UID]
-            ).on(txn, groupUIDs=groupUIDs)
+                    GroupsRecord.groupUID.NotIn(groupUIDs)
+                ),
+                returnCols=GroupsRecord.groupUID,
+            )
         else:
             # remove unused groups
-            rows = yield Delete(
-                From=gr,
-                Where=gr.GROUP_UID.NotIn(
-                    Parameter("groupUIDs", len(groupUIDs))
-                ) if groupUIDs else None,
-                Return=[gr.GROUP_UID]
-            ).on(txn, groupUIDs=groupUIDs)
+            rows = yield GroupsRecord.deletesome(
+                txn,
+                GroupsRecord.groupUID.NotIn(groupUIDs) if groupUIDs else None,
+                returnCols=GroupsRecord.groupUID,
+            )
         deletedGroupUIDs = [row[0] for row in rows]
         if deletedGroupUIDs:
             self.log.debug("Deleted old or unused groups {d}", d=deletedGroupUIDs)
@@ -302,7 +299,7 @@
         # For each of those groups, create a per-group refresh work item
         for groupUID in set(groupUIDs) - set(deletedGroupUIDs):
             self.log.debug("Enqueuing group refresh for {u}", u=groupUID)
-            yield GroupRefreshWork.reschedule(txn, 0, groupUid=groupUID)
+            yield GroupRefreshWork.reschedule(txn, 0, groupUID=groupUID)
 
 
     @inlineCallbacks
@@ -335,9 +332,9 @@
                     )
                 else:
                     yield GroupDelegateChangesWork.reschedule(
-                        txn, 0, delegatorUid=delegatorUID,
-                        readDelegateUid=readDelegateUID,
-                        writeDelegateUid=writeDelegateUID
+                        txn, 0, delegatorUID=delegatorUID,
+                        readDelegateUID=readDelegateUID,
+                        writeDelegateUID=writeDelegateUID
                     )
         if removed:
             for delegatorUID in removed:
@@ -351,8 +348,8 @@
                     )
                 else:
                     yield GroupDelegateChangesWork.reschedule(
-                        txn, 0, delegatorUid=delegatorUID,
-                        readDelegateUid="", writeDelegateUid=""
+                        txn, 0, delegatorUID=delegatorUID,
+                        readDelegateUID="", writeDelegateUID=""
                     )
 
 
@@ -367,26 +364,20 @@
         readDelegateGroupID = writeDelegateGroupID = None
 
         if readDelegateUID:
-            (
-                readDelegateGroupID, _ignore_name, _ignore_hash,
-                _ignore_modified, _ignore_extant
-            ) = (
-                yield txn.groupByUID(readDelegateUID)
-            )
-            if readDelegateGroupID is None:
+            readDelegateGroup = yield txn.groupByUID(readDelegateUID)
+            if readDelegateGroup is None:
                 # The group record does not actually exist
                 readDelegateUID = None
+            else:
+                readDelegateGroupID = readDelegateGroup.groupID
 
         if writeDelegateUID:
-            (
-                writeDelegateGroupID, _ignore_name, _ignore_hash,
-                _ignore_modified, _ignore_extant
-            ) = (
-                yield txn.groupByUID(writeDelegateUID)
-            )
-            if writeDelegateGroupID is None:
+            writeDelegateGroup = yield txn.groupByUID(writeDelegateUID)
+            if writeDelegateGroup is None:
                 # The group record does not actually exist
                 writeDelegateUID = None
+            else:
+                writeDelegateGroupID = writeDelegateGroup.groupID
 
         yield txn.assignExternalDelegates(
             delegatorUID, readDelegateGroupID, writeDelegateGroupID,
@@ -411,45 +402,36 @@
         else:
             self.log.debug("Got group record: {u}", u=record.uid)
 
-        (
-            groupID, cachedName, cachedMembershipHash, _ignore_modified,
-            cachedExtant
-        ) = yield txn.groupByUID(
-            groupUID,
-            create=(record is not None)
-        )
+        group = yield txn.groupByUID(groupUID, create=(record is not None))
 
-        if groupID:
-            membershipChanged, addedUIDs, removedUIDs = yield txn.refreshGroup(
-                groupUID, record, groupID,
-                cachedName, cachedMembershipHash, cachedExtant
-            )
+        if group:
+            membershipChanged, addedUIDs, removedUIDs = yield txn.refreshGroup(group, record)
 
             if membershipChanged:
                 self.log.info(
                     "Membership changed for group {uid} {name}:\n\tadded {added}\n\tremoved {removed}",
-                    uid=groupUID,
-                    name=cachedName,
+                    uid=group.groupUID,
+                    name=group.name,
                     added=",".join(addedUIDs),
                     removed=",".join(removedUIDs),
                 )
 
                 # Send cache change notifications
                 if self.cacheNotifier is not None:
-                    self.cacheNotifier.changed(groupUID)
+                    self.cacheNotifier.changed(group.groupUID)
                     for uid in itertools.chain(addedUIDs, removedUIDs):
                         self.cacheNotifier.changed(uid)
 
                 # Notifier other store APIs of changes
-                wpsAttendee = yield self.scheduleGroupAttendeeReconciliations(txn, groupID)
-                wpsShareee = yield self.scheduleGroupShareeReconciliations(txn, groupID)
+                wpsAttendee = yield self.scheduleGroupAttendeeReconciliations(txn, group.groupID)
+                wpsShareee = yield self.scheduleGroupShareeReconciliations(txn, group.groupID)
 
                 returnValue(wpsAttendee + wpsShareee)
             else:
                 self.log.debug(
                     "No membership change for group {uid} {name}",
-                    uid=groupUID,
-                    name=cachedName
+                    uid=group.groupUID,
+                    name=group.name
                 )
 
         returnValue(tuple())
@@ -480,19 +462,15 @@
         work items for them.
         returns: WorkProposal
         """
-        ga = schema.GROUP_ATTENDEE
-        rows = yield Select(
-            [ga.RESOURCE_ID, ],
-            From=ga,
-            Where=ga.GROUP_ID == groupID,
-        ).on(txn)
 
+        records = yield GroupAttendeeRecord.querysimple(txn, groupID=groupID)
+
         wps = []
-        for [eventID] in rows:
+        for record in records:
             wp = yield GroupAttendeeReconciliationWork.reschedule(
                 txn,
                 seconds=float(config.GroupAttendees.ReconciliationDelaySeconds),
-                resourceID=eventID,
+                resourceID=record.resourceID,
                 groupID=groupID,
             )
             wps.append(wp)
@@ -546,20 +524,15 @@
             )
 
         # Get groupUIDs for all group attendees
-        ga = schema.GROUP_ATTENDEE
-        gr = schema.GROUPS
-        rows = yield Select(
-            [gr.GROUP_UID],
-            From=gr,
-            Where=gr.GROUP_ID.In(
-                Select(
-                    [ga.GROUP_ID],
-                    From=ga,
-                    Distinct=True
-                )
-            )
-        ).on(txn)
-        attendeeGroupUIDs = frozenset([row[0] for row in rows])
+        groups = yield GroupsRecord.query(
+            txn,
+            GroupsRecord.groupID.In(GroupAttendeeRecord.queryExpr(
+                expr=None,
+                attributes=(GroupAttendeeRecord.groupID,),
+                distinct=True,
+            ))
+        )
+        attendeeGroupUIDs = frozenset([group.groupUID for group in groups])
         self.log.info(
             "There are {count} group attendees", count=len(attendeeGroupUIDs)
         )

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_delegates.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_delegates.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_delegates.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -19,6 +19,8 @@
 """
 
 from txdav.common.datastore.sql import CommonStoreTransaction
+from txdav.common.datastore.sql_directory import DelegateRecord, \
+    DelegateGroupsRecord
 from txdav.who.delegates import Delegates, RecordType as DelegateRecordType
 from txdav.who.groups import GroupCacher
 from twext.who.idirectory import RecordType
@@ -211,12 +213,9 @@
                 yield self.directory.recordWithShortName(RecordType.user, name)
             )
             newSet.add(record.uid)
-        (
-            groupID, name, _ignore_membershipHash, _ignore_modified,
-            _ignore_extant
-        ) = (yield txn.groupByUID(group1.uid))
+        group = yield txn.groupByUID(group1.uid)
         _ignore_added, _ignore_removed = (
-            yield self.groupCacher.synchronizeMembers(txn, groupID, newSet)
+            yield self.groupCacher.synchronizeMembers(txn, group.groupID, newSet)
         )
         delegates = (yield Delegates.delegatesOf(txn, delegator, True, expanded=True))
         self.assertEquals(
@@ -261,15 +260,14 @@
         yield txn.commit()
 
         txn = self.store.newTransaction(label="test_noDuplication")
-        results = (
-            yield txn._selectDelegatesQuery.on(
-                txn,
-                delegator=delegator.uid.encode("utf-8"),
-                readWrite=1
+        results = yield DelegateRecord.query(
+            txn,
+            (DelegateRecord.delegator == delegator.uid.encode("utf-8")).And(
+                DelegateRecord.readWrite == 1
             )
         )
         yield txn.commit()
-        self.assertEquals([["__sagen1__"]], map(list, results))
+        self.assertEquals(["__sagen1__", ], [record.delegate for record in results])
 
         # Delegate groups:
         group1 = yield self.directory.recordWithUID(u"__top_group_1__")
@@ -283,15 +281,13 @@
         yield txn.commit()
 
         txn = self.store.newTransaction(label="test_noDuplication")
-        results = (
-            yield txn._selectDelegateGroupsQuery.on(
-                txn,
-                delegator=delegator.uid.encode("utf-8"),
-                readWrite=1
-            )
+        results = yield DelegateGroupsRecord.delegateGroups(
+            txn,
+            delegator.uid,
+            True,
         )
         yield txn.commit()
-        self.assertEquals([["__top_group_1__"]], map(list, results))
+        self.assertEquals(["__top_group_1__", ], [record.groupUID for record in results])
 
 
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_group_attendees.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_group_attendees.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_group_attendees.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -18,7 +18,6 @@
     group attendee tests
 """
 
-from twext.enterprise.dal.syntax import Insert
 from twext.enterprise.jobqueue import JobItem
 from twext.python.filepath import CachingFilePath as FilePath
 from twext.who.directory import DirectoryService
@@ -27,8 +26,8 @@
 from twisted.trial import unittest
 from twistedcaldav.config import config
 from twistedcaldav.ical import Component, normalize_iCalStr
+from txdav.caldav.datastore.sql_directory import GroupAttendeeRecord
 from txdav.caldav.datastore.test.util import populateCalendarsFrom, CommonCommonTests
-from txdav.common.datastore.sql_tables import schema
 from txdav.who.directory import CalendarDirectoryRecordMixin
 from txdav.who.groups import GroupCacher
 import os
@@ -871,16 +870,13 @@
         # finally, simulate an event that has become old
         self.patch(CalendarDirectoryRecordMixin, "expandedMembers", unpatchedExpandedMembers)
 
-        (
-            groupID, _ignore_name, _ignore_membershipHash, _ignore_modDate,
-            _ignore_extant
-        ) = yield self.transactionUnderTest().groupByUID("group01")
-        ga = schema.GROUP_ATTENDEE
-        yield Insert({
-            ga.RESOURCE_ID: cobj._resourceID,
-            ga.GROUP_ID: groupID,
-            ga.MEMBERSHIP_HASH: (-1),
-        }).on(self.transactionUnderTest())
+        group = yield self.transactionUnderTest().groupByUID("group01")
+        yield GroupAttendeeRecord.create(
+            self.transactionUnderTest(),
+            resourceID=cobj._resourceID,
+            groupID=group.groupID,
+            membershipHash="None",
+        )
         wps = yield groupCacher.refreshGroup(self.transactionUnderTest(), "group01")
         self.assertEqual(len(wps), 1)
         yield self.commit()
@@ -1033,16 +1029,13 @@
         # finally, simulate an event that has become old
         self.patch(CalendarDirectoryRecordMixin, "expandedMembers", unpatchedExpandedMembers)
 
-        (
-            groupID, _ignore_name, _ignore_membershipHash, _ignore_modDate,
-            _ignore_extant
-        ) = yield self.transactionUnderTest().groupByUID("group01")
-        ga = schema.GROUP_ATTENDEE
-        yield Insert({
-            ga.RESOURCE_ID: cobj._resourceID,
-            ga.GROUP_ID: groupID,
-            ga.MEMBERSHIP_HASH: (-1),
-        }).on(self.transactionUnderTest())
+        group = yield self.transactionUnderTest().groupByUID("group01")
+        yield GroupAttendeeRecord.create(
+            self.transactionUnderTest(),
+            resourceID=cobj._resourceID,
+            groupID=group.groupID,
+            membershipHash="None",
+        )
         wps = yield groupCacher.refreshGroup(self.transactionUnderTest(), "group01")
         self.assertEqual(len(wps), 1)
         yield self.commit()
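
Both group-attendee tests above swap a raw Insert into GROUP_ATTENDEE for GroupAttendeeRecord.create(). A sketch of the record-based insert pulled out into a helper; cobj stands in for the calendar object fixture used by the tests:

    from twisted.internet.defer import inlineCallbacks
    from txdav.caldav.datastore.sql_directory import GroupAttendeeRecord

    @inlineCallbacks
    def linkEventToGroup(txn, cobj, groupUID):
        # Look up (or create) the cached group row, then mark the event as
        # having a group attendee with a stale membership hash so the next
        # refreshGroup() run reconciles it.
        group = yield txn.groupByUID(groupUID)
        yield GroupAttendeeRecord.create(
            txn,
            resourceID=cobj._resourceID,
            groupID=group.groupID,
            membershipHash="None",
        )
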

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_group_sharees.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_group_sharees.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_group_sharees.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -84,7 +84,7 @@
 
     @inlineCallbacks
     def _check_notifications(self, uid, items):
-        notifyHome = yield self.transactionUnderTest().notificationsWithUID(uid)
+        notifyHome = yield self.transactionUnderTest().notificationsWithUID(uid, create=True)
         notifications = yield notifyHome.listNotificationObjects()
         self.assertEqual(set(notifications), set(items))
 

Modified: CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_groups.py
===================================================================
--- CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_groups.py	2015-03-10 16:58:24 UTC (rev 14554)
+++ CalendarServer/branches/users/sagen/trashcan-5/txdav/who/test/test_groups.py	2015-03-10 20:42:34 UTC (rev 14555)
@@ -67,27 +67,24 @@
         record = yield self.directory.recordWithUID(u"__top_group_1__")
         yield self.groupCacher.refreshGroup(txn, record.uid)
 
-        (
-            groupID, _ignore_name, membershipHash, _ignore_modified,
-            extant
-        ) = (yield txn.groupByUID(record.uid))
+        group = (yield txn.groupByUID(record.uid))
 
-        self.assertEquals(extant, True)
-        self.assertEquals(membershipHash, "553eb54e3bbb26582198ee04541dbee4")
+        self.assertEquals(group.extant, True)
+        self.assertEquals(group.membershipHash, "553eb54e3bbb26582198ee04541dbee4")
 
-        groupUID, name, membershipHash, extant = (yield txn.groupByID(groupID))
-        self.assertEquals(groupUID, record.uid)
-        self.assertEquals(name, u"Top Group 1")
-        self.assertEquals(membershipHash, "553eb54e3bbb26582198ee04541dbee4")
-        self.assertEquals(extant, True)
+        group = yield txn.groupByID(group.groupID)
+        self.assertEquals(group.groupUID, record.uid)
+        self.assertEquals(group.name, u"Top Group 1")
+        self.assertEquals(group.membershipHash, "553eb54e3bbb26582198ee04541dbee4")
+        self.assertEquals(group.extant, True)
 
-        members = (yield txn.groupMemberUIDs(groupID))
+        members = (yield txn.groupMemberUIDs(group.groupID))
         self.assertEquals(
             set([u'__cdaboo1__', u'__glyph1__', u'__sagen1__', u'__wsanchez1__']),
             members
         )
 
-        records = (yield self.groupCacher.cachedMembers(txn, groupID))
+        records = (yield self.groupCacher.cachedMembers(txn, group.groupID))
         self.assertEquals(
             set([r.uid for r in records]),
             set([u'__cdaboo1__', u'__glyph1__', u'__sagen1__', u'__wsanchez1__'])
@@ -116,10 +113,7 @@
         # Refresh the group so it's assigned a group_id
         uid = u"__top_group_1__"
         yield self.groupCacher.refreshGroup(txn, uid)
-        (
-            groupID, name, _ignore_membershipHash, _ignore_modified,
-            _ignore_extant
-        ) = yield txn.groupByUID(uid)
+        group = yield txn.groupByUID(uid)
 
         # Remove two members, and add one member
         newSet = set()
@@ -133,12 +127,12 @@
             newSet.add(record.uid)
         added, removed = (
             yield self.groupCacher.synchronizeMembers(
-                txn, groupID, newSet
+                txn, group.groupID, newSet
             )
         )
         self.assertEquals(added, set(["__dre1__", ]))
         self.assertEquals(removed, set(["__glyph1__", "__sagen1__", ]))
-        records = (yield self.groupCacher.cachedMembers(txn, groupID))
+        records = (yield self.groupCacher.cachedMembers(txn, group.groupID))
         self.assertEquals(
             set([r.shortNames[0] for r in records]),
             set(["wsanchez1", "cdaboo1", "dre1"])
@@ -146,11 +140,11 @@
 
         # Remove all members
         added, removed = (
-            yield self.groupCacher.synchronizeMembers(txn, groupID, set())
+            yield self.groupCacher.synchronizeMembers(txn, group.groupID, set())
         )
         self.assertEquals(added, set())
         self.assertEquals(removed, set(["__wsanchez1__", "__cdaboo1__", "__dre1__", ]))
-        records = (yield self.groupCacher.cachedMembers(txn, groupID))
+        records = (yield self.groupCacher.cachedMembers(txn, group.groupID))
         self.assertEquals(len(records), 0)
 
         yield txn.commit()
@@ -168,12 +162,12 @@
         uid = u"__top_group_1__"
         hash = "553eb54e3bbb26582198ee04541dbee4"
         yield self.groupCacher.refreshGroup(txn, uid)
-        (
-            groupID, _ignore_name, _ignore_membershipHash, _ignore_modified,
-            _ignore_extant
-        ) = yield txn.groupByUID(uid)
-        results = yield txn.groupByID(groupID)
-        self.assertEquals((uid, u"Top Group 1", hash, True), results)
+        group = yield txn.groupByUID(uid)
+        group = yield txn.groupByID(group.groupID)
+        self.assertEqual(group.groupUID, uid)
+        self.assertEqual(group.name, u"Top Group 1")
+        self.assertEqual(group.membershipHash, hash)
+        self.assertEqual(group.extant, True)
 
         yield txn.commit()
 
@@ -683,31 +677,25 @@
 
             txn = store.newTransaction()
             yield self.groupCacher.refreshGroup(txn, uid)
-            (
-                _ignore_groupID, _ignore_name, _ignore_membershipHash, _ignore_modified,
-                extant
-            ) = (yield txn.groupByUID(uid))
+            group = yield txn.groupByUID(uid)
             yield txn.commit()
 
-            self.assertTrue(extant)
+            self.assertTrue(group.extant)
 
             # Remove the group
             yield self.directory.removeRecords([uid])
 
             txn = store.newTransaction()
             yield self.groupCacher.refreshGroup(txn, uid)
-            (
-                groupID, _ignore_name, _ignore_membershipHash, _ignore_modified,
-                extant
-            ) = (yield txn.groupByUID(uid))
+            group = (yield txn.groupByUID(uid))
             yield txn.commit()
 
             # Extant = False
-            self.assertFalse(extant)
+            self.assertFalse(group.extant)
 
             # The list of members stored in the DB for this group is now empty
             txn = store.newTransaction()
-            members = yield txn.groupMemberUIDs(groupID)
+            members = yield txn.groupMemberUIDs(group.groupID)
             yield txn.commit()
             self.assertEquals(members, set())
 
@@ -732,18 +720,15 @@
 
             txn = store.newTransaction()
             yield self.groupCacher.refreshGroup(txn, uid)
-            (
-                groupID, _ignore_name, _ignore_membershipHash, _ignore_modified,
-                extant
-            ) = (yield txn.groupByUID(uid))
+            group = (yield txn.groupByUID(uid))
             yield txn.commit()
 
             # Extant = True
-            self.assertTrue(extant)
+            self.assertTrue(group.extant)
 
             # The list of members stored in the DB for this group has 100 users
             txn = store.newTransaction()
-            members = yield txn.groupMemberUIDs(groupID)
+            members = yield txn.groupMemberUIDs(group.groupID)
             yield txn.commit()
             self.assertEquals(len(members), 100 if uid == u"testgroup" else 0)
 
@@ -760,27 +745,27 @@
 
             txn = store.newTransaction()
             yield self.groupCacher.refreshGroup(txn, uid)
-            groupID = (yield txn.groupByUID(uid, create=False))[0]
+            group = yield txn.groupByUID(uid, create=False)
             yield txn.commit()
 
-            self.assertNotEqual(groupID, None)
+            self.assertNotEqual(group, None)
 
             txn = store.newTransaction()
             yield self.groupCacher.update(txn)
-            groupID = (yield txn.groupByUID(uid, create=False))[0]
+            group = yield txn.groupByUID(uid, create=False)
             yield txn.commit()
 
-            self.assertEqual(groupID, None)
+            self.assertEqual(group, None)
 
         # delegate groups not deleted
         for uid in (u"testgroup", u"emptygroup",):
 
             txn = store.newTransaction()
-            groupID = (yield txn.groupByUID(uid))[0]
-            yield txn.addDelegateGroup(delegator=u"sagen", delegateGroupID=groupID, readWrite=True)
+            group = yield txn.groupByUID(uid)
+            yield txn.addDelegateGroup(delegator=u"sagen", delegateGroupID=group.groupID, readWrite=True)
             yield txn.commit()
 
-            self.assertNotEqual(groupID, None)
+            self.assertNotEqual(group, None)
 
             txn = store.newTransaction()
             yield self.groupCacher.update(txn)
@@ -788,21 +773,21 @@
             yield JobItem.waitEmpty(store.newTransaction, reactor, 60)
 
             txn = store.newTransaction()
-            groupID = (yield txn.groupByUID(uid, create=False))[0]
+            group = yield txn.groupByUID(uid, create=False)
             yield txn.commit()
 
-            self.assertNotEqual(groupID, None)
+            self.assertNotEqual(group, None)
 
         # delegate group is deleted. unused group is deleted
         txn = store.newTransaction()
-        testGroupID = (yield txn.groupByUID(u"testgroup", create=False))[0]
-        yield txn.removeDelegateGroup(delegator=u"sagen", delegateGroupID=testGroupID, readWrite=True)
-        testGroupID = (yield txn.groupByUID(u"testgroup", create=False))[0]
-        emptyGroupID = (yield txn.groupByUID(u"emptygroup", create=False))[0]
+        testGroup = yield txn.groupByUID(u"testgroup", create=False)
+        yield txn.removeDelegateGroup(delegator=u"sagen", delegateGroupID=testGroup.groupID, readWrite=True)
+        testGroup = yield txn.groupByUID(u"testgroup", create=False)
+        emptyGroup = yield txn.groupByUID(u"emptygroup", create=False)
         yield txn.commit()
 
-        self.assertNotEqual(testGroupID, None)
-        self.assertNotEqual(emptyGroupID, None)
+        self.assertNotEqual(testGroup, None)
+        self.assertNotEqual(emptyGroup, None)
 
         txn = store.newTransaction()
         yield self.groupCacher.update(txn)
@@ -810,12 +795,12 @@
         yield JobItem.waitEmpty(store.newTransaction, reactor, 60)
 
         txn = store.newTransaction()
-        testGroupID = (yield txn.groupByUID(u"testgroup", create=False))[0]
-        emptyGroupID = (yield txn.groupByUID(u"emptygroup", create=False))[0]
+        testGroup = yield txn.groupByUID(u"testgroup", create=False)
+        emptyGroup = yield txn.groupByUID(u"emptygroup", create=False)
         yield txn.commit()
 
-        self.assertEqual(testGroupID, None)
-        self.assertNotEqual(emptyGroupID, None)
+        self.assertEqual(testGroup, None)
+        self.assertNotEqual(emptyGroup, None)
 
 
     @inlineCallbacks
@@ -831,42 +816,33 @@
 
             config.AutomaticPurging.GroupPurgeIntervalSeconds = oldGroupPurgeIntervalSeconds
             txn = store.newTransaction()
-            groupID = (yield txn.groupByUID(uid))[0]
-            yield txn.addDelegateGroup(delegator=u"sagen", delegateGroupID=groupID, readWrite=True)
-            (
-                groupID, _ignore_name, _ignore_membershipHash, _ignore_modified,
-                extant
-            ) = yield txn.groupByUID(uid, create=False)
+            group = yield txn.groupByUID(uid)
+            yield txn.addDelegateGroup(delegator=u"sagen", delegateGroupID=group.groupID, readWrite=True)
+            group = yield txn.groupByUID(uid, create=False)
             yield txn.commit()
 
-            self.assertTrue(extant)
-            self.assertNotEqual(groupID, None)
+            self.assertNotEqual(group, None)
+            self.assertTrue(group.extant)
 
             # Remove the group, still cached
             yield self.directory.removeRecords([uid])
             txn = store.newTransaction()
             yield self.groupCacher.update(txn)
-            (
-                groupID, _ignore_name, _ignore_membershipHash, _ignore_modified,
-                extant
-            ) = yield txn.groupByUID(uid, create=False)
+            group = yield txn.groupByUID(uid, create=False)
             yield txn.commit()
             yield JobItem.waitEmpty(store.newTransaction, reactor, 60)
 
             txn = store.newTransaction()
-            (
-                groupID, _ignore_name, _ignore_membershipHash, _ignore_modified,
-                extant
-            ) = yield txn.groupByUID(uid, create=False)
+            group = yield txn.groupByUID(uid, create=False)
             yield txn.commit()
-            self.assertNotEqual(groupID, None)
-            self.assertFalse(extant)
+            self.assertNotEqual(group, None)
+            self.assertFalse(group.extant)
 
             # delete the group
             config.AutomaticPurging.GroupPurgeIntervalSeconds = "0.0"
 
             txn = store.newTransaction()
             yield self.groupCacher.update(txn)
-            groupID = (yield txn.groupByUID(uid, create=False))[0]
+            group = yield txn.groupByUID(uid, create=False)
             yield txn.commit()
-            self.assertEqual(groupID, None)
+            self.assertEqual(group, None)