[CalendarServer-changes] [11633] CalendarServer/branches/users/gaya/directorybacker

source_changes at macosforge.org
Thu Aug 22 14:45:36 PDT 2013


Revision: 11633
          http://trac.calendarserver.org//changeset/11633
Author:   gaya at apple.com
Date:     2013-08-22 14:45:36 -0700 (Thu, 22 Aug 2013)
Log Message:
-----------
Merge from trunk r11513 through r11631

Revision Links:
--------------
    http://trac.calendarserver.org//changeset/11513
    http://trac.calendarserver.org//changeset/11631

Modified Paths:
--------------
    CalendarServer/branches/users/gaya/directorybacker/calendarserver/controlsocket.py
    CalendarServer/branches/users/gaya/directorybacker/calendarserver/logAnalysis.py
    CalendarServer/branches/users/gaya/directorybacker/calendarserver/platform/darwin/wiki.py
    CalendarServer/branches/users/gaya/directorybacker/calendarserver/provision/root.py
    CalendarServer/branches/users/gaya/directorybacker/calendarserver/push/applepush.py
    CalendarServer/branches/users/gaya/directorybacker/calendarserver/tap/caldav.py
    CalendarServer/branches/users/gaya/directorybacker/calendarserver/tap/test/test_caldav.py
    CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/config.py
    CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_config.py
    CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_gateway.py
    CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_principals.py
    CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_resources.py
    CalendarServer/branches/users/gaya/directorybacker/conf/auth/augments.dtd
    CalendarServer/branches/users/gaya/directorybacker/conf/caldavd-apple.plist
    CalendarServer/branches/users/gaya/directorybacker/conf/caldavd-test.plist
    CalendarServer/branches/users/gaya/directorybacker/contrib/launchd/calendarserver.plist
    CalendarServer/branches/users/gaya/directorybacker/contrib/performance/loadtest/thresholds.json
    CalendarServer/branches/users/gaya/directorybacker/contrib/performance/sqlusage/sqlusage.py
    CalendarServer/branches/users/gaya/directorybacker/contrib/tools/readStats.py
    CalendarServer/branches/users/gaya/directorybacker/doc/Admin/ExtendedLogItems.rst
    CalendarServer/branches/users/gaya/directorybacker/support/build.sh
    CalendarServer/branches/users/gaya/directorybacker/test
    CalendarServer/branches/users/gaya/directorybacker/testserver
    CalendarServer/branches/users/gaya/directorybacker/twext/enterprise/dal/model.py
    CalendarServer/branches/users/gaya/directorybacker/twext/python/log.py
    CalendarServer/branches/users/gaya/directorybacker/twext/web2/http.py
    CalendarServer/branches/users/gaya/directorybacker/twext/web2/test/test_http.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/caldavxml.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/datafilters/peruserdata.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/datafilters/test/test_peruserdata.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/appleopendirectory.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/directory.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/ldapdirectory.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/principal.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_buildquery.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_directory.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_ldapdirectory.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_livedirectory.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/ical.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/resource.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/sharing.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/stdconfig.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/storebridge.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/test/test_sharing.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/test/test_timezones.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/test/test_wrapping.py
    CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/upgrade.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/base/datastore/subpostgres.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/base/datastore/test/test_subpostgres.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/base/propertystore/sql.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/base/propertystore/test/test_sql.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/sql.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/common.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_attachments.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_file.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_sql.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_util.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/util.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/icalendarstore.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/carddav/datastore/sql.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/carddav/datastore/test/test_file.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/carddav/iaddressbookstore.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/file.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/sql.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/test/test_sql_schema_files.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/test/util.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/migrate.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/sql/others/attachment_migration.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/sql/upgrade.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/test/test_migrate.py
    CalendarServer/branches/users/gaya/directorybacker/txdav/xml/base.py

Added Paths:
-----------
    CalendarServer/branches/users/gaya/directorybacker/contrib/performance/sqlusage/requests/propfind_invite.py
    CalendarServer/branches/users/gaya/directorybacker/twext/internet/fswatch.py
    CalendarServer/branches/users/gaya/directorybacker/twext/internet/test/test_fswatch.py

Property Changed:
----------------
    CalendarServer/branches/users/gaya/directorybacker/


Property changes on: CalendarServer/branches/users/gaya/directorybacker
___________________________________________________________________
Modified: svn:mergeinfo
   - /CalDAVTester/trunk:11193-11198
/CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/release/CalendarServer-4.3-dev:10180-10190,10192
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/managed-attachments:9985-10145
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/store-scheduling:10876-11129
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/gaya/sharedgroups-3:11088-11204
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/digest-auth-redux:10624-10635
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/hang-fix:11465-11491
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/launchd-wrapper-bis:11413-11436
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api:10048-10073
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/queue-locking-and-timing:10204-10289
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/start-service-start-loop:11060-11065
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/unshare-when-access-revoked:10562-10595
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/warning-cleanups:11347-11357
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/sagen/testing:10827-10851,10853-10855
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:9759-9832,11085-11111,11120-11510
   + /CalDAVTester/trunk:11193-11198
/CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/release/CalendarServer-4.3-dev:10180-10190,10192
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/fix-no-ischedule:11612
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/managed-attachments:9985-10145
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/store-scheduling:10876-11129
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/gaya/sharedgroups-3:11088-11204
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/digest-auth-redux:10624-10635
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/hang-fix:11465-11491
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/launchd-wrapper-bis:11413-11436
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api:10048-10073
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/queue-locking-and-timing:10204-10289
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/start-service-start-loop:11060-11065
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/unshare-when-access-revoked:10562-10595
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/warning-cleanups:11347-11357
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/sagen/testing:10827-10851,10853-10855
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:9759-9832,11085-11111,11120-11510

Modified: CalendarServer/branches/users/gaya/directorybacker/calendarserver/controlsocket.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/calendarserver/controlsocket.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/calendarserver/controlsocket.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -125,4 +125,3 @@
         from twisted.internet import reactor
         endpoint = self.endpointFactory(reactor)
         endpoint.connect(self.controlSocket)
-

Modified: CalendarServer/branches/users/gaya/directorybacker/calendarserver/logAnalysis.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/calendarserver/logAnalysis.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/calendarserver/logAnalysis.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -334,6 +334,7 @@
 
 osClients = (
     "Mac OS X/",
+    "Mac_OS_X/",
     "iOS/",
 )
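
The logAnalysis.py hunk above adds a second spelling of the Mac OS X user-agent prefix ("Mac_OS_X/" with underscores, alongside "Mac OS X/"). A minimal sketch of how such a prefix tuple might be used to bucket clients by User-Agent string; the classifyClient helper is hypothetical and not part of logAnalysis.py:

```python
# Hypothetical sketch (not the actual logAnalysis.py code): bucket a
# User-Agent string using a prefix tuple like the osClients one above.
OS_CLIENTS = (
    "Mac OS X/",
    "Mac_OS_X/",   # newer agents spell the OS with underscores
    "iOS/",
)

def classifyClient(userAgent):
    """Return the matching OS prefix, or None if the agent is unknown."""
    for prefix in OS_CLIENTS:
        if userAgent.startswith(prefix):
            return prefix
    return None
```

Without the added entry, agents reporting "Mac_OS_X/..." would fall through to None and be counted as unknown clients.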
 

Modified: CalendarServer/branches/users/gaya/directorybacker/calendarserver/platform/darwin/wiki.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/calendarserver/platform/darwin/wiki.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/calendarserver/platform/darwin/wiki.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -28,15 +28,15 @@
 log = Logger()
 
 @inlineCallbacks
-def usernameForAuthToken(token, host="localhost", port=80):
+def guidForAuthToken(token, host="localhost", port=80):
     """
     Send a GET request to the web auth service to retrieve the user record
-    name associated with the provided auth token.
+    guid associated with the provided auth token.
 
     @param token: An auth token, usually passed in via cookie when webcal
         makes a request.
     @type token: C{str}
-    @return: deferred returning a record name (C{str}) if successful, or
+    @return: deferred returning a guid (C{str}) if successful, or
         will raise WebAuthError otherwise.
     """
     url = "http://%s:%d/auth/verify?auth_token=%s" % (host, port, token,)
@@ -48,7 +48,7 @@
             (jsonResponse, str(e)))
         raise WebAuthError("Could not look up token: %s" % (token,))
     if response["succeeded"]:
-        returnValue(response["shortName"])
+        returnValue(response["generated_uid"])
     else:
         raise WebAuthError("Could not look up token: %s" % (token,))
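
The wiki.py change above swaps the record short name for the "generated_uid" field of the verify response. A rough sketch of that response handling, assuming the JSON shape implied by the diff; guidFromVerifyResponse is an illustrative helper, and this sketch omits the HTTP fetch and Deferred machinery of the real inlineCallbacks function:

```python
import json

class WebAuthError(RuntimeError):
    """Raised when the web auth service cannot resolve a token."""

def guidFromVerifyResponse(body, token):
    """
    Illustrative sketch of guidForAuthToken's response handling: parse
    the JSON body returned by /auth/verify and extract generated_uid
    (rather than the shortName the old usernameForAuthToken returned).
    """
    try:
        response = json.loads(body)
    except ValueError:
        raise WebAuthError("Could not look up token: %s" % (token,))
    if response.get("succeeded"):
        return response["generated_uid"]
    raise WebAuthError("Could not look up token: %s" % (token,))
```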
 

Modified: CalendarServer/branches/users/gaya/directorybacker/calendarserver/provision/root.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/calendarserver/provision/root.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/calendarserver/provision/root.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -41,7 +41,7 @@
 from twistedcaldav.extensions import ReadOnlyResourceMixIn
 from twistedcaldav.resource import CalDAVComplianceMixIn
 from twistedcaldav.directory.principal import DirectoryPrincipalResource
-from calendarserver.platform.darwin.wiki import usernameForAuthToken
+from calendarserver.platform.darwin.wiki import guidForAuthToken
 
 log = Logger()
 
@@ -239,15 +239,23 @@
                 if token is not None and token != "unauthenticated":
                     log.debug("Wiki sessionID cookie value: %s" % (token,))
 
+                    record = None
                     try:
                         if wikiConfig.LionCompatibility:
+                            guid = None
                             proxy = Proxy(wikiConfig["URL"])
                             username = (yield proxy.callRemote(wikiConfig["UserMethod"], token))
+                            directory = request.site.resource.getDirectory()
+                            record = directory.recordWithShortName("users", username)
+                            if record is not None:
+                                guid = record.guid
                         else:
-                            username = (yield usernameForAuthToken(token))
+                            guid = (yield guidForAuthToken(token))
+                            if guid == "unauthenticated":
+                                guid = None
 
                     except WebError, w:
-                        username = None
+                        guid = None
                         # FORBIDDEN status means it's an unknown token
                         if int(w.status) == responsecode.NOT_FOUND:
                             log.debug("Unknown wiki token: %s" % (token,))
@@ -257,16 +265,16 @@
 
                     except Exception, e:
                         log.error("Failed to look up wiki token (%s)" % (e,))
-                        username = None
+                        guid = None
 
-                    if username is not None:
-                        log.debug("Wiki lookup returned user: %s" % (username,))
+                    if guid is not None:
+                        log.debug("Wiki lookup returned guid: %s" % (guid,))
                         principal = None
                         directory = request.site.resource.getDirectory()
-                        record = directory.recordWithShortName("users", username)
-                        log.debug("Wiki user record for user %s : %s" % (username, record))
-                        if record:
-                            # Note: record will be None if it's a /Local/Default user
+                        record = directory.recordWithGUID(guid)
+                        if record is not None:
+                            username = record.shortNames[0]
+                            log.debug("Wiki user record for user %s : %s" % (username, record))
                             for collection in self.principalCollections():
                                 principal = collection.principalForRecord(record)
                                 if principal is not None:
@@ -302,7 +310,8 @@
                                 host,
                                 request.path
                             )
-                        )
+                        ),
+                        temporary=True
                     )
                 raise HTTPError(response)
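
With root.py now receiving a GUID from the wiki service, the directory lookup above inverts: the record is fetched with recordWithGUID() and the short name is derived from the record, instead of looking the record up by short name. A standalone sketch of that order, with stand-in record and directory classes; all names here are illustrative, not the real directory API:

```python
# Illustrative sketch of the new lookup order in root.py: GUID in,
# record out, short name derived from the record.
class FakeRecord(object):
    def __init__(self, shortNames):
        self.shortNames = shortNames

class FakeDirectory(object):
    """Minimal stand-in for a directory service, for illustration only."""
    def __init__(self, recordsByGUID):
        self._records = recordsByGUID

    def recordWithGUID(self, guid):
        return self._records.get(guid)

def usernameForWikiGUID(guid, directory):
    """Return the record's first short name, or None if unresolvable."""
    if guid is None or guid == "unauthenticated":
        return None
    record = directory.recordWithGUID(guid)
    if record is None:
        return None
    return record.shortNames[0]
```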
 

Modified: CalendarServer/branches/users/gaya/directorybacker/calendarserver/push/applepush.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/calendarserver/push/applepush.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/calendarserver/push/applepush.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -365,8 +365,8 @@
             }
         )
         payloadLength = len(payload)
-        self.log.debug("Sending APNS notification to %s: id=%d payload=%s" %
-            (token, identifier, payload))
+        self.log.debug("Sending APNS notification to {token}: id={id} payload={payload}",
+            token=token, id=identifier, payload=payload)
 
         self.transport.write(
             struct.pack("!BIIH32sH%ds" % (payloadLength,),
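
The applepush.py hunk above moves the debug line from %-interpolation to the named-field style of the newer twext logger, which keeps the fields as structured data on the log event rather than baking them into a string up front. A minimal sketch of the formatting semantics, assuming PEP 3101 style placeholders; formatLogEvent is illustrative, since the real Logger defers formatting until an observer actually needs the text:

```python
def formatLogEvent(template, **fields):
    """
    Illustrative stand-in for new-style log formatting: the template
    carries named {field} placeholders and the values travel alongside
    it as keyword arguments, so observers can filter on the fields.
    """
    return template.format(**fields)

line = formatLogEvent(
    "Sending APNS notification to {token}: id={id} payload={payload}",
    token="abc123", id=42, payload='{"key": "..."}')
```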

Modified: CalendarServer/branches/users/gaya/directorybacker/calendarserver/tap/caldav.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/calendarserver/tap/caldav.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/calendarserver/tap/caldav.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -57,6 +57,7 @@
 from twext.python.filepath import CachingFilePath
 from twext.internet.ssl import ChainingOpenSSLContextFactory
 from twext.internet.tcp import MaxAcceptTCPServer, MaxAcceptSSLServer
+from twext.internet.fswatch import DirectoryChangeListener, IDirectoryChangeListenee
 from twext.web2.channel.http import LimitingHTTPFactory, SSLRedirectRequest
 from twext.web2.metafd import ConnectionLimiter, ReportingHTTPService
 from twext.enterprise.ienterprise import POSTGRES_DIALECT
@@ -106,7 +107,6 @@
 from calendarserver.push.notifier import PushDistributor
 from calendarserver.push.amppush import AMPPushMaster, AMPPushForwarder
 from calendarserver.push.applepush import ApplePushNotifierService
-from calendarserver.tools.agent import makeAgentService
 
 try:
     from calendarserver.version import version
@@ -225,14 +225,31 @@
     """ Registers a rotating file logger for error logging, if
         config.ErrorLogEnabled is True. """
 
+    def __init__(self, logEnabled, logPath, logRotateLength, logMaxFiles):
+        """
+        @param logEnabled: Whether to write to a log file
+        @type logEnabled: C{boolean}
+        @param logPath: the full path to the log file
+        @type logPath: C{str}
+        @param logRotateLength: rotate when files exceed this many bytes
+        @type logRotateLength: C{int}
+        @param logMaxFiles: keep at most this many files
+        @type logMaxFiles: C{int}
+        """
+        MultiService.__init__(self)
+        self.logEnabled = logEnabled
+        self.logPath = logPath
+        self.logRotateLength = logRotateLength
+        self.logMaxFiles = logMaxFiles
+
     def setServiceParent(self, app):
         MultiService.setServiceParent(self, app)
 
-        if config.ErrorLogEnabled:
+        if self.logEnabled:
             errorLogFile = LogFile.fromFullPath(
-                config.ErrorLogFile,
-                rotateLength=config.ErrorLogRotateMB * 1024 * 1024,
-                maxRotatedFiles=config.ErrorLogMaxRotatedFiles
+                self.logPath,
+                rotateLength = self.logRotateLength,
+                maxRotatedFiles = self.logMaxFiles
             )
             errorLogObserver = FileLogObserver(errorLogFile).emit
 
@@ -251,7 +268,9 @@
 
     def __init__(self, logObserver):
         self.logObserver = logObserver # accesslog observer
-        MultiService.__init__(self)
+        ErrorLoggingMultiService.__init__(self, config.ErrorLogEnabled,
+            config.ErrorLogFile, config.ErrorLogRotateMB * 1024 * 1024,
+            config.ErrorLogMaxRotatedFiles)
 
 
     def privilegedStartService(self):
@@ -588,18 +607,21 @@
     """
 
     def __init__(self, serviceCreator, connectionPool, store, logObserver,
-        reactor=None):
+        storageService, reactor=None):
         """
         @param serviceCreator: callable which will be passed the connection
             pool, store, and log observer, and should return a Service
         @param connectionPool: connection pool to pass to serviceCreator
         @param store: the store object being processed
         @param logObserver: log observer to pass to serviceCreator
+        @param storageService: the service responsible for starting/stopping
+            the data store
         """
         self.serviceCreator = serviceCreator
         self.connectionPool = connectionPool
         self.store = store
         self.logObserver = logObserver
+        self.storageService = storageService
         self.stepper = Stepper()
 
         if reactor is None:
@@ -613,7 +635,7 @@
         we create the main service and pass in the store.
         """
         service = self.serviceCreator(self.connectionPool, self.store,
-            self.logObserver)
+            self.logObserver, self.storageService)
         if self.parent is not None:
             self.reactor.callLater(0, service.setServiceParent, self.parent)
         return succeed(None)
@@ -626,7 +648,7 @@
         """
         try:
             service = self.serviceCreator(self.connectionPool, None,
-                self.logObserver)
+                self.logObserver, self.storageService)
             if self.parent is not None:
                 self.reactor.callLater(0, service.setServiceParent, self.parent)
         except StoreNotAvailable:
@@ -870,6 +892,7 @@
                 directory,
                 config.GroupCaching.UpdateSeconds,
                 config.GroupCaching.ExpireSeconds,
+                config.GroupCaching.LockSeconds,
                 namespace=config.GroupCaching.MemcachedPool,
                 useExternalProxies=config.GroupCaching.UseExternalProxies
                 )
@@ -1128,7 +1151,7 @@
         Create a service to be used in a single-process, stand-alone
         configuration.  Memcached will be spawned automatically.
         """
-        def slaveSvcCreator(pool, store, logObserver):
+        def slaveSvcCreator(pool, store, logObserver, storageService):
 
             if store is None:
                 raise StoreNotAvailable()
@@ -1171,6 +1194,7 @@
                     directory,
                     config.GroupCaching.UpdateSeconds,
                     config.GroupCaching.ExpireSeconds,
+                    config.GroupCaching.LockSeconds,
                     namespace=config.GroupCaching.MemcachedPool,
                     useExternalProxies=config.GroupCaching.UseExternalProxies
                     )
@@ -1230,7 +1254,7 @@
         When created, that service will have access to the storage facilities.
         """
 
-        def toolServiceCreator(pool, store, ignored):
+        def toolServiceCreator(pool, store, ignored, storageService):
             return config.UtilityServiceClass(store)
 
         uid, gid = getSystemIDs(config.UserName, config.GroupName)
@@ -1253,11 +1277,26 @@
         # These we need to set in order to open the store
         config.EnableCalDAV = config.EnableCardDAV = True
 
-        def agentServiceCreator(pool, store, ignored):
+        def agentServiceCreator(pool, store, ignored, storageService):
+            from calendarserver.tools.agent import makeAgentService
+            if storageService is not None:
+                # Shut down if DataRoot becomes unavailable
+                from twisted.internet import reactor
+                dataStoreWatcher = DirectoryChangeListener(reactor,
+                    config.DataRoot, DataStoreMonitor(reactor, storageService))
+                dataStoreWatcher.startListening()
             return makeAgentService(store)
 
         uid, gid = getSystemIDs(config.UserName, config.GroupName)
-        return self.storageService(agentServiceCreator, None, uid=uid, gid=gid)
+        svc = self.storageService(agentServiceCreator, None, uid=uid, gid=gid)
+        agentLoggingService = ErrorLoggingMultiService(
+            config.ErrorLogEnabled,
+            config.AgentLogFile,
+            config.ErrorLogRotateMB * 1024 * 1024,
+            config.ErrorLogMaxRotatedFiles
+            )
+        svc.setServiceParent(agentLoggingService)
+        return agentLoggingService
 
 
     def storageService(self, createMainService, logObserver, uid=None, gid=None):
@@ -1292,7 +1331,7 @@
         """
         def createSubServiceFactory(dialect=POSTGRES_DIALECT,
                                     paramstyle='pyformat'):
-            def subServiceFactory(connectionFactory):
+            def subServiceFactory(connectionFactory, storageService):
                 ms = MultiService()
                 cp = ConnectionPool(connectionFactory, dialect=dialect,
                                     paramstyle=paramstyle,
@@ -1301,7 +1340,7 @@
                 store = storeFromConfig(config, cp.connection)
 
                 pps = PreProcessingService(createMainService, cp, store,
-                    logObserver)
+                    logObserver, storageService)
 
                 # The following "steps" will run sequentially when the service
                 # hierarchy is started.  If any of the steps raise an exception
@@ -1397,18 +1436,18 @@
                 return pgserv
             elif config.DBType == 'postgres':
                 # Connect to a postgres database that is already running.
-                return createSubServiceFactory()(pgConnectorFromConfig(config))
+                return createSubServiceFactory()(pgConnectorFromConfig(config), None)
             elif config.DBType == 'oracle':
                 # Connect to an Oracle database that is already running.
                 return createSubServiceFactory(dialect=ORACLE_DIALECT,
                                                paramstyle='numeric')(
-                    oracleConnectorFromConfig(config)
+                    oracleConnectorFromConfig(config), None
                 )
             else:
                 raise UsageError("Unknown database type %r" % (config.DBType,))
         else:
             store = storeFromConfig(config, None)
-            return createMainService(None, store, logObserver)
+            return createMainService(None, store, logObserver, None)
 
 
     def makeService_Combined(self, options):
@@ -1416,7 +1455,12 @@
         Create a master service to coordinate a multi-process configuration,
         spawning subprocesses that use L{makeService_Slave} to perform work.
         """
-        s = ErrorLoggingMultiService()
+        s = ErrorLoggingMultiService(
+            config.ErrorLogEnabled,
+            config.ErrorLogFile,
+            config.ErrorLogRotateMB * 1024 * 1024,
+            config.ErrorLogMaxRotatedFiles
+        )
 
         # Add a service to re-exec the master when it receives SIGHUP
         ReExecService(config.PIDFile).setServiceParent(s)
@@ -1591,7 +1635,7 @@
         # to), and second, the service which does an upgrade from the
         # filesystem to the database (if that's necessary, and there is
         # filesystem data in need of upgrading).
-        def spawnerSvcCreator(pool, store, ignored):
+        def spawnerSvcCreator(pool, store, ignored, storageService):
             if store is None:
                 raise StoreNotAvailable()
 
@@ -1638,6 +1682,7 @@
                     directory,
                     config.GroupCaching.UpdateSeconds,
                     config.GroupCaching.ExpireSeconds,
+                    config.GroupCaching.LockSeconds,
                     namespace=config.GroupCaching.MemcachedPool,
                     useExternalProxies=config.GroupCaching.UseExternalProxies
                     )
@@ -2372,3 +2417,31 @@
         gid = getgid()
 
     return uid, gid
+
+
+class DataStoreMonitor(object):
+    implements(IDirectoryChangeListenee)
+
+    def __init__(self, reactor, storageService):
+        """
+        @param storageService: the service making use of the DataStore
+            directory; we send it a hardStop() to shut it down
+        """
+        self._reactor = reactor
+        self._storageService = storageService
+
+    def disconnected(self):
+        self._storageService.hardStop()
+        self._reactor.stop()
+
+    def deleted(self):
+        self._storageService.hardStop()
+        self._reactor.stop()
+
+    def renamed(self):
+        self._storageService.hardStop()
+        self._reactor.stop()
+
+    def connectionLost(self, reason):
+        pass
+
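The `DataStoreMonitor` added above (and exercised by `DataStoreMonitorTestCase` later in this changeset) ties any filesystem event on the watched DataRoot to a hard shutdown. A standalone sketch of that behavior, using stub stand-ins for the reactor and storage service (the stub classes are illustrations, not CalendarServer API):

```python
class StubStorageService(object):
    """Stand-in for the service that owns the DataRoot directory."""
    def __init__(self):
        self.hardStopCalled = False

    def hardStop(self):
        self.hardStopCalled = True


class StubReactor(object):
    """Stand-in for the Twisted reactor; only stop() is exercised."""
    def __init__(self):
        self.stopCalled = False

    def stop(self):
        self.stopCalled = True


class DataStoreMonitor(object):
    """Mirror of the class added above: disconnect, delete, or rename of
    the watched directory hard-stops the storage service and the reactor."""
    def __init__(self, reactor, storageService):
        self._reactor = reactor
        self._storageService = storageService

    def _shutdown(self):
        self._storageService.hardStop()
        self._reactor.stop()

    def disconnected(self):
        self._shutdown()

    def deleted(self):
        self._shutdown()

    def renamed(self):
        self._shutdown()

    def connectionLost(self, reason):
        pass
```

Only `connectionLost` is a no-op: losing the kqueue/event connection itself is not treated as loss of the data store.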

Modified: CalendarServer/branches/users/gaya/directorybacker/calendarserver/tap/test/test_caldav.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/calendarserver/tap/test/test_caldav.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/calendarserver/tap/test/test_caldav.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -61,7 +61,7 @@
     CalDAVOptions, CalDAVServiceMaker, CalDAVService, GroupOwnedUNIXServer,
     DelayedStartupProcessMonitor, DelayedStartupLineLogger, TwistdSlaveProcess,
     _CONTROL_SERVICE_NAME, getSystemIDs, PreProcessingService,
-    QuitAfterUpgradeStep
+    QuitAfterUpgradeStep, DataStoreMonitor
 )
 from calendarserver.provision.root import RootResource
 from twext.enterprise.queue import PeerConnectionPool, LocalQueuer
@@ -555,7 +555,9 @@
                                       uid=None, gid=None):
                 pool = None
                 logObserver = None
-                svc = createMainService(pool, store, logObserver)
+                storageService = None
+                svc = createMainService(pool, store, logObserver,
+                    storageService)
                 multi = MultiService()
                 svc.setServiceParent(multi)
                 return multi
@@ -1238,12 +1240,19 @@
         and stopService to be called again by counting the number of times
         START and STOP appear in the process output.
         """
+        # Inherit the reactor used to run trial
+        reactorArg = ""
+        for arg in sys.argv:
+            if arg.startswith("--reactor"):
+                reactorArg = arg
+                break
+
         tacFilePath = os.path.join(os.path.dirname(__file__), "reexec.tac")
         twistd = which("twistd")[0]
         deferred = Deferred()
         proc = reactor.spawnProcess(
             CapturingProcessProtocol(deferred, None), twistd,
-                [twistd, '-n', '-y', tacFilePath],
+                [twistd, reactorArg, '-n', '-y', tacFilePath],
                 env=os.environ
         )
         reactor.callLater(3, proc.signalProcess, "HUP")
@@ -1381,15 +1390,15 @@
 
 class PreProcessingServiceTestCase(TestCase):
 
-    def fakeServiceCreator(self, cp, store, lo):
-        self.history.append(("serviceCreator", store))
+    def fakeServiceCreator(self, cp, store, lo, storageService):
+        self.history.append(("serviceCreator", store, storageService))
 
 
     def setUp(self):
         self.history = []
         self.clock = Clock()
         self.pps = PreProcessingService(self.fakeServiceCreator, None, "store",
-            None, reactor=self.clock)
+            None, "storageService", reactor=self.clock)
 
 
     def _record(self, value, failure):
@@ -1409,7 +1418,7 @@
         self.pps.startService()
         self.assertEquals(self.history,
             ['one success', 'two success', 'three success', 'four success',
-            ('serviceCreator', 'store')])
+            ('serviceCreator', 'store', 'storageService')])
 
 
     def test_allFailure(self):
@@ -1425,7 +1434,7 @@
         self.pps.startService()
         self.assertEquals(self.history,
             ['one success', 'two failure', 'three failure', 'four failure',
-            ('serviceCreator', None)])
+            ('serviceCreator', None, 'storageService')])
 
 
     def test_partialFailure(self):
@@ -1441,7 +1450,7 @@
         self.pps.startService()
         self.assertEquals(self.history,
             ['one success', 'two failure', 'three success', 'four failure',
-            ('serviceCreator', 'store')])
+            ('serviceCreator', 'store', 'storageService')])
 
 
     def test_quitAfterUpgradeStep(self):
@@ -1460,5 +1469,47 @@
         self.pps.startService()
         self.assertEquals(self.history,
             ['one success', 'two success', 'four failure',
-            ('serviceCreator', None)])
+            ('serviceCreator', None, 'storageService')])
         self.assertFalse(triggerFile.exists())
+
+
+class StubStorageService(object):
+
+    def __init__(self):
+        self.hardStopCalled = False
+
+    def hardStop(self):
+        self.hardStopCalled = True
+
+
+class StubReactor(object):
+
+    def __init__(self):
+        self.stopCalled = False
+
+    def stop(self):
+        self.stopCalled = True
+
+
+class DataStoreMonitorTestCase(TestCase):
+
+    def test_monitor(self):
+        storageService = StubStorageService()
+        stubReactor = StubReactor()
+        monitor = DataStoreMonitor(stubReactor, storageService)
+
+        monitor.disconnected()
+        self.assertTrue(storageService.hardStopCalled)
+        self.assertTrue(stubReactor.stopCalled)
+
+        storageService.hardStopCalled = False
+        stubReactor.stopCalled = False
+        monitor.deleted()
+        self.assertTrue(storageService.hardStopCalled)
+        self.assertTrue(stubReactor.stopCalled)
+
+        storageService.hardStopCalled = False
+        stubReactor.stopCalled = False
+        monitor.renamed()
+        self.assertTrue(storageService.hardStopCalled)
+        self.assertTrue(stubReactor.stopCalled)
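The re-exec test fix above forwards trial's `--reactor` flag to the spawned `twistd` so both processes use the same reactor. A minimal sketch of that argv scan (the helper name is mine; the diff inlines the loop in the test method):

```python
def inherited_reactor_arg(argv):
    """Return the first --reactor flag found in argv, or "" if absent.

    startswith("--reactor") matches the --reactor=<name> spelling used
    on the trial command line; with no match, the caller gets "".
    """
    for arg in argv:
        if arg.startswith("--reactor"):
            return arg
    return ""
```

For example, `inherited_reactor_arg(["trial", "--reactor=kqueue", "-j4"])` yields `"--reactor=kqueue"`, which the test then splices into the `twistd` argument list.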

Modified: CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/config.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/config.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/config.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -137,8 +137,23 @@
     writable = WritableConfig(config, writeConfigFileName)
     writable.read()
 
+    processArgs(writable, args)
+
+
+def processArgs(writable, args, restart=True):
+    """
+    Perform the read/write operations requested in the command line args.
+    If there are no args, stdin is read, and plist-formatted commands are
+    processed from there.
+    @param writable: the WritableConfig
+    @param args: a list of utf-8 encoded strings
+    @param restart: whether to restart the calendar server after making a
+        config change.
+    """
     if args:
         for configKey in args:
+            # args come in as utf-8 encoded strings
+            configKey = configKey.decode("utf-8")
 
             if "=" in configKey:
                 # This is an assignment
@@ -153,9 +168,9 @@
                     if c is None:
                         sys.stderr.write("No such config key: %s\n" % configKey)
                         break
-                sys.stdout.write("%s=%s\n" % (configKey, c))
+                sys.stdout.write("%s=%s\n" % (configKey.encode("utf-8"), c))
 
-        writable.save(restart=True)
+        writable.save(restart=restart)
 
     else:
         # Read plist commands from stdin
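The `processArgs` change above decodes each utf-8 encoded argument before splitting assignments. A small sketch isolating that step (helper name is mine, not part of the CalendarServer API; the `isinstance` check just lets the sketch run under Python 3, where argv entries are already text):

```python
def decode_assignment(raw):
    """Split one utf-8 encoded "key=value" command-line argument.

    processArgs receives args as utf-8 byte strings (Python 2 sys.argv
    semantics) and decodes them before use, so multi-byte characters in
    values survive the round trip into the plist.
    """
    text = raw.decode("utf-8") if isinstance(raw, bytes) else raw
    key, _, value = text.partition("=")
    return key, value
```

This is exactly what the new `test_processArgs` exercises: the bytes `\xf0\x9f\x92\xa3` decode to the single code point U+1F4A3.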

Modified: CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_config.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_config.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_config.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -16,7 +16,8 @@
 
 from twistedcaldav.test.util import TestCase
 from twistedcaldav.config import ConfigDict
-from calendarserver.tools.config import WritableConfig, setKeyPath, getKeyPath, flattenDictionary
+from calendarserver.tools.config import (WritableConfig, setKeyPath, getKeyPath,
+    flattenDictionary, processArgs)
 from calendarserver.tools.test.test_gateway import RunCommandTestCase
 from twisted.internet.defer import inlineCallbacks
 from twisted.python.filepath import FilePath
@@ -89,7 +90,26 @@
         self.assertEquals("xy.zzy", WritableConfig.convertToValue("xy.zzy"))
 
 
+    def test_processArgs(self):
+        """
+        Ensure utf-8 encoded command line args are handled properly
+        """
+        content = """<plist version="1.0">
+    <dict>
+        <key>key1</key>
+        <string>before</string>
+    </dict>
+</plist>"""
+        self.fp.setContent(PREAMBLE + content)
+        config = ConfigDict()
+        writable = WritableConfig(config, self.configFile)
+        writable.read()
+        processArgs(writable, ["key1=\xf0\x9f\x92\xa3"], restart=False)
+        writable2 = WritableConfig(config, self.configFile)
+        writable2.read()
+        self.assertEquals(writable2.currentConfigSubset, {'key1': u'\U0001f4a3'})
 
+
 class ConfigTestCase(RunCommandTestCase):
 
     @inlineCallbacks

Modified: CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_gateway.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_gateway.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_gateway.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -27,7 +27,6 @@
 from twistedcaldav.config import config
 from twistedcaldav.test.util import TestCase, CapturingProcessProtocol
 from calendarserver.tools.util import getDirectory
-from txdav.common.datastore.test.util import SQLStoreBuilder
 import plistlib
 
 
@@ -42,8 +41,7 @@
         template = templateFile.read()
         templateFile.close()
 
-        # Use the same DatabaseRoot as the SQLStoreBuilder
-        databaseRoot = os.path.abspath(SQLStoreBuilder.SHARED_DB_PATH)
+        databaseRoot = os.path.abspath("_spawned_scripts_db" + str(os.getpid()))
         newConfig = template % {
             "ServerRoot" : os.path.abspath(config.ServerRoot),
             "DatabaseRoot" : databaseRoot,

Modified: CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_principals.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_principals.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_principals.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -31,7 +31,6 @@
 from calendarserver.tap.util import directoryFromConfig
 from calendarserver.tools.principals import (parseCreationArgs, matchStrings,
     updateRecord, principalForPrincipalID, getProxies, setProxies)
-from txdav.common.datastore.test.util import SQLStoreBuilder
 
 
 class ManagePrincipalsTestCase(TestCase):
@@ -50,9 +49,7 @@
         template = templateFile.read()
         templateFile.close()
 
-        # Use the same DatabaseRoot as the SQLStoreBuilder
-        databaseRoot = os.path.abspath(SQLStoreBuilder.SHARED_DB_PATH)
-
+        databaseRoot = os.path.abspath("_spawned_scripts_db" + str(os.getpid()))
         newConfig = template % {
             "ServerRoot" : os.path.abspath(config.ServerRoot),
             "DataRoot" : os.path.abspath(config.DataRoot),

Modified: CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_resources.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_resources.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/calendarserver/tools/test/test_resources.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -14,151 +14,149 @@
 # limitations under the License.
 ##
 
-from calendarserver.tools.resources import migrateResources
-from twisted.internet.defer import inlineCallbacks, succeed
-from twistedcaldav.directory.directory import DirectoryService
-from twistedcaldav.test.util import TestCase
 
-
 try:
+    from calendarserver.tools.resources import migrateResources
+    from twisted.internet.defer import inlineCallbacks, succeed
+    from twistedcaldav.directory.directory import DirectoryService
+    from twistedcaldav.test.util import TestCase
     import dsattributes
     strGUID = dsattributes.kDS1AttrGeneratedUID
     strName = dsattributes.kDS1AttrDistinguishedName
+    RUN_TESTS = True
 except ImportError:
-    dsattributes = None
+    RUN_TESTS = False
 
 
 
-class StubDirectoryRecord(object):
+if RUN_TESTS:
+    class StubDirectoryRecord(object):
 
-    def __init__(self, recordType, guid=None, shortNames=None, fullName=None):
-        self.recordType = recordType
-        self.guid = guid
-        self.shortNames = shortNames
-        self.fullName = fullName
+        def __init__(self, recordType, guid=None, shortNames=None, fullName=None):
+            self.recordType = recordType
+            self.guid = guid
+            self.shortNames = shortNames
+            self.fullName = fullName
 
 
-class StubDirectoryService(object):
+    class StubDirectoryService(object):
 
-    def __init__(self, augmentService):
-        self.records = {}
-        self.augmentService = augmentService
+        def __init__(self, augmentService):
+            self.records = {}
+            self.augmentService = augmentService
 
-    def recordWithGUID(self, guid):
-        return None
+        def recordWithGUID(self, guid):
+            return None
 
-    def createRecords(self, data):
-        for recordType, recordData in data:
-            guid = recordData["guid"]
-            record = StubDirectoryRecord(recordType, guid=guid,
-                shortNames=recordData['shortNames'],
-                fullName=recordData['fullName'])
-            self.records[guid] = record
+        def createRecords(self, data):
+            for recordType, recordData in data:
+                guid = recordData["guid"]
+                record = StubDirectoryRecord(recordType, guid=guid,
+                    shortNames=recordData['shortNames'],
+                    fullName=recordData['fullName'])
+                self.records[guid] = record
 
-    def updateRecord(self, recordType, guid=None, shortNames=None,
-        fullName=None):
-        pass
+        def updateRecord(self, recordType, guid=None, shortNames=None,
+            fullName=None):
+            pass
 
 
-class StubAugmentRecord(object):
+    class StubAugmentRecord(object):
 
-    def __init__(self, guid=None):
-        self.guid = guid
-        self.autoSchedule = True
+        def __init__(self, guid=None):
+            self.guid = guid
+            self.autoSchedule = True
 
 
-class StubAugmentService(object):
+    class StubAugmentService(object):
 
-    records = {}
+        records = {}
 
-    @classmethod
-    def getAugmentRecord(cls, guid, recordType):
-        if not cls.records.has_key(guid):
-            record = StubAugmentRecord(guid=guid)
-            cls.records[guid] = record
-        return succeed(cls.records[guid])
+        @classmethod
+        def getAugmentRecord(cls, guid, recordType):
+            if not cls.records.has_key(guid):
+                record = StubAugmentRecord(guid=guid)
+                cls.records[guid] = record
+            return succeed(cls.records[guid])
 
-    @classmethod
-    def addAugmentRecords(cls, records):
-        for record in records:
-            cls.records[record.guid] = record
-        return succeed(True)
+        @classmethod
+        def addAugmentRecords(cls, records):
+            for record in records:
+                cls.records[record.guid] = record
+            return succeed(True)
 
 
-class MigrateResourcesTestCase(TestCase):
+    class MigrateResourcesTestCase(TestCase):
 
-    if dsattributes is None:
-        skip = "dsattributes module not available"
+        @inlineCallbacks
+        def test_migrateResources(self):
 
-    @inlineCallbacks
-    def test_migrateResources(self):
+            data = {
+                    dsattributes.kDSStdRecordTypeResources :
+                    [
+                        ['projector1', {
+                            strGUID : '6C99E240-E915-4012-82FA-99E0F638D7EF',
+                            strName : 'Projector 1'
+                        }],
+                        ['projector2', {
+                            strGUID : '7C99E240-E915-4012-82FA-99E0F638D7EF',
+                            strName : 'Projector 2'
+                        }],
+                    ],
+                    dsattributes.kDSStdRecordTypePlaces :
+                    [
+                        ['office1', {
+                            strGUID : '8C99E240-E915-4012-82FA-99E0F638D7EF',
+                            strName : 'Office 1'
+                        }],
+                    ],
+                }
 
-        data = {
-                dsattributes.kDSStdRecordTypeResources :
-                [
-                    ['projector1', {
-                        strGUID : '6C99E240-E915-4012-82FA-99E0F638D7EF',
-                        strName : 'Projector 1'
-                    }],
-                    ['projector2', {
-                        strGUID : '7C99E240-E915-4012-82FA-99E0F638D7EF',
-                        strName : 'Projector 2'
-                    }],
-                ],
-                dsattributes.kDSStdRecordTypePlaces :
-                [
-                    ['office1', {
-                        strGUID : '8C99E240-E915-4012-82FA-99E0F638D7EF',
-                        strName : 'Office 1'
-                    }],
-                ],
-            }
+            def queryMethod(sourceService, recordType, verbose=False):
+                return data[recordType]
 
-        def queryMethod(sourceService, recordType, verbose=False):
-            return data[recordType]
+            directoryService = StubDirectoryService(StubAugmentService())
+            yield migrateResources(None, directoryService, queryMethod=queryMethod)
+            for guid, recordType in (
+                ('6C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_resources),
+                ('7C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_resources),
+                ('8C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_locations),
+            ):
+                self.assertTrue(guid in directoryService.records)
+                record = directoryService.records[guid]
+                self.assertEquals(record.recordType, recordType)
 
-        directoryService = StubDirectoryService(StubAugmentService())
-        yield migrateResources(None, directoryService, queryMethod=queryMethod)
-        for guid, recordType in (
-            ('6C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_resources),
-            ('7C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_resources),
-            ('8C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_locations),
-        ):
-            self.assertTrue(guid in directoryService.records)
-            record = directoryService.records[guid]
-            self.assertEquals(record.recordType, recordType)
+                self.assertTrue(guid in StubAugmentService.records)
 
-            self.assertTrue(guid in StubAugmentService.records)
 
+            #
+            # Add more to OD and re-migrate
+            #
 
-        #
-        # Add more to OD and re-migrate
-        #
+            data[dsattributes.kDSStdRecordTypeResources].append(
+                ['projector3', {
+                    strGUID : '9C99E240-E915-4012-82FA-99E0F638D7EF',
+                    strName : 'Projector 3'
+                }]
+            )
+            data[dsattributes.kDSStdRecordTypePlaces].append(
+                ['office2', {
+                    strGUID : 'AC99E240-E915-4012-82FA-99E0F638D7EF',
+                    strName : 'Office 2'
+                }]
+            )
 
-        data[dsattributes.kDSStdRecordTypeResources].append(
-            ['projector3', {
-                strGUID : '9C99E240-E915-4012-82FA-99E0F638D7EF',
-                strName : 'Projector 3'
-            }]
-        )
-        data[dsattributes.kDSStdRecordTypePlaces].append(
-            ['office2', {
-                strGUID : 'AC99E240-E915-4012-82FA-99E0F638D7EF',
-                strName : 'Office 2'
-            }]
-        )
+            yield migrateResources(None, directoryService, queryMethod=queryMethod)
 
-        yield migrateResources(None, directoryService, queryMethod=queryMethod)
+            for guid, recordType in (
+                ('6C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_resources),
+                ('7C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_resources),
+                ('9C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_resources),
+                ('8C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_locations),
+                ('AC99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_locations),
+            ):
+                self.assertTrue(guid in directoryService.records)
+                record = directoryService.records[guid]
+                self.assertEquals(record.recordType, recordType)
 
-        for guid, recordType in (
-            ('6C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_resources),
-            ('7C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_resources),
-            ('9C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_resources),
-            ('8C99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_locations),
-            ('AC99E240-E915-4012-82FA-99E0F638D7EF', DirectoryService.recordType_locations),
-        ):
-            self.assertTrue(guid in directoryService.records)
-            record = directoryService.records[guid]
-            self.assertEquals(record.recordType, recordType)
-
-            self.assertTrue(guid in StubAugmentService.records)
+                self.assertTrue(guid in StubAugmentService.records)
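The test_resources.py restructuring above moves the imports inside a try/except and defines the whole test suite only when the OS X-only `dsattributes` module is importable. The same optional-import guard, sketched as a reusable helper (the helper and the dummy module name are mine):

```python
import importlib

def optional_module(name):
    """Return (module, True) if name imports cleanly, else (None, False).

    test_resources.py applies this idea at module scope: when the
    ImportError fires, RUN_TESTS is False and no test classes are defined.
    """
    try:
        return importlib.import_module(name), True
    except ImportError:
        return None, False

json_mod, have_json = optional_module("json")  # stdlib, always present
_, have_ds = optional_module("no_such_module_for_illustration")
```

One tradeoff worth noting: the previous version set a `skip` attribute on the test case, so trial reported the tests as skipped; with the `if RUN_TESTS:` guard, on platforms without `dsattributes` the tests are simply never defined.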

Modified: CalendarServer/branches/users/gaya/directorybacker/conf/auth/augments.dtd
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/conf/auth/augments.dtd	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/conf/auth/augments.dtd	2013-08-22 21:45:36 UTC (rev 11633)
@@ -16,7 +16,17 @@
 
 <!ELEMENT augments (record*) >
 
-  <!ELEMENT record (uid, enable, (server-id, partition-id?)?, enable-calendar?, enable-addressbook?, auto-schedule?, auto-schedule-mode?, auto-accept-group?)>
+  <!ELEMENT record (
+  		uid,
+  		enable,
+  		(server-id, partition-id?)?,
+  		enable-calendar?,
+  		enable-addressbook?,
+  		enable-login?,
+  		auto-schedule?,
+  		auto-schedule-mode?,
+  		auto-accept-group?
+  )>
     <!ATTLIST record repeat CDATA "1">
 
   <!ELEMENT uid                (#PCDATA)>
@@ -25,6 +35,7 @@
   <!ELEMENT partition-id       (#PCDATA)>
   <!ELEMENT enable-calendar    (#PCDATA)>
   <!ELEMENT enable-addressbook (#PCDATA)>
+  <!ELEMENT enable-login       (#PCDATA)>
   <!ELEMENT auto-schedule      (#PCDATA)>
   <!ELEMENT auto-schedule-mode (#PCDATA)>
   <!ELEMENT auto-accept-group  (#PCDATA)>

Modified: CalendarServer/branches/users/gaya/directorybacker/conf/caldavd-apple.plist
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/conf/caldavd-apple.plist	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/conf/caldavd-apple.plist	2013-08-22 21:45:36 UTC (rev 11633)
@@ -111,11 +111,18 @@
             <string>-c log_lock_waits=TRUE</string>
             <string>-c deadlock_timeout=10</string>
             <string>-c log_line_prefix='%m [%p] '</string>
+            <string>-c logging_collector=on</string>
+            <string>-c log_truncate_on_rotation=on</string>
+            <string>-c log_directory=/var/log/caldavd/postgresql</string>
+            <string>-c log_filename=postgresql_%w.log</string>
+            <string>-c log_rotation_age=1440</string>
         </array>
         <key>ExtraConnections</key>
         <integer>20</integer>
         <key>ClusterName</key>
         <string>cluster.pg</string>
+        <key>LogFile</key>
+        <string>xpg_ctl.log</string>
     </dict>
 
     <!-- Data root -->
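The plist strings added above become `-c name=value` switches on the postgres command line. A sketch of that mapping (values copied from the diff; the list and loop are mine for illustration):

```python
# Extra postgres server options from caldavd-apple.plist, as (name, value)
# pairs. %w in log_filename is postgres's day-of-week escape (0-6), and
# log_rotation_age is in minutes, so 1440 rotates once per day.
pg_log_options = [
    ("logging_collector", "on"),
    ("log_truncate_on_rotation", "on"),
    ("log_directory", "/var/log/caldavd/postgresql"),
    ("log_filename", "postgresql_%w.log"),
    ("log_rotation_age", "1440"),
]
pg_args = ["-c %s=%s" % (key, value) for key, value in pg_log_options]
```

Taken together these settings keep seven rotating daily log files: each weekday gets its own `postgresql_<n>.log`, and `log_truncate_on_rotation` overwrites that file on its next weekly pass, capping log growth at one week.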

Modified: CalendarServer/branches/users/gaya/directorybacker/conf/caldavd-test.plist
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/conf/caldavd-test.plist	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/conf/caldavd-test.plist	2013-08-22 21:45:36 UTC (rev 11633)
@@ -773,6 +773,8 @@
         <false/>
         <key>AttendeeRefreshBatch</key>
         <integer>0</integer>
+        <key>AttendeeRefreshCountLimit</key>
+        <integer>50</integer>
 
 		<key>AutoSchedule</key>
 		<dict>

Modified: CalendarServer/branches/users/gaya/directorybacker/contrib/launchd/calendarserver.plist
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/contrib/launchd/calendarserver.plist	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/contrib/launchd/calendarserver.plist	2013-08-22 21:45:36 UTC (rev 11633)
@@ -31,13 +31,16 @@
     <string>/Applications/Server.app/Contents/ServerRoot/usr/sbin/caldavd</string>
     <string>-X</string>
     <string>-R</string>
-    <string>caldav_kqueue</string>
+    <string>kqueue</string>
     <string>-o</string>
     <string>FailIfUpgradeNeeded=False</string>
   </array>
 
   <key>InitGroups</key>
   <true/>
+  
+  <key>AbandonProcessGroup</key>
+  <true/>
 
   <key>KeepAlive</key>
   <true/>

Modified: CalendarServer/branches/users/gaya/directorybacker/contrib/performance/loadtest/thresholds.json
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/contrib/performance/loadtest/thresholds.json	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/contrib/performance/loadtest/thresholds.json	2013-08-22 21:45:36 UTC (rev 11633)
@@ -10,8 +10,8 @@
 			"PUT{event}"                    : [ 100.0, 100.0, 100.0,  75.0,  50.0,  25.0,   0.5],
 			"PUT{attendee-small}"           : [ 100.0, 100.0, 100.0,  75.0,  50.0,  25.0,   5.0],
 			"PUT{attendee-medium}"          : [ 100.0, 100.0, 100.0,  75.0,  50.0,  25.0,   5.0], 
-			"PUT{attendee-large}"           : [ 100.0, 100.0, 100.0,  75.0,  50.0,  25.0,   5.0],
-			"PUT{attendee-huge}"            : [ 100.0, 100.0, 100.0, 100.0, 100.0,  50.0,  25.0],
+			"PUT{attendee-large}"           : [ 100.0, 100.0, 100.0,  100.0, 100.0, 100.0, 25.0],
+			"PUT{attendee-huge}"            : [ 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0],
 			"PUT{organizer-small}"          : [ 100.0, 100.0, 100.0,  75.0,  50.0,  25.0,   5.0], 
 			"PUT{organizer-medium}"         : [ 100.0, 100.0, 100.0,  75.0,  50.0,  25.0,   5.0],
 			"PUT{organizer-large}"          : [ 100.0, 100.0, 100.0, 100.0, 100.0,  75.0,  25.0],

Copied: CalendarServer/branches/users/gaya/directorybacker/contrib/performance/sqlusage/requests/propfind_invite.py (from rev 11631, CalendarServer/trunk/contrib/performance/sqlusage/requests/propfind_invite.py)
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/contrib/performance/sqlusage/requests/propfind_invite.py	                        (rev 0)
+++ CalendarServer/branches/users/gaya/directorybacker/contrib/performance/sqlusage/requests/propfind_invite.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -0,0 +1,54 @@
+##
+# Copyright (c) 2012-2013 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from caldavclientlibrary.protocol.http.data.string import ResponseDataString
+from caldavclientlibrary.protocol.webdav.definitions import statuscodes, \
+    headers
+from caldavclientlibrary.protocol.webdav.propfind import PropFind
+from contrib.performance.sqlusage.requests.httpTests import HTTPTestBase
+from caldavclientlibrary.protocol.caldav.definitions import csxml
+
+class PropfindInviteTest(HTTPTestBase):
+    """
+    A propfind operation
+    """
+
+    def __init__(self, label, sessions, logFilePath, depth=1):
+        super(PropfindInviteTest, self).__init__(label, sessions, logFilePath)
+        self.depth = headers.Depth1 if depth == 1 else headers.Depth0
+
+
+    def doRequest(self):
+        """
+        Execute the actual HTTP request.
+        """
+        props = (
+            csxml.invite,
+        )
+
+        # Create WebDAV propfind
+        request = PropFind(self.sessions[0], self.sessions[0].calendarHref, self.depth, props)
+        result = ResponseDataString()
+        request.setOutput(result)
+
+        # Process it
+        self.sessions[0].runSession(request)
+
+        # If it's a 207 we want to parse the XML
+        if request.getStatusCode() == statuscodes.MultiStatus:
+            pass
+        else:
+            raise RuntimeError("Propfind request failed: %s" % (request.getStatusCode(),))
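The new test has two decision points: Depth header selection in `__init__` and the 207 Multi-Status check in `doRequest`. A stand-alone sketch of both, with plain values standing in for the caldavclientlibrary constants (`headers.Depth0`/`Depth1`, `statuscodes.MultiStatus`):

```python
# Illustrative stand-ins for caldavclientlibrary constants.
MULTI_STATUS = 207

def depth_header(depth):
    """Depth "1" unless the caller asked for 0, as in PropfindInviteTest.__init__."""
    return "1" if depth == 1 else "0"

def check_propfind(status):
    """Accept 207 Multi-Status; raise for anything else, as in doRequest."""
    if status != MULTI_STATUS:
        raise RuntimeError("Propfind request failed: %s" % (status,))
```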

Modified: CalendarServer/branches/users/gaya/directorybacker/contrib/performance/sqlusage/sqlusage.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/contrib/performance/sqlusage/sqlusage.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/contrib/performance/sqlusage/sqlusage.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -23,6 +23,7 @@
 from contrib.performance.sqlusage.requests.invite import InviteTest
 from contrib.performance.sqlusage.requests.multiget import MultigetTest
 from contrib.performance.sqlusage.requests.propfind import PropfindTest
+from contrib.performance.sqlusage.requests.propfind_invite import PropfindInviteTest
 from contrib.performance.sqlusage.requests.put import PutTest
 from contrib.performance.sqlusage.requests.query import QueryTest
 from contrib.performance.sqlusage.requests.sync import SyncTest
@@ -31,6 +32,7 @@
 import getopt
 import itertools
 import sys
+from caldavclientlibrary.client.principal import principalCache
 
 """
 This tool is designed to analyze how SQL is being used for various HTTP requests.
@@ -41,7 +43,8 @@
 with calendar size can be plotted.
 """
 
-EVENT_COUNTS = (0, 1, 5, 10, 50, 100, 500, 1000, 5000)
+EVENT_COUNTS = (0, 1, 5, 10, 50, 100, 500, 1000,)
+SHAREE_COUNTS = (0, 1, 5, 10, 50, 100,)
 
 ICAL = """BEGIN:VCALENDAR
 CALSCALE:GREGORIAN
@@ -78,29 +81,31 @@
 
 class SQLUsageSession(CalDAVSession):
 
-    def __init__(self, server, port=None, ssl=False, user="", pswd="", principal=None, root=None, logging=False):
+    def __init__(self, server, port=None, ssl=False, user="", pswd="", principal=None, root=None, calendar="calendar", logging=False):
 
         super(SQLUsageSession, self).__init__(server, port, ssl, user, pswd, principal, root, logging)
         self.homeHref = "/calendars/users/%s/" % (self.user,)
-        self.calendarHref = "/calendars/users/%s/calendar/" % (self.user,)
+        self.calendarHref = "/calendars/users/%s/%s/" % (self.user, calendar,)
         self.inboxHref = "/calendars/users/%s/inbox/" % (self.user,)
+        self.notificationHref = "/calendars/users/%s/notification/" % (self.user,)
 
 
 
-class SQLUsage(object):
+class EventSQLUsage(object):
 
-    def __init__(self, server, port, users, pswds, logFilePath):
+    def __init__(self, server, port, users, pswds, logFilePath, compact):
         self.server = server
         self.port = port
         self.users = users
         self.pswds = pswds
         self.logFilePath = logFilePath
+        self.compact = compact
         self.requestLabels = []
         self.results = {}
         self.currentCount = 0
 
 
-    def runLoop(self, counts):
+    def runLoop(self, event_counts):
 
         # Make the sessions
         sessions = [
@@ -110,13 +115,13 @@
 
         # Set of requests to execute
         requests = [
-            MultigetTest("multiget-1", sessions, self.logFilePath, 1),
-            MultigetTest("multiget-50", sessions, self.logFilePath, 50),
-            PropfindTest("propfind-cal", sessions, self.logFilePath, 1),
-            SyncTest("sync-full", sessions, self.logFilePath, True, 0),
-            SyncTest("sync-1", sessions, self.logFilePath, False, 1),
-            QueryTest("query-1", sessions, self.logFilePath, 1),
-            QueryTest("query-10", sessions, self.logFilePath, 10),
+            MultigetTest("mget-1" if self.compact else "multiget-1", sessions, self.logFilePath, 1),
+            MultigetTest("mget-50" if self.compact else "multiget-50", sessions, self.logFilePath, 50),
+            PropfindTest("prop-cal" if self.compact else "propfind-cal", sessions, self.logFilePath, 1),
+            SyncTest("s-full" if self.compact else "sync-full", sessions, self.logFilePath, True, 0),
+            SyncTest("s-1" if self.compact else "sync-1", sessions, self.logFilePath, False, 1),
+            QueryTest("q-1" if self.compact else "query-1", sessions, self.logFilePath, 1),
+            QueryTest("q-10" if self.compact else "query-10", sessions, self.logFilePath, 10),
             PutTest("put", sessions, self.logFilePath),
             InviteTest("invite", sessions, self.logFilePath),
         ]
@@ -129,7 +134,7 @@
             session.getPropertiesOnHierarchy(URL(path=session.calendarHref), props)
 
         # Now loop over sets of events
-        for count in counts:
+        for count in event_counts:
             print("Testing count = %d" % (count,))
             self.ensureEvents(sessions[0], sessions[0].calendarHref, count)
             result = {}
@@ -181,6 +186,113 @@
 
 
 
+class SharerSQLUsage(object):
+
+    def __init__(self, server, port, users, pswds, logFilePath, compact):
+        self.server = server
+        self.port = port
+        self.users = users
+        self.pswds = pswds
+        self.logFilePath = logFilePath
+        self.compact = compact
+        self.requestLabels = []
+        self.results = {}
+        self.currentCount = 0
+
+
+    def runLoop(self, sharee_counts):
+
+        # Make the sessions
+        sessions = [
+            SQLUsageSession(self.server, self.port, user=user, pswd=pswd, root="/", calendar="shared")
+            for user, pswd in itertools.izip(self.users, self.pswds)
+        ]
+        sessions = sessions[0:1]
+
+        # Create the calendar first
+        sessions[0].makeCalendar(URL(path=sessions[0].calendarHref))
+
+        # Set of requests to execute
+        requests = [
+            MultigetTest("mget-1" if self.compact else "multiget-1", sessions, self.logFilePath, 1),
+            MultigetTest("mget-50" if self.compact else "multiget-50", sessions, self.logFilePath, 50),
+            PropfindInviteTest("propfind", sessions, self.logFilePath, 1),
+            SyncTest("s-full" if self.compact else "sync-full", sessions, self.logFilePath, True, 0),
+            SyncTest("s-1" if self.compact else "sync-1", sessions, self.logFilePath, False, 1),
+            QueryTest("q-1" if self.compact else "query-1", sessions, self.logFilePath, 1),
+            QueryTest("q-10" if self.compact else "query-10", sessions, self.logFilePath, 10),
+            PutTest("put", sessions, self.logFilePath),
+        ]
+        self.requestLabels = [request.label for request in requests]
+
+        # Warm-up server by doing shared calendar propfinds
+        props = (davxml.resourcetype,)
+        for session in sessions:
+            session.getPropertiesOnHierarchy(URL(path=session.calendarHref), props)
+
+        # Now loop over sets of events
+        for count in sharee_counts:
+            print("Testing count = %d" % (count,))
+            self.ensureSharees(sessions[0], sessions[0].calendarHref, count)
+            result = {}
+            for request in requests:
+                print("  Test = %s" % (request.label,))
+                result[request.label] = request.execute(count)
+            self.results[count] = result
+
+
+    def report(self):
+
+        self._printReport("SQL Statement Count", "count", "%d")
+        self._printReport("SQL Rows Returned", "rows", "%d")
+        self._printReport("SQL Time", "timing", "%.1f")
+
+
+    def _printReport(self, title, attr, colFormat):
+        table = tables.Table()
+
+        print(title)
+        headers = ["Sharees"] + self.requestLabels
+        table.addHeader(headers)
+        formats = [tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY)] + \
+            [tables.Table.ColumnFormat(colFormat, tables.Table.ColumnFormat.RIGHT_JUSTIFY)] * len(self.requestLabels)
+        table.setDefaultColumnFormats(formats)
+        for k in sorted(self.results.keys()):
+            row = [k] + [getattr(self.results[k][item], attr) for item in self.requestLabels]
+            table.addRow(row)
+        os = StringIO()
+        table.printTable(os=os)
+        print(os.getvalue())
+        print("")
+
+
+    def ensureSharees(self, session, calendarhref, n):
+        """
+        Make sure the required number of sharees are present in the calendar.
+
+        @param n: number of sharees
+        @type n: C{int}
+        """
+
+        users = []
+        uids = []
+        for i in range(n - self.currentCount):
+            index = self.currentCount + i + 2
+            users.append("user%02d" % (index,))
+            uids.append("urn:uuid:user%02d" % (index,))
+        session.addInvitees(URL(path=calendarhref), uids, True)
+
+        # Now accept each one
+        for user in users:
+            acceptor = SQLUsageSession(self.server, self.port, user=user, pswd=user, root="/", calendar="shared")
+            notifications = acceptor.getNotifications(URL(path=acceptor.notificationHref))
+            principal = principalCache.getPrincipal(acceptor, acceptor.principalPath)
+            acceptor.processNotification(principal, notifications[0], True)
+
+        self.currentCount = n
+
+
+
 def usage(error_msg=None):
     if error_msg:
         print(error_msg)
@@ -192,7 +304,11 @@
     --port         Server port
     --user         User name
     --pswd         Password
-    --counts       Comma-separated list of event counts to test
+    --event        Do event scaling
+    --share        Do sharee scaling
+    --event-counts       Comma-separated list of event counts to test
+    --sharee-counts      Comma-separated list of sharee counts to test
+    --compact      Make printed tables as thin as possible
 
 Arguments:
     FILE           File name for sqlstats.log to analyze.
@@ -213,10 +329,26 @@
     users = ("user01", "user02",)
     pswds = ("user01", "user02",)
     file = "sqlstats.logs"
-    counts = EVENT_COUNTS
+    event_counts = EVENT_COUNTS
+    sharee_counts = SHAREE_COUNTS
+    compact = False
 
-    options, args = getopt.getopt(sys.argv[1:], "h", ["server=", "port=", "user=", "pswd=", "counts=", ])
+    do_all = True
+    do_event = False
+    do_share = False
 
+    options, args = getopt.getopt(
+        sys.argv[1:],
+        "h",
+        [
+            "server=", "port=",
+            "user=", "pswd=",
+            "compact",
+            "event", "share",
+            "event-counts=", "sharee-counts=",
+        ]
+    )
+
     for option, value in options:
         if option == "-h":
             usage()
@@ -228,8 +360,18 @@
             users = value.split(",")
         elif option == "--pswd":
             pswds = value.split(",")
-        elif option == "--counts":
-            counts = [int(i) for i in value.split(",")]
+        elif option == "--compact":
+            compact = True
+        elif option == "--event":
+            do_all = False
+            do_event = True
+        elif option == "--share":
+            do_all = False
+            do_share = True
+        elif option == "--event-counts":
+            event_counts = [int(i) for i in value.split(",")]
+        elif option == "--sharee-counts":
+            sharee_counts = [int(i) for i in value.split(",")]
         else:
             usage("Unrecognized option: %s" % (option,))
 
@@ -239,6 +381,12 @@
     elif len(args) != 0:
         usage("Must supply zero or one file arguments")
 
-    sql = SQLUsage(server, port, users, pswds, file)
-    sql.runLoop(counts)
-    sql.report()
+    if do_all or do_event:
+        sql = EventSQLUsage(server, port, users, pswds, file, compact)
+        sql.runLoop(event_counts)
+        sql.report()
+
+    if do_all or do_share:
+        sql = SharerSQLUsage(server, port, users, pswds, file, compact)
+        sql.runLoop(sharee_counts)
+        sql.report()
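The reworked option parsing above replaces the single `--counts` flag with `--event-counts`/`--sharee-counts` plus `--event`/`--share` mode switches, where passing no mode switch runs both loops. A stand-alone sketch of that getopt pattern (simplified; the real script handles more options):

```python
import getopt

def parse(argv):
    """Sketch of sqlusage's mode-flag parsing.

    Returns (run_event_loop, run_share_loop, event_counts); with no mode
    flag given, do_all stays True and both loops run.
    """
    do_all, do_event, do_share = True, False, False
    event_counts = [0, 1, 5]
    options, _args = getopt.getopt(argv, "h", ["event", "share", "event-counts="])
    for option, value in options:
        if option == "--event":
            do_all, do_event = False, True
        elif option == "--share":
            do_all, do_share = False, True
        elif option == "--event-counts":
            event_counts = [int(i) for i in value.split(",")]
    return (do_all or do_event, do_all or do_share, event_counts)
```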

Modified: CalendarServer/branches/users/gaya/directorybacker/contrib/tools/readStats.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/contrib/tools/readStats.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/contrib/tools/readStats.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -472,7 +472,7 @@
         for ua in stat[index]["user-agent"]:
             uas[ua] += stat[index]["user-agent"][ua]
 
-    printUserCounts({"user-agent": uas})
+    printAgentCounts({"user-agent": uas})
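The hunk above only renames the reporting call; for context, the surrounding aggregation (summing per-stat user-agent counts into one mapping before printing) can be sketched with a `Counter`:

```python
from collections import Counter

def merge_user_agents(stats):
    """Sum user-agent counts across stat blocks, as readStats.py does
    before handing the totals to its printing helper."""
    uas = Counter()
    for stat in stats:
        uas.update(stat.get("user-agent", {}))
    return dict(uas)
```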
 
 
 

Modified: CalendarServer/branches/users/gaya/directorybacker/doc/Admin/ExtendedLogItems.rst
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/doc/Admin/ExtendedLogItems.rst	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/doc/Admin/ExtendedLogItems.rst	2013-08-22 21:45:36 UTC (rev 11633)
@@ -1,7 +1,7 @@
 Apache-style Access Log Extensions
 ==================================
 
-Calendar Server extends the Apache log file format it uses by:
+If the administrator enables the EnableExtendedAccessLog config option, Calendar Server extends the Apache log file format it uses by:
 
  * Adding a "sub-method" to the HTTP method field.
  * Adding key-value pairs at the end of log lines.

Modified: CalendarServer/branches/users/gaya/directorybacker/support/build.sh
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/support/build.sh	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/support/build.sh	2013-08-22 21:45:36 UTC (rev 11633)
@@ -112,6 +112,10 @@
     md5 () { "$(type -p md5sum)" "$@"; }
   fi;
 
+  if type -ft sha1sum > /dev/null; then
+    if [ -z "${hash}" ]; then hash="sha1sum"; fi;
+    sha1 () { "$(type -p sha1sum)" "$@"; }
+  fi;
   if type -ft shasum > /dev/null; then
     if [ -z "${hash}" ]; then hash="sha1"; fi;
     sha1 () { "$(type -p shasum)" "$@"; }
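The new build.sh hunk checks `sha1sum` (common on Linux) before `shasum`; the `-z "${hash}"` guard means each candidate only sets `hash` if nothing set it earlier, so the first available tool wins. That guarded-assignment pattern, sketched as a pure Python function (tool names as in the script):

```python
def pick_hash(available):
    """Mirror build.sh's guarded assignment: a candidate only sets the
    hash tool if nothing set it earlier, so the first available wins."""
    hash_tool = None
    for tool in ("sha1sum", "shasum"):
        if tool in available and hash_tool is None:
            hash_tool = tool
    return hash_tool
```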

Modified: CalendarServer/branches/users/gaya/directorybacker/test
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/test	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/test	2013-08-22 21:45:36 UTC (rev 11633)
@@ -32,7 +32,12 @@
 coverage="";
 m_twisted="";
 numjobs="";
+reactor="";
 
+if [ "$(uname -s)" == "Darwin" ]; then
+  reactor="--reactor=kqueue";
+fi;
+
 usage ()
 {
   program="$(basename "$0")";
@@ -83,7 +88,7 @@
 find "${wd}" -name \*.pyc -print0 | xargs -0 rm;
 
 mkdir -p "${wd}/data";
-cd "${wd}" && "${python}" "${trial}" --temp-directory="${wd}/data/trial" --rterrors ${random} ${until_fail} ${no_colour} ${coverage} ${numjobs} ${test_modules};
+cd "${wd}" && "${python}" "${trial}" --temp-directory="${wd}/data/trial" --rterrors ${reactor} ${random} ${until_fail} ${no_colour} ${coverage} ${numjobs} ${test_modules};
 
 if ${flaky}; then
   echo "";
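The test driver now selects the kqueue reactor on Darwin only, leaving the default reactor elsewhere. The platform check, sketched as a pure function whose argument stands in for the `uname -s` output:

```python
def reactor_args(uname_s):
    """Extra trial arguments chosen by the test script: kqueue on
    Darwin, the platform default elsewhere."""
    if uname_s == "Darwin":
        return ["--reactor=kqueue"]
    return []
```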

Modified: CalendarServer/branches/users/gaya/directorybacker/testserver
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/testserver	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/testserver	2013-08-22 21:45:36 UTC (rev 11633)
@@ -27,32 +27,35 @@
 serverinfo="${cdt}/scripts/server/serverinfo.xml";
 printres="";
 subdir="";
+random="--random";
 
 usage ()
 {
   program="$(basename "$0")";
   echo "Usage: ${program} [-v] [-s serverinfo]";
   echo "Options:";
+  echo "        -d  Set the script subdirectory";
   echo "        -h  Print this help and exit";
+  echo "        -o  Execute tests in order";
+  echo "        -r  Print request and response";
+  echo "        -s  Set the serverinfo.xml";
   echo "        -t  Set the CalDAVTester directory";
-  echo "        -d  Set the script subdirectory";
-  echo "        -s  Set the serverinfo.xml";
-  echo "        -r  Print request and response";
   echo "        -v  Verbose.";
 
   if [ "${1-}" == "-" ]; then return 0; fi;
   exit 64;
 }
 
-while getopts 'hvrt:s:d:' option; do
+while getopts 'hvrot:s:d:' option; do
   case "$option" in 
     '?') usage; ;;
     'h') usage -; exit 0; ;;
-    't')   cdt="${OPTARG}"; serverinfo="${OPTARG}/scripts/server/serverinfo.xml"; ;;
-    'd')   subdir="--subdir=${OPTARG}"; ;;
-    's')   serverinfo="${OPTARG}"; ;;
-    'r')   printres="--always-print-request --always-print-response"; ;;
-    'v')   verbose="v"; ;;
+    't') cdt="${OPTARG}"; serverinfo="${OPTARG}/scripts/server/serverinfo.xml"; ;;
+    'd') subdir="--subdir=${OPTARG}"; ;;
+    's') serverinfo="${OPTARG}"; ;;
+    'r') printres="--always-print-request --always-print-response"; ;;
+    'v') verbose="v"; ;;
+    'o') random=""; ;;
   esac;
 done;
 
@@ -68,5 +71,5 @@
 
 source "${wd}/support/shell.sh";
 
-cd "${cdt}" && "${python}" testcaldav.py --print-details-onfail ${printres} -s "${serverinfo}" ${subdir} "$@";
+cd "${cdt}" && "${python}" testcaldav.py ${random} --print-details-onfail ${printres} -s "${serverinfo}" ${subdir} "$@";
 

Modified: CalendarServer/branches/users/gaya/directorybacker/twext/enterprise/dal/model.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twext/enterprise/dal/model.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twext/enterprise/dal/model.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -433,6 +433,9 @@
 
     def __init__(self, table, columns, unique=False):
         self.name = "%s%s:(%s)" % (table.name, "-unique" if unique else "", ",".join([col.name for col in columns]))
+        self.table = table
+        self.unique = unique
+        self.columns = columns
 
 
     def compare(self, other):
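The hunk stores `table`, `unique`, and `columns` on the index object; the `name` it builds follows the existing `table[-unique]:(col,...)` scheme from the line above. That naming, sketched stand-alone:

```python
def index_name(table_name, column_names, unique=False):
    """Build an index name the way model.py's __init__ does:
    "<table>[-unique]:(<col>,<col>,...)"."""
    return "%s%s:(%s)" % (
        table_name, "-unique" if unique else "", ",".join(column_names))
```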

Copied: CalendarServer/branches/users/gaya/directorybacker/twext/internet/fswatch.py (from rev 11631, CalendarServer/trunk/twext/internet/fswatch.py)
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twext/internet/fswatch.py	                        (rev 0)
+++ CalendarServer/branches/users/gaya/directorybacker/twext/internet/fswatch.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -0,0 +1,169 @@
+##
+# Copyright (c) 2013 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+"""
+Watch the availability of a file system directory
+"""
+
+import os
+from zope.interface import Interface
+from twisted.internet import reactor
+from twisted.python.log import Logger
+
+try:
+    from select import (kevent, KQ_FILTER_VNODE, KQ_EV_ADD, KQ_EV_ENABLE,
+                        KQ_EV_CLEAR, KQ_NOTE_DELETE, KQ_NOTE_RENAME, KQ_EV_EOF)
+    kqueueSupported = True
+except ImportError:
+    # kqueue not supported on this platform
+    kqueueSupported = False
+
+
+class IDirectoryChangeListenee(Interface):
+    """
+    A delegate of DirectoryChangeListener
+    """
+
+    def disconnected(): #@NoSelf
+        """
+        The directory has been unmounted
+        """
+
+    def deleted(): #@NoSelf
+        """
+        The directory has been deleted
+        """
+
+    def renamed(): #@NoSelf
+        """
+        The directory has been renamed
+        """
+
+    def connectionLost(reason): #@NoSelf
+        """
+        The file descriptor has been closed
+        """
+
+
+#TODO: better way to tell if reactor is kqueue or not
+if kqueueSupported and hasattr(reactor, "_doWriteOrRead"):
+
+
+    def patchReactor(reactor):
+        # Wrap _doWriteOrRead to support KQ_FILTER_VNODE
+        origDoWriteOrRead = reactor._doWriteOrRead
+        def _doWriteOrReadOrVNodeEvent(selectable, fd, event):
+            origDoWriteOrRead(selectable, fd, event)
+            if event.filter == KQ_FILTER_VNODE:
+                selectable.vnodeEventHappened(event)
+        reactor._doWriteOrRead = _doWriteOrReadOrVNodeEvent
+
+    patchReactor(reactor)
+
+
+
+    class DirectoryChangeListener(Logger, object):
+        """
+        Listens for the removal, renaming, or general unavailability of a
+        given directory, and lets a delegate listenee know about them.
+        """
+
+        def __init__(self, reactor, dirname, listenee):
+            """
+            @param reactor: the reactor
+            @param dirname: the full path to the directory to watch; it must
+                already exist
+            @param listenee: the delegate to call
+            @type listenee: IDirectoryChangeListenee
+            """
+            self._reactor = reactor
+            self._fd = os.open(dirname, os.O_RDONLY)
+            self._dirname = dirname
+            self._listenee = listenee
+
+
+        def logPrefix(self):
+            return repr(self._dirname)
+
+
+        def fileno(self):
+            return self._fd
+
+
+        def vnodeEventHappened(self, evt):
+            if evt.flags & KQ_EV_EOF:
+                self._listenee.disconnected()
+            if evt.fflags & KQ_NOTE_DELETE:
+                self._listenee.deleted()
+            if evt.fflags & KQ_NOTE_RENAME:
+                self._listenee.renamed()
+
+
+        def startListening(self):
+            ke = kevent(self._fd, filter=KQ_FILTER_VNODE,
+                        flags=(KQ_EV_ADD | KQ_EV_ENABLE | KQ_EV_CLEAR),
+                        fflags=KQ_NOTE_DELETE | KQ_NOTE_RENAME)
+            self._reactor._kq.control([ke], 0, None)
+            self._reactor._selectables[self._fd] = self
+
+
+        def connectionLost(self, reason):
+            os.close(self._fd)
+            self._listenee.connectionLost(reason)
+
+
+else:
+
+    # TODO: implement this for systems without kqueue support:
+
+    class DirectoryChangeListener(Logger, object):
+        """
+        Listens for the removal, renaming, or general unavailability of a
+        given directory, and lets a delegate listenee know about them.
+        """
+
+        def __init__(self, reactor, dirname, listenee):
+            """
+            @param reactor: the reactor
+            @param dirname: the full path to the directory to watch
+            @param listenee: the delegate to call
+            """
+            self._reactor = reactor
+            self._fd = os.open(dirname, os.O_RDONLY)
+            self._dirname = dirname
+            self._listenee = listenee
+
+
+        def logPrefix(self):
+            return repr(self._dirname)
+
+
+        def fileno(self):
+            return self._fd
+
+
+        def vnodeEventHappened(self, evt):
+            pass
+
+
+        def startListening(self):
+            pass
+
+
+        def connectionLost(self, reason):
+            os.close(self._fd)
+            self._listenee.connectionLost(reason)
+
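fswatch.py gates its kqueue implementation on a successful import of the VNODE constants, so platforms without kqueue (e.g. Linux) silently get the stub class. That feature-detection idiom, sketched stand-alone:

```python
# Sketch of fswatch.py's feature detection: try the platform-specific
# names, and let an ImportError select the fallback implementation.
try:
    from select import KQ_FILTER_VNODE  # only exists where kqueue does
    kqueueSupported = True
except ImportError:
    kqueueSupported = False

def watcher_backend():
    """Name the code path fswatch.py would take on this platform."""
    return "kqueue" if kqueueSupported else "stub"
```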

Copied: CalendarServer/branches/users/gaya/directorybacker/twext/internet/test/test_fswatch.py (from rev 11631, CalendarServer/trunk/twext/internet/test/test_fswatch.py)
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twext/internet/test/test_fswatch.py	                        (rev 0)
+++ CalendarServer/branches/users/gaya/directorybacker/twext/internet/test/test_fswatch.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -0,0 +1,167 @@
+##
+# Copyright (c) 2013 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+"""
+Tests for L{twext.internet.fswatch}.
+"""
+
+from twext.internet.fswatch import DirectoryChangeListener, patchReactor, \
+    IDirectoryChangeListenee
+from twisted.internet.kqreactor import KQueueReactor
+from twisted.python.filepath import FilePath
+from twisted.trial.unittest import TestCase
+from zope.interface import implements
+
+
+class KQueueReactorTestFixture(object):
+
+    def __init__(self, testCase, action=None, timeout=10):
+        """
+        Creates a kqueue reactor for use in unit tests.  The reactor is patched
+        with the vnode event handler.  Once the reactor is running, it will
+        call a supplied method.  It's expected that the method will ultimately
+        trigger the stop() of the reactor.  Otherwise the reactor is stopped
+        after the given timeout (10 seconds by default).
+
+        @param testCase: a test method which is needed for adding cleanup to
+        @param action: a method which will get called after the reactor is
+            running
+        @param timeout: how many seconds to keep the reactor running before
+            giving up and stopping it
+        """
+        self.testCase = testCase
+        self.reactor = KQueueReactor()
+        patchReactor(self.reactor)
+        self.action = action
+        self.timeout = timeout
+
+        def maybeStop():
+            if self.reactor.running:
+                return self.reactor.stop()
+
+        self.testCase.addCleanup(maybeStop)
+
+
+    def runReactor(self):
+        """
+        Run the test reactor, adding cleanup code to stop it after a timeout,
+        and calling the action method.
+        """
+        def getReadyToStop():
+            self.reactor.callLater(self.timeout, self.reactor.stop)
+        self.reactor.callWhenRunning(getReadyToStop)
+        if self.action is not None:
+            self.reactor.callWhenRunning(self.action)
+        self.reactor.run(installSignalHandlers=False)
+
+
+
+class DataStoreMonitor(object):
+    """
+    Stub IDirectoryChangeListenee
+    """
+    implements(IDirectoryChangeListenee)
+
+
+    def __init__(self, reactor, storageService):
+        """
+        @param storageService: the service making use of the DataStore
+            directory; we send it a hardStop() to shut it down
+        """
+        self._reactor = reactor
+        self._storageService = storageService
+        self.methodCalled = ""
+
+
+    def disconnected(self):
+        self.methodCalled = "disconnected"
+        self._storageService.hardStop()
+        self._reactor.stop()
+
+
+    def deleted(self):
+        self.methodCalled = "deleted"
+        self._storageService.hardStop()
+        self._reactor.stop()
+
+
+    def renamed(self):
+        self.methodCalled = "renamed"
+        self._storageService.hardStop()
+        self._reactor.stop()
+
+
+    def connectionLost(self, reason):
+        pass
+
+
+
+class StubStorageService(object):
+    """
+    Implements hardStop for testing
+    """
+
+    def __init__(self, ignored):
+        self.stopCalled = False
+
+
+    def hardStop(self):
+        self.stopCalled = True
+
+
+
+class DirectoryChangeListenerTestCase(TestCase):
+
+    def test_delete(self):
+        """
+        Verify directory deletions can be monitored
+        """
+
+        self.tmpdir = FilePath(self.mktemp())
+        self.tmpdir.makedirs()
+
+        def deleteAction():
+            self.tmpdir.remove()
+
+        resource = KQueueReactorTestFixture(self, deleteAction)
+        storageService = StubStorageService(resource.reactor)
+        delegate = DataStoreMonitor(resource.reactor, storageService)
+        dcl = DirectoryChangeListener(resource.reactor, self.tmpdir.path, delegate)
+        dcl.startListening()
+        resource.runReactor()
+        self.assertTrue(storageService.stopCalled)
+        self.assertEquals(delegate.methodCalled, "deleted")
+
+
+    def test_rename(self):
+        """
+        Verify directory renames can be monitored
+        """
+
+        self.tmpdir = FilePath(self.mktemp())
+        self.tmpdir.makedirs()
+
+        def renameAction():
+            self.tmpdir.moveTo(FilePath(self.mktemp()))
+
+        resource = KQueueReactorTestFixture(self, renameAction)
+        storageService = StubStorageService(resource.reactor)
+        delegate = DataStoreMonitor(resource.reactor, storageService)
+        dcl = DirectoryChangeListener(resource.reactor, self.tmpdir.path, delegate)
+        dcl.startListening()
+        resource.runReactor()
+        self.assertTrue(storageService.stopCalled)
+        self.assertEquals(delegate.methodCalled, "renamed")

Modified: CalendarServer/branches/users/gaya/directorybacker/twext/python/log.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twext/python/log.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twext/python/log.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -139,6 +139,23 @@
             raise InvalidLogLevelError(name)
 
 
+    @classmethod
+    def _priorityForLevel(cls, constant):
+        """
+        We want log levels to have defined ordering - the order of definition -
+        but they aren't value constants (the only value is the name).  This is
+        arguably a bug in Twisted, so this is just a workaround U{until
+        this is fixed in some way
+        <https://twistedmatrix.com/trac/ticket/6523>}.
+        """
+        return cls._levelPriorities[constant]
+
+LogLevel._levelPriorities = dict((constant, idx)
+                                 for (idx, constant) in
+                                     (enumerate(LogLevel.iterconstants())))
+
+
+
 #
 # Mappings to Python's logging module
 #
@@ -195,6 +212,7 @@
             return u"MESSAGE LOST"
 
 
+
 def formatUnformattableEvent(event, error):
     """
     Formats an event as a L{unicode} that describes the event
@@ -686,7 +704,8 @@
         level     = event["log_level"]
         namespace = event["log_namespace"]
 
-        if level < self.logLevelForNamespace(namespace):
+        if (LogLevel._priorityForLevel(level) <
+            LogLevel._priorityForLevel(self.logLevelForNamespace(namespace))):
             return PredicateResult.no
 
         return PredicateResult.maybe
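The `_levelPriorities` workaround above can be sketched independently of Twisted: name-only constants gain a defined ordering by enumerating them in definition order. The `LogLevel` class below is a hypothetical stand-in (the real one uses Twisted's `NamedConstant`), but the priority-map construction mirrors the patch exactly.

```python
class LogLevel(object):
    # Hypothetical stand-ins for NamedConstant values: each constant
    # carries only a name, so comparison needs an explicit ordering.
    debug, info, warn, error = "debug", "info", "warn", "error"

    @classmethod
    def iterconstants(cls):
        # Definition order is the intended severity order.
        return [cls.debug, cls.info, cls.warn, cls.error]

    @classmethod
    def _priorityForLevel(cls, constant):
        return cls._levelPriorities[constant]

# Assign each constant its index in definition order, exactly as the
# patch does for the real LogLevel class.
LogLevel._levelPriorities = dict(
    (constant, idx)
    for idx, constant in enumerate(LogLevel.iterconstants())
)
```

The predicate in the `shouldLogEvent` hunk then compares these integer priorities rather than the constants themselves.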

Modified: CalendarServer/branches/users/gaya/directorybacker/twext/web2/http.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twext/web2/http.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twext/web2/http.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -213,12 +213,14 @@
     """
     A L{Response} object that contains a redirect to another network location.
     """
-    def __init__(self, location):
+    def __init__(self, location, temporary=False):
         """
         @param location: the URI to redirect to.
+        @param temporary: whether the redirect is temporary (307 Temporary Redirect) or permanent (301 Moved Permanently)
         """
+        code = responsecode.TEMPORARY_REDIRECT if temporary else responsecode.MOVED_PERMANENTLY
         super(RedirectResponse, self).__init__(
-            responsecode.MOVED_PERMANENTLY,
+            code,
             "Document moved to %s." % (location,)
         )
 

Modified: CalendarServer/branches/users/gaya/directorybacker/twext/web2/test/test_http.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twext/web2/test/test_http.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twext/web2/test/test_http.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -17,6 +17,18 @@
 from twext.web2.channel.http import SSLRedirectRequest, HTTPFactory
 from twisted.internet.task import deferLater
 
+
+class RedirectResponseTestCase(unittest.TestCase):
+
+    def testTemporary(self):
+        """
+        Verify the "temporary" parameter sets the appropriate response code
+        """
+        req = http.RedirectResponse("http://example.com/", temporary=False)
+        self.assertEquals(req.code, responsecode.MOVED_PERMANENTLY)
+        req = http.RedirectResponse("http://example.com/", temporary=True)
+        self.assertEquals(req.code, responsecode.TEMPORARY_REDIRECT)
+
 class PreconditionTestCase(unittest.TestCase):
     def checkPreconditions(self, request, response, expectedResult, expectedCode,
                            **kw):

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/caldavxml.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/caldavxml.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/caldavxml.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -847,7 +847,7 @@
     name = "text-match"
 
 
-    def fromString(clazz, string, caseless=False): # @NoSelf
+    def fromString(clazz, string, caseless=False): #@NoSelf
         if caseless:
             caseless = "yes"
         else:

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/datafilters/peruserdata.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/datafilters/peruserdata.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/datafilters/peruserdata.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -176,6 +176,8 @@
         if ical.masterComponent() is not None:
             for rid in peruser_only_set:
                 ical_component = ical.deriveInstance(rid)
+                if ical_component is None:
+                    continue
                 peruser_component = peruser_recurrence_map[rid]
                 self._mergeBackComponent(ical_component, peruser_component)
                 ical.addComponent(ical_component)
@@ -310,7 +312,7 @@
             if rid is None:
                 continue
             derived = ical.deriveInstance(rid, newcomp=masterDerived)
-            if derived and derived == subcomponent:
+            if derived is not None and derived == subcomponent:
                 ical.removeComponent(subcomponent)
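The `derived is not None` change matters because a component object can be false-y while still being a valid value, so a bare truthiness test could silently skip it. A minimal sketch of the pitfall, using a hypothetical `Component` class (not the real iCalendar one) whose truth value depends on its contents:

```python
class Component(object):
    """Hypothetical component whose truthiness reflects its contents,
    e.g. a derived instance with no subcomponents yet."""

    def __init__(self, subitems):
        self.subitems = subitems

    def __bool__(self):            # Python 3 truth value
        return len(self.subitems) > 0
    __nonzero__ = __bool__         # Python 2 spelling

    def __eq__(self, other):
        return (isinstance(other, Component)
                and self.subitems == other.subitems)
```

With `empty = Component([])`, the old guard `if derived and derived == subcomponent` short-circuits on the false-y object, while `derived is not None` correctly lets the comparison run.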
 
 

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/datafilters/test/test_peruserdata.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/datafilters/test/test_peruserdata.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/datafilters/test/test_peruserdata.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -706,6 +706,89 @@
             self.assertEqual(str(PerUserDataFilter("").filter(item)), result02)
 
 
+    def test_public_oneuser_master_invalid_derived_override(self):
+
+        data = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ATTENDEE:mailto:user1 at example.com
+ATTENDEE:mailto:user2 at example.com
+DTSTAMP:20080601T120000Z
+ORGANIZER;CN=User 01:mailto:user1 at example.com
+RRULE:FREQ=DAILY
+END:VEVENT
+BEGIN:X-CALENDARSERVER-PERUSER
+UID:12345-67890
+X-CALENDARSERVER-PERUSER-UID:user01
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:Test-master
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+TRANSP:OPAQUE
+END:X-CALENDARSERVER-PERINSTANCE
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+RECURRENCE-ID:20080602T000000Z
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:Test-override
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+TRANSP:TRANSPARENT
+END:X-CALENDARSERVER-PERINSTANCE
+END:X-CALENDARSERVER-PERUSER
+END:VCALENDAR
+""".replace("\n", "\r\n")
+        result01 = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ATTENDEE:mailto:user1 at example.com
+ATTENDEE:mailto:user2 at example.com
+DTSTAMP:20080601T120000Z
+ORGANIZER;CN=User 01:mailto:user1 at example.com
+RRULE:FREQ=DAILY
+TRANSP:OPAQUE
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:Test-master
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n")
+        result02 = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ATTENDEE:mailto:user1 at example.com
+ATTENDEE:mailto:user2 at example.com
+DTSTAMP:20080601T120000Z
+ORGANIZER;CN=User 01:mailto:user1 at example.com
+RRULE:FREQ=DAILY
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n")
+
+        for item in (data, Component.fromString(data),):
+            self.assertEqual(str(PerUserDataFilter("user01").filter(item)), result01)
+        for item in (data, Component.fromString(data),):
+            self.assertEqual(str(PerUserDataFilter("user02").filter(item)), result02)
+        for item in (data, Component.fromString(data),):
+            self.assertEqual(str(PerUserDataFilter("").filter(item)), result02)
+
+
     def test_public_oneuser_master_derived_override_x2(self):
 
         data = """BEGIN:VCALENDAR

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/appleopendirectory.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/appleopendirectory.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/appleopendirectory.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -33,7 +33,6 @@
 from twext.web2.auth.digest import DigestedCredentials
 from twext.python.log import Logger
 
-from twistedcaldav.config import config
 from twistedcaldav.directory.cachingdirectory import CachingDirectoryService, \
     CachingDirectoryRecord
 from twistedcaldav.directory.directory import DirectoryService, DirectoryRecord
@@ -122,7 +121,22 @@
             self.restrictToGUID = True
         self.restrictedTimestamp = 0
 
+        # Set up the /Local/Default node if it's in the search path so we can 
+        # send custom queries to it
+        self.localNode = None
+        try:
+            if self.node == "/Search":
+                result = self.odModule.getNodeAttributes(self.directory, "/Search",
+                    (dsattributes.kDS1AttrSearchPath,))
+                if "/Local/Default" in result[dsattributes.kDS1AttrSearchPath]:
+                    try:
+                        self.localNode = self.odModule.odInit("/Local/Default")
+                    except self.odModule.ODError, e:
+                        self.log.error("Failed to open /Local/Default: %s" % (e,))
+        except AttributeError:
+            pass
 
+
     @property
     def restrictedGUIDs(self):
         """
@@ -568,7 +582,7 @@
         def collectResults(results):
             self.log.debug("Got back %d records from OD" % (len(results),))
             for key, value in results:
-                self.log.debug("OD result: %s %s" % (key, value))
+                # self.log.debug("OD result: {key} {value}", key=key, value=value)
                 try:
                     recordNodeName = value.get(
                         dsattributes.kDSNAttrMetaNodeLocation)
@@ -665,10 +679,8 @@
             for compound in queries:
                 compound = compound.generate()
 
-                self.log.debug("Calling OD: Types %s, Query %s" %
-                    (recordTypes, compound))
-
                 try:
+                    startTime = time.time()
                     queryResults = lookupMethod(
                         directory,
                         compound,
@@ -676,6 +688,7 @@
                         recordTypes,
                         attrs,
                     )
+                    totalTime = time.time() - startTime
 
                     newSet = set()
                     for recordName, data in queryResults:
@@ -684,6 +697,8 @@
                             byGUID[guid] = (recordName, data)
                             newSet.add(guid)
 
+                    self.log.debug("Attendee OD query: Types %s, Query %s, %.2f sec, %d results" %
+                        (recordTypes, compound, totalTime, len(queryResults)))
                     sets.append(newSet)
 
                 except self.odModule.ODError, e:
@@ -698,7 +713,8 @@
                     results.append((data[dsattributes.kDSNAttrRecordName], data))
             return results
 
-        queries = buildQueriesFromTokens(tokens, self._ODFields)
+        localQueries = buildLocalQueriesFromTokens(tokens, self._ODFields)
+        nestedQuery = buildNestedQueryFromTokens(tokens, self._ODFields)
 
         # Starting with the record types corresponding to the context...
         recordTypes = self.recordTypesForSearchContext(context)
@@ -708,9 +724,13 @@
         recordTypes = [self._toODRecordTypes[r] for r in recordTypes]
 
         if recordTypes:
+            # Perform the complex/nested query.  If there was more than one
+            # token, this won't match anything in /Local, therefore we run
+            # the un-nested queries below and AND the results ourselves in
+            # multiQuery.
             results = multiQuery(
                 self.directory,
-                queries,
+                [nestedQuery],
                 recordTypes,
                 [
                     dsattributes.kDS1AttrGeneratedUID,
@@ -726,6 +746,30 @@
                     dsattributes.kDSNAttrNestedGroups,
                 ]
             )
+            if self.localNode is not None and len(tokens) > 1:
+                # /Local is in our search path and the complex query above
+                # would not have matched anything in /Local.  So now run
+                # the un-nested queries.
+                results.extend(
+                    multiQuery(
+                        self.localNode,
+                        localQueries,
+                        recordTypes,
+                        [
+                            dsattributes.kDS1AttrGeneratedUID,
+                            dsattributes.kDSNAttrRecordName,
+                            dsattributes.kDSNAttrAltSecurityIdentities,
+                            dsattributes.kDSNAttrRecordType,
+                            dsattributes.kDS1AttrDistinguishedName,
+                            dsattributes.kDS1AttrFirstName,
+                            dsattributes.kDS1AttrLastName,
+                            dsattributes.kDSNAttrEMailAddress,
+                            dsattributes.kDSNAttrMetaNodeLocation,
+                            dsattributes.kDSNAttrGroupMembers,
+                            dsattributes.kDSNAttrNestedGroups,
+                        ]
+                    )
+                )
             return succeed(collectResults(results))
         else:
             return succeed([])
@@ -744,7 +788,7 @@
         def collectResults(results):
             self.log.debug("Got back %d records from OD" % (len(results),))
             for key, value in results:
-                self.log.debug("OD result: %s %s" % (key, value))
+                # self.log.debug("OD result: {key} {value}", key=key, value=value)
                 try:
                     recordNodeName = value.get(
                         dsattributes.kDSNAttrMetaNodeLocation)
@@ -1065,10 +1109,7 @@
 
             # If restrictToGroup is in effect, all guids which are not a member
             # of that group are disabled (overriding the augments db).
-            if (
-                self.restrictedGUIDs is not None and
-                config.Scheduling.iMIP.Username != recordShortName
-            ):
+            if (self.restrictedGUIDs is not None):
                 unrestricted = recordGUID in self.restrictedGUIDs
             else:
                 unrestricted = True
@@ -1302,7 +1343,7 @@
 
 
 
-def buildQueriesFromTokens(tokens, mapping):
+def buildLocalQueriesFromTokens(tokens, mapping):
     """
     OD /Local doesn't support nested complex queries, so create a list of
     complex queries that will be ANDed together in recordsMatchingTokens()
@@ -1318,20 +1359,54 @@
     if len(tokens) == 0:
         return None
 
-    fields = ["fullName", "emailAddresses"]
+    fields = [
+        ("fullName", dsattributes.eDSContains),
+        ("emailAddresses", dsattributes.eDSStartsWith),
+    ]
 
     results = []
     for token in tokens:
         queries = []
-        for field in fields:
+        for field, comparison in fields:
             ODField = mapping[field]['odField']
-            query = dsquery.match(ODField, token, "contains")
+            query = dsquery.match(ODField, token, comparison)
             queries.append(query)
         results.append(dsquery.expression(dsquery.expression.OR, queries))
     return results
 
 
+def buildNestedQueryFromTokens(tokens, mapping):
+    """
+    Build a DS query expression such that all the tokens must appear in either
+    the fullName (anywhere) or emailAddresses (at the beginning).
+    
+    @param tokens: The tokens to search on
+    @type tokens: C{list} of C{str}
+    @param mapping: The mapping of DirectoryRecord attributes to OD attributes
+    @type mapping: C{dict}
+    @return: The nested expression object
+    @rtype: dsquery.expression
+    """
 
+    if len(tokens) == 0:
+        return None
+
+    fields = [
+        ("fullName", dsattributes.eDSContains),
+        ("emailAddresses", dsattributes.eDSStartsWith),
+    ]
+
+    outer = []
+    for token in tokens:
+        inner = []
+        for field, comparison in fields:
+            ODField = mapping[field]['odField']
+            query = dsquery.match(ODField, token, comparison)
+            inner.append(query)
+        outer.append(dsquery.expression(dsquery.expression.OR, inner))
+    return dsquery.expression(dsquery.expression.AND, outer)
+
+
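The two builders above differ only in whether the per-token OR expressions are nested under a single AND. A simplified sketch of the nested form, using plain tuples in place of `dsquery` expression objects (the field names and comparison kinds mirror the patch; the tuple representation and the example OD attribute names are invented for illustration):

```python
def build_nested_query(tokens, mapping):
    """Every token must match fullName (contains) or emailAddresses
    (starts-with).  Result shape: ("AND", [("OR", [match, ...]), ...])."""
    if not tokens:
        return None
    fields = [
        ("fullName", "contains"),
        ("emailAddresses", "starts-with"),
    ]
    outer = []
    for token in tokens:
        # One OR clause per token, spanning both searchable fields.
        inner = [(mapping[field], token, comparison)
                 for field, comparison in fields]
        outer.append(("OR", inner))
    # All tokens must match somewhere, hence the enclosing AND.
    return ("AND", outer)

# Hypothetical attribute mapping, standing in for self._ODFields.
mapping = {
    "fullName": "dsAttrTypeStandard:RealName",
    "emailAddresses": "dsAttrTypeStandard:EMailAddress",
}
```

The un-nested `buildLocalQueriesFromTokens` variant returns the list of OR clauses directly, because OD's /Local node cannot evaluate the enclosing AND, so the caller intersects the per-query result sets itself.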
 class OpenDirectoryRecord(CachingDirectoryRecord):
     """
     OpenDirectory implementation of L{IDirectoryRecord}.

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/directory.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/directory.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/directory.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -88,6 +88,9 @@
     recordType_resources = "resources"
 
     searchContext_location = "location"
+    searchContext_resource = "resource"
+    searchContext_user     = "user"
+    searchContext_group    = "group"
     searchContext_attendee = "attendee"
 
     aggregateService = None
@@ -272,13 +275,19 @@
         """
         Map calendarserver-principal-search REPORT context value to applicable record types
 
-        @param context: The context value to map (either "location" or "attendee")
+        @param context: The context value to map
         @type context: C{str}
         @returns: The list of record types the context maps to
         @rtype: C{list} of C{str}
         """
         if context == self.searchContext_location:
             recordTypes = [self.recordType_locations]
+        elif context == self.searchContext_resource:
+            recordTypes = [self.recordType_resources]
+        elif context == self.searchContext_user:
+            recordTypes = [self.recordType_users]
+        elif context == self.searchContext_group:
+            recordTypes = [self.recordType_groups]
         elif context == self.searchContext_attendee:
             recordTypes = [self.recordType_users, self.recordType_groups,
                 self.recordType_resources]
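The expanded elif chain above is equivalent to a table lookup. A sketch of the same mapping as a dict, using the context and record-type strings from the patch (the empty-list fallback for unknown contexts is an assumption, since the method's else branch is outside this hunk):

```python
RECORD_TYPES_FOR_CONTEXT = {
    "location": ["locations"],
    "resource": ["resources"],
    "user": ["users"],
    "group": ["groups"],
    "attendee": ["users", "groups", "resources"],
}

def record_types_for_search_context(context):
    # Hypothetical fallback: unknown contexts map to no record types.
    return RECORD_TYPES_FOR_CONTEXT.get(context, [])
```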
@@ -606,15 +615,15 @@
     log = Logger()
 
     def __init__(self, namespace, pickle=True, no_invalidation=False,
-        key_normalization=True, expireSeconds=0):
+        key_normalization=True, expireSeconds=0, lockSeconds=60):
 
         super(GroupMembershipCache, self).__init__(namespace, pickle=pickle,
             no_invalidation=no_invalidation,
             key_normalization=key_normalization)
 
         self.expireSeconds = expireSeconds
+        self.lockSeconds = lockSeconds
 
-
     def setGroupsFor(self, guid, memberships):
         self.log.debug("set groups-for %s : %s" % (guid, memberships))
         return self.set("groups-for:%s" %
@@ -651,7 +660,36 @@
         returnValue(value is not None)
 
 
+    def acquireLock(self):
+        """
+        Acquire a memcached lock named group-cacher-lock
 
+        @return: Deferred firing True if successful, False if someone already has
+            the lock
+        """
+        self.log.debug("add group-cacher-lock")
+        return self.add("group-cacher-lock", "1", expireTime=self.lockSeconds)
+
+
+
+    def extendLock(self):
+        """
+        Update the expiration time of the memcached lock
+        @return: Deferred firing True if successful, False otherwise
+        """
+        self.log.debug("extend group-cacher-lock")
+        return self.set("group-cacher-lock", "1", expireTime=self.lockSeconds)
+
+
+    def releaseLock(self):
+        """
+        Release the memcached lock
+        @return: Deferred firing True if successful, False otherwise
+        """
+        self.log.debug("delete group-cacher-lock")
+        return self.delete("group-cacher-lock")
+
+
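The three lock methods above rely on memcached semantics: `add` fails if the key already exists, which is what gives mutual exclusion, while `set` unconditionally refreshes the expiry and `delete` releases. A minimal in-memory sketch of that protocol (no memcached, no Deferreds; the injected clock is for testability and is not part of the real API):

```python
import time

class MemcacheLock(object):
    """In-memory stand-in for the group-cacher-lock protocol."""

    def __init__(self, lockSeconds=60, clock=time.time):
        self.lockSeconds = lockSeconds
        self.clock = clock
        self._expiry = None   # None means unlocked

    def acquireLock(self):
        # memcached "add": succeeds only if the key is absent or expired.
        if self._expiry is not None and self._expiry > self.clock():
            return False
        self._expiry = self.clock() + self.lockSeconds
        return True

    def extendLock(self):
        # memcached "set": unconditionally refreshes the expiry.
        self._expiry = self.clock() + self.lockSeconds
        return True

    def releaseLock(self):
        # memcached "delete": drops the key.
        self._expiry = None
        return True
```

The expiry is the safety net: if the updater crashes while holding the lock, another process can acquire it once `lockSeconds` elapse, which is why the long-running update loop below calls `extendLock` between slow steps.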
 class GroupMembershipCacheUpdater(object):
     """
     Responsible for updating memcached with group memberships.  This will run
@@ -661,7 +699,7 @@
     log = Logger()
 
     def __init__(self, proxyDB, directory, updateSeconds, expireSeconds,
-        cache=None, namespace=None, useExternalProxies=False,
+        lockSeconds, cache=None, namespace=None, useExternalProxies=False,
         externalProxiesSource=None):
         self.proxyDB = proxyDB
         self.directory = directory
@@ -673,7 +711,8 @@
 
         if cache is None:
             assert namespace is not None, "namespace must be specified if GroupMembershipCache is not provided"
-            cache = GroupMembershipCache(namespace, expireSeconds=expireSeconds)
+            cache = GroupMembershipCache(namespace, expireSeconds=expireSeconds,
+                lockSeconds=lockSeconds)
         self.cache = cache
 
 
@@ -761,6 +800,8 @@
 
         # TODO: add memcached eviction protection
 
+        useLock = True
+
         # See if anyone has completely populated the group membership cache
         isPopulated = (yield self.cache.isPopulated())
 
@@ -771,6 +812,9 @@
                 self.log.info("Group membership cache is already populated")
                 returnValue((fast, 0))
 
+            # We don't care what others are doing right now, we need to update
+            useLock = False
+
         self.log.info("Updating group membership cache")
 
         dataRoot = FilePath(config.DataRoot)
@@ -788,6 +832,14 @@
             previousMembers = pickle.loads(membershipsCacheFile.getContent())
             callGroupsChanged = True
 
+        if useLock:
+            self.log.info("Attempting to acquire group membership cache lock")
+            acquiredLock = (yield self.cache.acquireLock())
+            if not acquiredLock:
+                self.log.info("Group membership cache lock held by another process")
+                returnValue((fast, 0))
+            self.log.info("Acquired lock")
+
         if not fast and self.useExternalProxies:
 
             # Load in cached copy of external proxies so we can diff against them
@@ -797,11 +849,17 @@
                     (extProxyCacheFile.path,))
                 previousAssignments = pickle.loads(extProxyCacheFile.getContent())
 
+            if useLock:
+                yield self.cache.extendLock()
+
             self.log.info("Retrieving proxy assignments from directory")
             assignments = self.externalProxiesSource()
             self.log.info("%d proxy assignments retrieved from directory" %
                 (len(assignments),))
 
+            if useLock:
+                yield self.cache.extendLock()
+
             changed, removed = diffAssignments(previousAssignments, assignments)
             # changed is the list of proxy assignments (either new or updates).
             # removed is the list of principals who used to have an external
@@ -957,6 +1015,10 @@
 
         yield self.cache.setPopulatedMarker()
 
+        if useLock:
+            self.log.info("Releasing lock")
+            yield self.cache.releaseLock()
+
         self.log.info("Group memberships cache updated")
 
         returnValue((fast, len(members), len(changedMembers)))
@@ -975,16 +1037,19 @@
 
         groupCacher = getattr(self.transaction, "_groupCacher", None)
         if groupCacher is not None:
+
+            # Schedule next update
+            notBefore = (datetime.datetime.utcnow() +
+                datetime.timedelta(seconds=groupCacher.updateSeconds))
+            log.debug("Scheduling next group cacher update: %s" % (notBefore,))
+            yield self.transaction.enqueue(GroupCacherPollingWork,
+                notBefore=notBefore)
+
             try:
-                yield groupCacher.updateCache()
+                groupCacher.updateCache()
             except Exception, e:
                 log.error("Failed to update group membership cache (%s)" % (e,))
-            finally:
-                notBefore = (datetime.datetime.utcnow() +
-                    datetime.timedelta(seconds=groupCacher.updateSeconds))
-                log.debug("Scheduling next group cacher update: %s" % (notBefore,))
-                yield self.transaction.enqueue(GroupCacherPollingWork,
-                    notBefore=notBefore)
+
         else:
             notBefore = (datetime.datetime.utcnow() +
                 datetime.timedelta(seconds=10))

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/ldapdirectory.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/ldapdirectory.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/ldapdirectory.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -56,8 +56,9 @@
     CachingDirectoryRecord)
 from twistedcaldav.directory.directory import DirectoryConfigurationError
 from twistedcaldav.directory.augment import AugmentRecord
-from twistedcaldav.directory.util import splitIntoBatches
+from twistedcaldav.directory.util import splitIntoBatches, normalizeUUID
 from twisted.internet.defer import succeed, inlineCallbacks, returnValue
+from twisted.internet.threads import deferToThread
 from twext.python.log import Logger
 from twext.web2.http import HTTPError, StatusResponse
 from twext.web2 import responsecode
@@ -347,7 +348,7 @@
             records.append(record)
 
         if numMissingGuids:
-            self.log.info("{num} {recordType] records are missing {attr}",
+            self.log.info("{num} {recordType} records are missing {attr}",
                 num=numMissingGuids, recordType=recordType, attr=guidAttr)
 
         return records
@@ -404,12 +405,15 @@
             dn = normalizeDNstr(dn)
             guid = self._getUniqueLdapAttribute(attrs, guidAttr)
             if guid:
+                guid = normalizeUUID(guid)
                 readDelegate = self._getUniqueLdapAttribute(attrs, readAttr)
                 if readDelegate:
+                    readDelegate = normalizeUUID(readDelegate)
                     assignments.append(("%s#calendar-proxy-read" % (guid,),
                         [readDelegate]))
                 writeDelegate = self._getUniqueLdapAttribute(attrs, writeAttr)
                 if writeDelegate:
+                    writeDelegate = normalizeUUID(writeDelegate)
                     assignments.append(("%s#calendar-proxy-write" % (guid,),
                         [writeDelegate]))
 
@@ -781,6 +785,7 @@
             if not guid:
                 self.log.debug("LDAP data for %s is missing guid attribute %s" % (shortNames, guidAttr))
                 raise MissingGuidException()
+            guid = normalizeUUID(guid)
 
         # Find or build email
         # (The emailAddresses mapping is a list of ldap fields)
@@ -1066,8 +1071,10 @@
                         % (recordTypes, indexType, indexKey))
 
 
-    def recordsMatchingTokens(self, tokens, context=None):
+    def recordsMatchingTokens(self, tokens, context=None, limitResults=50, timeoutSeconds=10):
         """
+        # TODO: hook up limitResults to the client limit in the query
+
         @param tokens: The tokens to search on
         @type tokens: C{list} of C{str} (utf-8 bytes)
         @param context: An indication of what the end user is searching
@@ -1086,29 +1093,34 @@
         are considered.
         """
         self.log.debug("Performing calendar user search for %s (%s)" % (tokens, context))
-
+        startTime = time.time()
         records = []
         recordTypes = self.recordTypesForSearchContext(context)
         recordTypes = [r for r in recordTypes if r in self.recordTypes()]
-        guidAttr = self.rdnSchema["guidAttr"]
 
+        typeCounts = {}
         for recordType in recordTypes:
+            if limitResults == 0:
+                self.log.debug("LDAP search aggregate limit reached")
+                break
+            typeCounts[recordType] = 0
             base = self.typeDNs[recordType]
             scope = ldap.SCOPE_SUBTREE
-            filterstr = buildFilterFromTokens(self.rdnSchema[recordType]["mapping"],
-                tokens)
+            extraFilter = self.rdnSchema[recordType]["filter"]
+            filterstr = buildFilterFromTokens(recordType, self.rdnSchema[recordType]["mapping"],
+                tokens, extra=extraFilter)
 
             if filterstr is not None:
                 # Query the LDAP server
-                self.log.debug("LDAP search %s %s %s" %
-                    (ldap.dn.dn2str(base), scope, filterstr))
+                self.log.debug("LDAP search %s %s (limit=%d)" %
+                    (ldap.dn.dn2str(base), filterstr, limitResults))
                 results = self.timedSearch(ldap.dn.dn2str(base), scope,
                     filterstr=filterstr, attrlist=self.attrlist,
-                    timeoutSeconds=self.requestTimeoutSeconds,
-                    resultLimit=self.requestResultsLimit)
-                self.log.debug("LDAP search returned %d results" % (len(results),))
+                    timeoutSeconds=timeoutSeconds,
+                    resultLimit=limitResults)
                 numMissingGuids = 0
                 numMissingRecordNames = 0
+                numNotEnabled = 0
                 for dn, attrs in results:
                     dn = normalizeDNstr(dn)
                     # Skip if group restriction is in place and guid is not
@@ -1124,9 +1136,12 @@
                         # not include in principal property search results
                         if (recordType != self.recordType_groups):
                             if not record.enabledForCalendaring:
+                                numNotEnabled += 1
                                 continue
 
                         records.append(record)
+                        typeCounts[recordType] += 1
+                        limitResults -= 1
 
                     except MissingGuidException:
                         numMissingGuids += 1
@@ -1134,18 +1149,16 @@
                     except MissingRecordNameException:
                         numMissingRecordNames += 1
 
-                if numMissingGuids:
-                    self.log.warn("%d %s records are missing %s" %
-                        (numMissingGuids, recordType, guidAttr))
+                self.log.debug("LDAP search returned %d results, %d usable" % (len(results), typeCounts[recordType]))
 
-                if numMissingRecordNames:
-                    self.log.warn("%d %s records are missing record name" %
-                        (numMissingRecordNames, recordType))
 
-        self.log.debug("Calendar user search matched %d records" % (len(records),))
+        typeCountsStr = ", ".join(["%s:%d" % (rt, ct) for (rt, ct) in typeCounts.iteritems()])
+        totalTime = time.time() - startTime
+        self.log.info("Calendar user search for %s matched %d records (%s) in %.2f seconds" % (tokens, len(records), typeCountsStr, totalTime))
         return succeed(records)
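The limit bookkeeping in the hunk above - one shared `limitResults` budget drained across all record types, with per-type counts for logging - can be sketched without LDAP. Here canned candidate lists stand in for `timedSearch` results, and the `(recordType, candidates)` pairs are hypothetical:

```python
def records_matching_tokens(results_by_type, limitResults=50):
    """Drain a shared result budget across record types, keeping
    per-type counts as the patched method does.

    results_by_type: ordered list of (recordType, candidate records).
    """
    records = []
    typeCounts = {}
    for recordType, candidates in results_by_type:
        if limitResults == 0:
            break                      # aggregate limit reached
        typeCounts[recordType] = 0
        # Each type's query is capped at whatever budget remains.
        for record in candidates[:limitResults]:
            records.append(record)
            typeCounts[recordType] += 1
            limitResults -= 1
    return records, typeCounts
```

This matches the patch's behavior of passing the remaining budget as `resultLimit` to each per-type search and breaking out entirely once it hits zero.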
 
 
+    @inlineCallbacks
     def recordsMatchingFields(self, fields, operand="or", recordType=None):
         """
         Carries out the work of a principal-property-search against LDAP
@@ -1192,10 +1205,10 @@
                 # Query the LDAP server
                 self.log.debug("LDAP search %s %s %s" %
                     (ldap.dn.dn2str(base), scope, filterstr))
-                results = self.timedSearch(ldap.dn.dn2str(base), scope,
+                results = (yield deferToThread(self.timedSearch, ldap.dn.dn2str(base), scope,
                     filterstr=filterstr, attrlist=self.attrlist,
                     timeoutSeconds=self.requestTimeoutSeconds,
-                    resultLimit=self.requestResultsLimit)
+                    resultLimit=self.requestResultsLimit))
                 self.log.debug("LDAP search returned %d results" % (len(results),))
                 numMissingGuids = 0
                 numMissingRecordNames = 0
@@ -1233,7 +1246,7 @@
                         (numMissingRecordNames, recordType))
 
         self.log.debug("Principal property search matched %d records" % (len(records),))
-        return succeed(records)
+        returnValue(records)
 
 
     @inlineCallbacks
@@ -1416,44 +1429,54 @@
     return filterstr
 
 
-def buildFilterFromTokens(mapping, tokens):
+def buildFilterFromTokens(recordType, mapping, tokens, extra=None):
     """
     Create an LDAP filter string from a list of query tokens.  Each token is
     searched for in each LDAP attribute corresponding to "fullName" and
     "emailAddresses" (could be multiple LDAP fields for either).
 
+    @param recordType: The recordType to use to customize the filter
     @param mapping: A dict mapping internal directory attribute names to ldap names.
     @type mapping: C{dict}
     @param tokens: The list of tokens to search for
     @type tokens: C{list}
+    @param extra: Extra filter to "and" into the final filter
+    @type extra: C{str} or None
     @return: An LDAP filterstr
     @rtype: C{str}
     """
 
     filterStr = None
-    tokens = [ldapEsc(t) for t in tokens]
+    tokens = [ldapEsc(t) for t in tokens if len(t) > 2]
     if len(tokens) == 0:
         return None
 
-    attributes = ["fullName", "emailAddresses"]
+    attributes = [
+        ("fullName", "(%s=*%s*)"),
+        ("emailAddresses", "(%s=%s*)"),
+    ]
 
     ldapFields = []
-    for attribute in attributes:
+    for attribute, template in attributes:
         ldapField = mapping.get(attribute, None)
         if ldapField:
             if isinstance(ldapField, str):
-                ldapFields.append(ldapField)
+                ldapFields.append((ldapField, template))
             else:
-                ldapFields.extend(ldapField)
+                for lf in ldapField:
+                    ldapFields.append((lf, template))
 
     if len(ldapFields) == 0:
         return None
 
     tokenFragments = []
+    if extra:
+        tokenFragments.append(extra)
+
     for token in tokens:
         fragments = []
-        for ldapField in ldapFields:
-            fragments.append("(%s=*%s*)" % (ldapField, token))
+        for ldapField, template in ldapFields:
+            fragments.append(template % (ldapField, token))
         if len(fragments) == 1:
             tokenFragment = fragments[0]
         else:

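The hunk above changes the per-attribute match templates: full names keep substring matching ("*foo*") while email addresses switch to prefix matching ("foo*"), short tokens are dropped, and an optional "extra" clause is ANDed in. A simplified, hypothetical sketch of that combination logic (standalone function and attribute names are illustrative, not the actual buildFilterFromTokens):

```python
# Hypothetical sketch of the filter construction introduced above.
# "cn"/"mail" stand in for the mapped fullName/emailAddresses attributes.
def build_token_filter(tokens, extra=None):
    attributes = [
        ("cn", "(%s=*%s*)"),   # fullName: substring match
        ("mail", "(%s=%s*)"),  # emailAddresses: prefix match
    ]
    tokens = [t for t in tokens if len(t) > 2]  # tokens of 1-2 chars dropped
    if not tokens:
        return None
    fragments = []
    if extra:
        fragments.append(extra)  # extra clause is ANDed with the token clauses
    for token in tokens:
        parts = [template % (attr, token) for attr, template in attributes]
        fragments.append("(|%s)" % "".join(parts))
    if len(fragments) == 1:
        return fragments[0]
    return "(&%s)" % "".join(fragments)
```

With these templates, a single token yields one OR clause; multiple tokens (or an extra clause) are wrapped in an AND, matching the expected strings in the updated tests.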
Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/principal.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/principal.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/principal.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -232,6 +232,7 @@
             ("emailAddresses", None, "Email Addresses",
             customxml.EmailAddressSet),
     }
+    _fieldList = [v for _ignore_k, v in sorted(_fieldMap.iteritems(), key=lambda x:x[0])]
 
 
     def propertyToField(self, property, match):
@@ -250,7 +251,7 @@
 
     def principalSearchPropertySet(self):
         props = []
-        for _ignore_field, _ignore_converter, description, xmlClass in self._fieldMap.itervalues():
+        for _ignore_field, _ignore_converter, description, xmlClass in self._fieldList:
             props.append(
                 davxml.PrincipalSearchProperty(
                     davxml.PropertyContainer(

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_buildquery.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_buildquery.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_buildquery.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -15,7 +15,8 @@
 ##
 
 from twistedcaldav.test.util import TestCase
-from twistedcaldav.directory.appleopendirectory import buildQueries, buildQueriesFromTokens, OpenDirectoryService
+from twistedcaldav.directory.appleopendirectory import (buildQueries,
+    buildLocalQueriesFromTokens, OpenDirectoryService, buildNestedQueryFromTokens)
 from calendarserver.platform.darwin.od import dsattributes
 
 class BuildQueryTests(TestCase):
@@ -104,22 +105,52 @@
             }
         )
 
-    def test_buildQueryFromTokens(self):
-        results = buildQueriesFromTokens([], OpenDirectoryService._ODFields)
+
+    def test_buildLocalQueryFromTokens(self):
+        """
+        Verify generation of the simpler queries passed to /Local/Default
+        """
+        results = buildLocalQueriesFromTokens([], OpenDirectoryService._ODFields)
         self.assertEquals(results, None)
 
-        results = buildQueriesFromTokens(["foo"], OpenDirectoryService._ODFields)
+        results = buildLocalQueriesFromTokens(["foo"], OpenDirectoryService._ODFields)
         self.assertEquals(
             results[0].generate(),
-            "(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=*foo*))"
+            "(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*))"
         )
 
-        results = buildQueriesFromTokens(["foo", "bar"], OpenDirectoryService._ODFields)
+        results = buildLocalQueriesFromTokens(["foo", "bar"], OpenDirectoryService._ODFields)
         self.assertEquals(
             results[0].generate(),
-            "(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=*foo*))"
+            "(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*))"
         )
         self.assertEquals(
             results[1].generate(),
-            "(|(dsAttrTypeStandard:RealName=*bar*)(dsAttrTypeStandard:EMailAddress=*bar*))"
+            "(|(dsAttrTypeStandard:RealName=*bar*)(dsAttrTypeStandard:EMailAddress=bar*))"
         )
+
+
+    def test_buildNestedQueryFromTokens(self):
+        """
+        Verify generation of the complex nested queries
+        """
+        query = buildNestedQueryFromTokens([], OpenDirectoryService._ODFields)
+        self.assertEquals(query, None)
+
+        query = buildNestedQueryFromTokens(["foo"], OpenDirectoryService._ODFields)
+        self.assertEquals(
+            query.generate(),
+            "(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*))"
+        )
+
+        query = buildNestedQueryFromTokens(["foo", "bar"], OpenDirectoryService._ODFields)
+        self.assertEquals(
+            query.generate(),
+            "(&(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*))(|(dsAttrTypeStandard:RealName=*bar*)(dsAttrTypeStandard:EMailAddress=bar*)))"
+        )
+
+        query = buildNestedQueryFromTokens(["foo", "bar", "baz"], OpenDirectoryService._ODFields)
+        self.assertEquals(
+            query.generate(),
+            "(&(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*))(|(dsAttrTypeStandard:RealName=*bar*)(dsAttrTypeStandard:EMailAddress=bar*))(|(dsAttrTypeStandard:RealName=*baz*)(dsAttrTypeStandard:EMailAddress=baz*)))"
+        )

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_directory.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_directory.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_directory.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -173,7 +173,7 @@
         self.directoryService.groupMembershipCache = cache
 
         updater = GroupMembershipCacheUpdater(
-            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30,
+            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30, 30,
             cache=cache, useExternalProxies=False)
 
         # Exercise getGroups()
@@ -240,6 +240,18 @@
             )
         )
 
+        # Prevent an update by locking the cache
+        acquiredLock = (yield cache.acquireLock())
+        self.assertTrue(acquiredLock)
+        self.assertEquals((False, 0), (yield updater.updateCache()))
+
+        # You can't lock when already locked:
+        acquiredLockAgain = (yield cache.acquireLock())
+        self.assertFalse(acquiredLockAgain)
+
+        # Allow an update by unlocking the cache
+        yield cache.releaseLock()
+
         self.assertEquals((False, 9, 9), (yield updater.updateCache()))
 
         # Verify cache is populated:
@@ -372,7 +384,7 @@
             ]
 
         updater = GroupMembershipCacheUpdater(
-            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30,
+            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30, 30,
             cache=cache, useExternalProxies=True,
             externalProxiesSource=fakeExternalProxies)
 
@@ -456,7 +468,7 @@
             ]
 
         updater = GroupMembershipCacheUpdater(
-            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30,
+            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30, 30,
             cache=cache, useExternalProxies=True,
             externalProxiesSource=fakeExternalProxiesRemoved)
 
@@ -623,7 +635,7 @@
         self.directoryService.groupMembershipCache = cache
 
         updater = GroupMembershipCacheUpdater(
-            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30,
+            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30, 30,
             cache=cache)
 
         dataRoot = FilePath(config.DataRoot)
@@ -636,6 +648,10 @@
         # time), but since the snapshot doesn't exist we fault in from the
         # directory (fast now is False), and snapshot will get created
 
+        # Note that because fast=True and isPopulated() is False, locking is
+        # ignored:
+        yield cache.acquireLock()
+
         self.assertFalse((yield cache.isPopulated()))
         fast, numMembers, numChanged = (yield updater.updateCache(fast=True))
         self.assertEquals(fast, False)
@@ -644,6 +660,8 @@
         self.assertTrue(snapshotFile.exists())
         self.assertTrue((yield cache.isPopulated()))
 
+        yield cache.releaseLock()
+
         # Try another fast update where the snapshot already exists (as in a
         # server-restart scenario), which will only read from the snapshot
         # as indicated by the return value for "fast".  Note that the cache
@@ -824,7 +842,33 @@
         self.assertEquals(records[0].shortNames[0], "apollo")
 
 
+    def test_recordTypesForSearchContext(self):
+        self.assertEquals(
+            [self.directoryService.recordType_locations],
+            self.directoryService.recordTypesForSearchContext("location")
+        )
+        self.assertEquals(
+            [self.directoryService.recordType_resources],
+            self.directoryService.recordTypesForSearchContext("resource")
+        )
+        self.assertEquals(
+            [self.directoryService.recordType_users],
+            self.directoryService.recordTypesForSearchContext("user")
+        )
+        self.assertEquals(
+            [self.directoryService.recordType_groups],
+            self.directoryService.recordTypesForSearchContext("group")
+        )
+        self.assertEquals(
+            set([
+                self.directoryService.recordType_resources,
+                self.directoryService.recordType_users,
+                self.directoryService.recordType_groups
+            ]),
+            set(self.directoryService.recordTypesForSearchContext("attendee"))
+        )
 
+
 class GUIDTests(TestCase):
 
     def setUp(self):

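The new assertions in test_directory.py exercise a locking contract: acquireLock() succeeds only when the lock is free, a second acquire fails, and updateCache() does no work while the lock is held. A plain in-memory stand-in illustrating that contract (class and method names here are illustrative stand-ins, not the memcached-backed GroupMembershipCache):

```python
# Hypothetical in-memory sketch of the lock semantics the tests assert.
class LockGuardedCache(object):
    def __init__(self):
        self._locked = False

    def acquireLock(self):
        if self._locked:
            return False  # already held: second acquire fails
        self._locked = True
        return True

    def releaseLock(self):
        self._locked = False


class Updater(object):
    def __init__(self, cache):
        self.cache = cache

    def updateCache(self):
        # Mirrors the (False, 0) result asserted when the cache is locked:
        # an update that cannot take the lock is skipped entirely.
        if not self.cache.acquireLock():
            return (False, 0)
        try:
            return (False, 9, 9)  # placeholder "members updated" result
        finally:
            self.cache.releaseLock()
```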
Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_ldapdirectory.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_ldapdirectory.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_ldapdirectory.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -207,7 +207,8 @@
                         "fullName" : "cn",
                         "emailAddresses" : "mail",
                     },
-                    "expected" : "(|(cn=*foo*)(mail=*foo*))",
+                    "expected" : "(&(a=b)(|(cn=*foo*)(mail=foo*)))",
+                    "extra" : "(a=b)",
                 },
                 {
                     "tokens" : ["foo"],
@@ -215,7 +216,8 @@
                         "fullName" : "cn",
                         "emailAddresses" : ["mail", "mailAliases"],
                     },
-                    "expected" : "(|(cn=*foo*)(mail=*foo*)(mailAliases=*foo*))",
+                    "expected" : "(&(a=b)(|(cn=*foo*)(mail=foo*)(mailAliases=foo*)))",
+                    "extra" : "(a=b)",
                 },
                 {
                     "tokens" : [],
@@ -224,18 +226,21 @@
                         "emailAddresses" : "mail",
                     },
                     "expected" : None,
+                    "extra" : None,
                 },
                 {
                     "tokens" : ["foo", "bar"],
                     "mapping" : { },
                     "expected" : None,
+                    "extra" : None,
                 },
                 {
                     "tokens" : ["foo", "bar"],
                     "mapping" : {
                         "emailAddresses" : "mail",
                     },
-                    "expected" : "(&(mail=*foo*)(mail=*bar*))",
+                    "expected" : "(&(mail=foo*)(mail=bar*))",
+                    "extra" : None,
                 },
                 {
                     "tokens" : ["foo", "bar"],
@@ -243,7 +248,8 @@
                         "fullName" : "cn",
                         "emailAddresses" : "mail",
                     },
-                    "expected" : "(&(|(cn=*foo*)(mail=*foo*))(|(cn=*bar*)(mail=*bar*)))",
+                    "expected" : "(&(|(cn=*foo*)(mail=foo*))(|(cn=*bar*)(mail=bar*)))",
+                    "extra" : None,
                 },
                 {
                     "tokens" : ["foo", "bar"],
@@ -251,7 +257,8 @@
                         "fullName" : "cn",
                         "emailAddresses" : ["mail", "mailAliases"],
                     },
-                    "expected" : "(&(|(cn=*foo*)(mail=*foo*)(mailAliases=*foo*))(|(cn=*bar*)(mail=*bar*)(mailAliases=*bar*)))",
+                    "expected" : "(&(|(cn=*foo*)(mail=foo*)(mailAliases=foo*))(|(cn=*bar*)(mail=bar*)(mailAliases=bar*)))",
+                    "extra" : None,
                 },
                 {
                     "tokens" : ["foo", "bar", "baz("],
@@ -259,12 +266,13 @@
                         "fullName" : "cn",
                         "emailAddresses" : "mail",
                     },
-                    "expected" : "(&(|(cn=*foo*)(mail=*foo*))(|(cn=*bar*)(mail=*bar*))(|(cn=*baz\\28*)(mail=*baz\\28*)))",
+                    "expected" : "(&(|(cn=*foo*)(mail=foo*))(|(cn=*bar*)(mail=bar*))(|(cn=*baz\\28*)(mail=baz\\28*)))",
+                    "extra" : None,
                 },
             ]
             for entry in entries:
                 self.assertEquals(
-                    buildFilterFromTokens(entry["mapping"], entry["tokens"]),
+                    buildFilterFromTokens(None, entry["mapping"], entry["tokens"], extra=entry["extra"]),
                     entry["expected"]
                 )
 
@@ -330,6 +338,10 @@
                             key, value = fragment.split("=")
                             if value in attrs.get(key, []):
                                 results.append(("ignored", (dn, attrs)))
+                                break
+                            elif value == "*" and key in attrs:
+                                results.append(("ignored", (dn, attrs)))
+                                break
 
             return results
 
@@ -401,7 +413,8 @@
                     "uid=odtestamanda,cn=users,dc=example,dc=com",
                     {
                         'uid': ['odtestamanda'],
-                        'apple-generateduid': ['9DC04A70-E6DD-11DF-9492-0800200C9A66'],
+                        # purposely throw in an un-normalized GUID
+                        'apple-generateduid': ['9dc04a70-e6dd-11df-9492-0800200c9a66'],
                         'sn': ['Test'],
                         'mail': ['odtestamanda at example.com', 'alternate at example.com'],
                         'givenName': ['Amanda'],
@@ -452,6 +465,30 @@
                         'cn': ['Wilfredo Sanchez']
                     }
                 ),
+                (
+                    "uid=testresource  ,  cn=resources  , dc=example,dc=com",
+                    {
+                        'uid': ['testresource'],
+                        'apple-generateduid': ['D91B21B9-B856-495A-8E36-0E5AD54EFB3A'],
+                        'sn': ['Resource'],
+                        'givenName': ['Test'],
+                        'cn': ['Test Resource'],
+                        # purposely throw in an un-normalized GUID
+                        'read-write-proxy' : ['6423f94a-6b76-4a3a-815b-d52cfd77935d'],
+                        'read-only-proxy' : ['5A985493-EE2C-4665-94CF-4DFEA3A89500'],
+                    }
+                ),
+                (
+                    "uid=testresource2  ,  cn=resources  , dc=example,dc=com",
+                    {
+                        'uid': ['testresource2'],
+                        'apple-generateduid': ['753E5A60-AFFD-45E4-BF2C-31DAB459353F'],
+                        'sn': ['Resource2'],
+                        'givenName': ['Test'],
+                        'cn': ['Test Resource2'],
+                        'read-write-proxy' : ['6423F94A-6B76-4A3A-815B-D52CFD77935D'],
+                    }
+                ),
             ),
             {
                 "augmentService" : None,
@@ -546,8 +583,8 @@
                 "resourceSchema": {
                     "resourceInfoAttr": "apple-resource-info", # contains location/resource info
                     "autoScheduleAttr": None,
-                    "proxyAttr": None,
-                    "readOnlyProxyAttr": None,
+                    "proxyAttr": "read-write-proxy",
+                    "readOnlyProxyAttr": "read-only-proxy",
                     "autoAcceptGroupAttr": None,
                 },
                 "partitionSchema": {
@@ -1227,6 +1264,7 @@
             self.assertEquals(
                 len(self.service.ldap.search_s("cn=groups,dc=example,dc=com", 0, "(|(apple-generateduid=right_coast)(apple-generateduid=left_coast))", [])), 2)
 
+
         def test_ldapRecordCreation(self):
             """
             Exercise _ldapResultToRecord(), which converts a dictionary
@@ -1468,6 +1506,21 @@
             self.assertEquals(record.autoAcceptGroup,
                 '77A8EB52-AA2A-42ED-8843-B2BEE863AC70')
 
+            # Record with lowercase guid
+            dn = "uid=odtestamanda,cn=users,dc=example,dc=com"
+            guid = '9dc04a70-e6dd-11df-9492-0800200c9a66'
+            attrs = {
+                'uid': ['odtestamanda'],
+                'apple-generateduid': [guid],
+                'sn': ['Test'],
+                'mail': ['odtestamanda at example.com', 'alternate at example.com'],
+                'givenName': ['Amanda'],
+                'cn': ['Amanda Test']
+            }
+            record = self.service._ldapResultToRecord(dn, attrs,
+                self.service.recordType_users)
+            self.assertEquals(record.guid, guid.upper())
+
         def test_listRecords(self):
             """
             listRecords makes an LDAP query (with fake results in this test)
@@ -1576,7 +1629,7 @@
         @inlineCallbacks
         def test_groupMembershipAliases(self):
             """
-            Exercise a directory enviornment where group membership does not refer
+            Exercise a directory environment where group membership does not refer
             to guids but instead uses LDAP DNs.  This example uses the LDAP attribute
             "uniqueMember" to specify members of a group.  The value of this attribute
             is each member's DN.  Even though the proxy database deals strictly in
@@ -1593,7 +1646,7 @@
             cache = GroupMembershipCache("ProxyDB", expireSeconds=60)
             self.service.groupMembershipCache = cache
             updater = GroupMembershipCacheUpdater(calendaruserproxy.ProxyDBService,
-                self.service, 30, 15, cache=cache, useExternalProxies=False)
+                self.service, 30, 15, 30, cache=cache, useExternalProxies=False)
 
             self.assertEquals((False, 8, 8), (yield updater.updateCache()))
 
@@ -1608,6 +1661,26 @@
                 self.assertEquals(groups, (yield record.cachedGroups()))
 
 
+        def test_getExternalProxyAssignments(self):
+            """
+            Verify getExternalProxyAssignments can extract assignments from the
+            directory, and that guids are normalized.
+            """
+            self.setupService(self.nestedUsingDifferentAttributeUsingDN)
+            self.assertEquals(
+                self.service.getExternalProxyAssignments(),
+                [
+                    ('D91B21B9-B856-495A-8E36-0E5AD54EFB3A#calendar-proxy-read',
+                        ['5A985493-EE2C-4665-94CF-4DFEA3A89500']),
+                    ('D91B21B9-B856-495A-8E36-0E5AD54EFB3A#calendar-proxy-write',
+                        ['6423F94A-6B76-4A3A-815B-D52CFD77935D']),
+                    ('753E5A60-AFFD-45E4-BF2C-31DAB459353F#calendar-proxy-write',
+                        ['6423F94A-6B76-4A3A-815B-D52CFD77935D'])
+                ]
+            )
+
+
+
         def test_splitIntoBatches(self):
             self.setupService(self.nestedUsingDifferentAttributeUsingDN)
             # Data is perfect multiple of size

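Several of the changes above deliberately feed in lowercase GUIDs ("purposely throw in an un-normalized GUID") and then assert that records and proxy assignments come back uppercased. A minimal sketch of that normalization (normalize_guid is a hypothetical helper, not the service's internal code):

```python
import uuid

# Hypothetical sketch: directory records store GUIDs uppercased regardless
# of how the LDAP entry spells them. Round-tripping through uuid.UUID also
# validates the format before uppercasing.
def normalize_guid(raw):
    return str(uuid.UUID(raw)).upper()
```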
Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_livedirectory.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_livedirectory.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/directory/test/test_livedirectory.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -20,9 +20,11 @@
 
 try:
     import ldap
+    import socket
 
     testServer = "localhost"
-    base = "dc=example,dc=com"
+    base = ",".join(["dc=%s" % (p,) for p in socket.gethostname().split(".")])
+    print("Using base: %s" % (base,))
 
     try:
         cxn = ldap.open(testServer)
@@ -162,12 +164,28 @@
                             "attr": "uid", # used only to synthesize email address
                             "emailSuffix": None, # used only to synthesize email address
                             "filter": None, # additional filter for this type
+                            "loginEnabledAttr" : "", # attribute controlling login
+                            "loginEnabledValue" : "yes", # "True" value of above attribute
+                            "mapping" : { # maps internal record names to LDAP
+                                "recordName": "uid",
+                                "fullName" : "cn",
+                                "emailAddresses" : ["mail"], # multiple LDAP fields supported
+                                "firstName" : "givenName",
+                                "lastName" : "sn",
+                            },
                         },
                         "groups": {
                             "rdn": "cn=groups",
                             "attr": "cn", # used only to synthesize email address
                             "emailSuffix": None, # used only to synthesize email address
                             "filter": None, # additional filter for this type
+                            "mapping" : { # maps internal record names to LDAP
+                                "recordName": "cn",
+                                "fullName" : "cn",
+                                "emailAddresses" : ["mail"], # multiple LDAP fields supported
+                                "firstName" : "givenName",
+                                "lastName" : "sn",
+                            },
                         },
                     },
                     "groupSchema": {

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/ical.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/ical.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/ical.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -30,6 +30,7 @@
 
 import cStringIO as StringIO
 import codecs
+from difflib import unified_diff
 import heapq
 import itertools
 import uuid
@@ -950,12 +951,17 @@
         Remove a property from this component.
         @param property: the L{Property} to remove from this component.
         """
-        self._pycalendar.removeProperty(property._pycalendar)
-        self._pycalendar.finalise()
-        property._parent = None
-        self._markAsDirty()
 
+        if isinstance(property, str):
+            for property in self.properties(property):
+                self.removeProperty(property)
+        else:
+            self._pycalendar.removeProperty(property._pycalendar)
+            self._pycalendar.finalise()
+            property._parent = None
+            self._markAsDirty()
 
+
     def removeAllPropertiesWithName(self, pname):
         """
         Remove all properties with the given name from all components.
@@ -1440,6 +1446,10 @@
         currently marked as an EXDATE in the existing master, allow an option whereby the override
         is added as STATUS:CANCELLED and the EXDATE removed.
 
+        IMPORTANT: all callers of this method MUST check the return value for None. Never assume that
+        a valid instance will be derived - no matter how much you think you understand iCalendar recurrence.
+        There is always some new thing that will surprise you.
+
         @param rid: recurrence-id value
         @type rid: L{PyCalendarDateTime} or C{str}
         @param allowCancelled: whether to allow a STATUS:CANCELLED override
@@ -1447,7 +1457,7 @@
         @param allowExcluded: whether to derive an instance for an existing EXDATE
         @type allowExcluded: C{bool}
 
-        @return: L{Component} for newly derived instance, or None if not valid override
+        @return: L{Component} for newly derived instance, or C{None} if not a valid override
         """
 
         if allowCancelled and newcomp is not None:
@@ -3512,3 +3522,23 @@
             break
         else:
             heapq.heappop(heap)
+
+
+
+def normalize_iCalStr(icalstr):
+    """
+    Normalize a string representation of ical data for easy test comparison.
+    """
+
+    icalstr = str(icalstr).replace("\r\n ", "")
+    icalstr = icalstr.replace("\n ", "")
+    icalstr = "\r\n".join([line for line in icalstr.splitlines() if not line.startswith("DTSTAMP")])
+    return icalstr
+
+
+
+def diff_iCalStrs(icalstr1, icalstr2):
+
+    icalstr1 = normalize_iCalStr(icalstr1).splitlines()
+    icalstr2 = normalize_iCalStr(icalstr2).splitlines()
+    return "\n".join(unified_diff(icalstr1, icalstr2))

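The new normalize_iCalStr helper above unfolds iCalendar continuation lines (a CRLF or LF followed by a space) and drops DTSTAMP lines, which differ on every generation, so test comparisons are stable. A self-contained sketch of the same transformation (normalize_ical is a local stand-in for the helper above):

```python
# Sketch of the normalization performed by normalize_iCalStr:
# unfold folded lines, then strip DTSTAMP lines before rejoining with CRLF.
def normalize_ical(icalstr):
    icalstr = str(icalstr).replace("\r\n ", "").replace("\n ", "")
    return "\r\n".join(
        line for line in icalstr.splitlines()
        if not line.startswith("DTSTAMP")
    )
```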
Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/resource.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/resource.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/resource.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -324,7 +324,7 @@
         @param transaction: optional transaction to use instead of associated transaction
         @type transaction: L{txdav.caldav.idav.ITransaction}
         """
-        result = yield super(CalDAVResource, self).renderHTTP(request)
+        response = yield super(CalDAVResource, self).renderHTTP(request)
         if transaction is None:
             transaction = self._associatedTransaction
         if transaction is not None:
@@ -332,9 +332,13 @@
                 yield transaction.abort()
             else:
                 yield transaction.commit()
-        returnValue(result)
 
+                # May need to reset the last-modified header in the response as txn.commit() can change it due to pre-commit hooks
+                if response.headers.hasHeader("last-modified"):
+                    response.headers.setHeader("last-modified", self.lastModified())
+        returnValue(response)
 
+
     # Begin transitional new-store resource interface:
 
     def copyDeadPropertiesTo(self, other):
@@ -466,7 +470,7 @@
                     customxml.SharedURL.qname(),
                 )
 
-            elif config.Sharing.AddressBooks.Enabled and self.isAddressBookCollection() and not self.isDirectoryBackedAddressBookCollection():
+            elif config.Sharing.AddressBooks.Enabled and (self.isAddressBookCollection() or self.isGroup()) and not self.isDirectoryBackedAddressBookCollection():
                 baseProperties += (
                     customxml.Invite.qname(),
                     customxml.AllowedSharingModes.qname(),
@@ -649,7 +653,7 @@
         elif qname == customxml.Invite.qname():
             if config.Sharing.Enabled and (
                 config.Sharing.Calendars.Enabled and self.isCalendarCollection() or
-                config.Sharing.AddressBooks.Enabled and self.isAddressBookCollection() and not self.isDirectoryBackedAddressBookCollection()
+                config.Sharing.AddressBooks.Enabled and (self.isAddressBookCollection() or self.isGroup()) and not self.isDirectoryBackedAddressBookCollection()
             ):
                 result = (yield self.inviteProperty(request))
                 returnValue(result)
@@ -657,7 +661,7 @@
         elif qname == customxml.AllowedSharingModes.qname():
             if config.Sharing.Enabled and config.Sharing.Calendars.Enabled and self.isCalendarCollection():
                 returnValue(customxml.AllowedSharingModes(customxml.CanBeShared()))
-            elif config.Sharing.Enabled and config.Sharing.AddressBooks.Enabled and self.isAddressBookCollection() and not self.isDirectoryBackedAddressBookCollection():
+            elif config.Sharing.Enabled and config.Sharing.AddressBooks.Enabled and (self.isAddressBookCollection() or self.isGroup()) and not self.isDirectoryBackedAddressBookCollection():
                 returnValue(customxml.AllowedSharingModes(customxml.CanBeShared()))
 
         elif qname == customxml.SharedURL.qname():

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/sharing.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/sharing.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/sharing.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -1,5 +1,5 @@
 # -*- test-case-name: twistedcaldav.test.test_sharing -*-
-##
+# #
 # Copyright (c) 2010-2013 Apple Inc. All rights reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
@@ -13,7 +13,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-##
+# #
 
 """
 Sharing behavior
@@ -29,6 +29,8 @@
 from twext.web2.dav.http import ErrorResponse, MultiStatusResponse
 from twext.web2.dav.resource import TwistedACLInheritable
 from twext.web2.dav.util import allDataFromStream, joinURL
+
+from txdav.common.datastore.sql import SharingInvitation
 from txdav.common.datastore.sql_tables import _BIND_MODE_OWN, \
     _BIND_MODE_READ, _BIND_MODE_WRITE, _BIND_STATUS_INVITED, \
     _BIND_MODE_DIRECT, _BIND_STATUS_ACCEPTED, _BIND_STATUS_DECLINED, \
@@ -50,10 +52,10 @@
 # FIXME: Get rid of these imports
 from twistedcaldav.directory.util import TRANSACTION_KEY
 # circular import
-#from txdav.common.datastore.sql import ECALENDARTYPE, EADDRESSBOOKTYPE
+# from txdav.common.datastore.sql import ECALENDARTYPE, EADDRESSBOOKTYPE
 ECALENDARTYPE = 0
 EADDRESSBOOKTYPE = 1
-#ENOTIFICATIONTYPE = 2
+# ENOTIFICATIONTYPE = 2
 
 
 class SharedResourceMixin(object):
@@ -83,14 +85,13 @@
                     customxml.UID.fromString(invitation.uid()) if includeUID else None,
                     element.HRef.fromString(userid),
                     customxml.CommonName.fromString(cn),
-                    customxml.InviteAccess(invitationAccessMapToXML[invitation.access()]()),
-                    invitationStatusMapToXML[invitation.state()](),
+                    customxml.InviteAccess(invitationBindModeToXMLMap[invitation.mode()]()),
+                    invitationBindStatusToXMLMap[invitation.status()](),
                 )
 
             # See if this property is on the shared calendar
             if self.isShared():
-                yield self.validateInvites(request)
-                invitations = yield self._allInvitations()
+                invitations = yield self.validateInvites(request)
                 returnValue(customxml.Invite(
                     *[invitePropertyElement(invitation) for invitation in invitations]
                 ))
@@ -98,20 +99,24 @@
             # See if it is on the sharee calendar
             if self.isShareeResource():
                 original = (yield request.locateResource(self._share.url()))
-                yield original.validateInvites(request)
-                invitations = yield original._allInvitations()
+                if original is not None:
+                    invitations = yield original.validateInvites(request)
 
-                ownerPrincipal = (yield original.ownerPrincipal(request))
-                owner = ownerPrincipal.principalURL()
-                ownerCN = ownerPrincipal.displayName()
+                    ownerPrincipal = (yield original.ownerPrincipal(request))
+                    # FIXME:  use urn:uuid in all cases
+                    if self.isCalendarCollection():
+                        owner = ownerPrincipal.principalURL()
+                    else:
+                        owner = "urn:uuid:" + ownerPrincipal.principalUID()
+                    ownerCN = ownerPrincipal.displayName()
 
-                returnValue(customxml.Invite(
-                    customxml.Organizer(
-                        element.HRef.fromString(owner),
-                        customxml.CommonName.fromString(ownerCN),
-                    ),
-                    *[invitePropertyElement(invitation, includeUID=False) for invitation in invitations]
-                ))
+                    returnValue(customxml.Invite(
+                        customxml.Organizer(
+                            element.HRef.fromString(owner),
+                            customxml.CommonName.fromString(ownerCN),
+                        ),
+                        *[invitePropertyElement(invitation, includeUID=False) for invitation in invitations]
+                    ))
 
         returnValue(None)
 
@@ -157,8 +162,8 @@
             ))
 
         # Only certain states are owner controlled
-        if invitation.state() in ("NEEDS-ACTION", "ACCEPTED", "DECLINED",):
-            yield self._updateInvitation(invitation, state=state, summary=summary)
+        if invitation.status() in (_BIND_STATUS_INVITED, _BIND_STATUS_ACCEPTED, _BIND_STATUS_DECLINED,):
+            yield self._updateInvitation(invitation, status=state, summary=summary)
 
 
     @inlineCallbacks
@@ -262,7 +267,10 @@
 
 
     @inlineCallbacks
-    def removeShareeCollection(self, request):
+    def removeShareeResource(self, request):
+        """
+        Called when the sharee DELETEs a shared collection.
+        """
 
         sharee = self.principalForUID(self._share.shareeUID())
 
@@ -284,7 +292,7 @@
         elif self.isAddressBookCollection():
             return "addressbook"
         elif self.isGroup():
-            #TODO: Add group xml resource type ?
+            # TODO: Add group xml resource type ?
             return "group"
         else:
             return ""
@@ -328,7 +336,7 @@
         else:
             # Invited shares use access mode from the invite
             # Get the access for self
-            returnValue(Invitation(self._newStoreObject).access())
+            returnValue(invitationAccessFromBindModeMap.get(self._newStoreObject.shareMode()))
 
 
     @inlineCallbacks
@@ -461,7 +469,7 @@
 
         # TODO: we do not support external users right now so this is being hard-coded
         # off in spite of the config option.
-        #elif config.Sharing.AllowExternalUsers:
+        # elif config.Sharing.AllowExternalUsers:
         #    return userid
         else:
             returnValue(None)
@@ -472,14 +480,16 @@
         """
         Make sure each userid in an invite is valid - if not re-write status.
         """
-        #assert request
+        # assert request
         invitations = yield self._allInvitations()
         for invitation in invitations:
-            if invitation.state() != "INVALID":
+            if invitation.status() != _BIND_STATUS_INVALID:
                 if not (yield self.validUserIDForShare("urn:uuid:" + invitation.shareeUID(), request)):
-                    yield self._updateInvitation(invitation, state="INVALID")
+                    # FIXME: temporarily disable this to deal with flaky directory
+                    # yield self._updateInvitation(invitation, status=_BIND_STATUS_INVALID)
+                    self.log.error("Invalid sharee detected: {uid}", uid=invitation.shareeUID())
 
-        returnValue(len(invitations))
+        returnValue(invitations)
 
 
     def inviteUserToShare(self, userid, cn, ace, summary, request):
@@ -500,7 +510,7 @@
         return self._processShareActionList(dl, resultIsList)
 
 
-    def uninviteUserToShare(self, userid, ace, request):
+    def uninviteUserFromShare(self, userid, ace, request):
         """
         Send out an uninvite first, and then remove this user from the share list.
         """
@@ -539,7 +549,7 @@
 
 
     @inlineCallbacks
-    def _createInvitation(self, shareeUID, access, summary,):
+    def _createInvitation(self, shareeUID, mode, summary,):
         """
         Create a new homeChild and wrap it in an Invitation
         """
@@ -549,45 +559,41 @@
             shareeHome = yield self._newStoreObject._txn.addressbookHomeWithUID(shareeUID, create=True)
 
         shareUID = yield self._newStoreObject.shareWith(shareeHome,
-                                                    mode=invitationAccessToBindModeMap[access],
+                                                    mode=mode,
                                                     status=_BIND_STATUS_INVITED,
                                                     message=summary)
         shareeStoreObject = yield shareeHome.invitedObjectWithShareUID(shareUID)
-        invitation = Invitation(shareeStoreObject)
+        invitation = SharingInvitation.fromCommonHomeChild(shareeStoreObject)
         returnValue(invitation)
 
 
     @inlineCallbacks
-    def _updateInvitation(self, invitation, access=None, state=None, summary=None):
-        mode = None if access is None else invitationAccessToBindModeMap[access]
-        status = None if state is None else invitationStateToBindStatusMap[state]
-        yield self._newStoreObject.updateShare(invitation._shareeStoreObject, mode=mode, status=status, message=summary)
+    def _updateInvitation(self, invitation, mode=None, status=None, summary=None):
+        yield self._newStoreObject.updateShareFromSharingInvitation(invitation, mode=mode, status=status, message=summary)
+        if mode is not None:
+            invitation.setMode(mode)
+        if status is not None:
+            invitation.setStatus(status)
+        if summary is not None:
+            invitation.setSummary(summary)
 
 
     @inlineCallbacks
     def _allInvitations(self):
         """
-        Get list of all invitations to this object
-
-        For legacy reasons, all invitations are all invited + shared (accepted, not direct).
-        Combine these two into a single sorted list so code is similar to that for legacy invite db
+        Get list of all invitations (non-direct) to this object.
         """
         if not self.exists():
             returnValue([])
 
-        #TODO: Cache
-        if True:  # not hasattr(self, "_invitations"):
+        invitations = yield self._newStoreObject.sharingInvites()
 
-            acceptedHomeChildren = yield self._newStoreObject.asShared()
-            # remove direct shares (it might be OK not to remove these, but that would be different from legacy code)
-            indirectAccceptedHomeChildren = [homeChild for homeChild in acceptedHomeChildren
-                                             if homeChild.shareMode() != _BIND_MODE_DIRECT]
-            invitedHomeChildren = (yield self._newStoreObject.asInvited()) + indirectAccceptedHomeChildren
+        # remove direct shares as those are not "real" invitations
+        invitations = filter(lambda x: x.mode() != _BIND_MODE_DIRECT, invitations)
 
-            self._invitations = sorted([Invitation(homeChild) for homeChild in invitedHomeChildren],
-                                 key=lambda invitation: invitation.shareeUID())
+        invitations.sort(key=lambda invitation: invitation.shareeUID())
 
-        returnValue(self._invitations)
+        returnValue(invitations)
 
 
     @inlineCallbacks
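The rewritten `_allInvitations` above replaces the legacy merge of invited and accepted home children with a single `sharingInvites()` fetch, then drops direct shares and sorts by sharee UID. A minimal sketch of that filter-and-sort step, with a stand-in class in place of the store's `SharingInvitation` (the class and the constant's value here are assumptions for illustration):

```python
# Placeholder value; the real constant lives in
# txdav.common.datastore.sql_tables.
_BIND_MODE_DIRECT = 4

class FakeInvite(object):
    """Stand-in for SharingInvitation, exposing only what the sketch needs."""
    def __init__(self, sharee_uid, mode):
        self._uid = sharee_uid
        self._mode = mode
    def shareeUID(self):
        return self._uid
    def mode(self):
        return self._mode

invites = [
    FakeInvite("uid-b", 0),
    FakeInvite("uid-a", _BIND_MODE_DIRECT),
    FakeInvite("uid-c", 1),
]

# Direct shares are not "real" invitations, so drop them first
# (the diff uses filter(); a list comprehension is equivalent here)...
invites = [i for i in invites if i.mode() != _BIND_MODE_DIRECT]
# ...then present the rest in a stable, sharee-keyed order.
invites.sort(key=lambda invitation: invitation.shareeUID())
print([i.shareeUID() for i in invites])  # ['uid-b', 'uid-c']
```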
@@ -627,11 +633,12 @@
         # Look for existing invite and update its fields or create new one
         invitation = yield self._invitationForShareeUID(shareeUID)
         if invitation:
-            yield self._updateInvitation(invitation, access=invitationAccessMapFromXML[type(ace)], summary=summary)
+            status = _BIND_STATUS_INVITED if invitation.status() in (_BIND_STATUS_DECLINED, _BIND_STATUS_INVALID) else None
+            yield self._updateInvitation(invitation, mode=invitationBindModeFromXMLMap[type(ace)], status=status, summary=summary)
         else:
             invitation = yield self._createInvitation(
                                 shareeUID=shareeUID,
-                                access=invitationAccessMapFromXML[type(ace)],
+                                mode=invitationBindModeFromXMLMap[type(ace)],
                                 summary=summary)
         # Send invite notification
         yield self.sendInviteNotification(invitation, request)
@@ -664,23 +671,27 @@
         # Remove any shared calendar or address book
         sharee = self.principalForUID(invitation.shareeUID())
         if sharee:
-            previousInvitationState = invitation.state()
+            previousInvitationStatus = invitation.status()
+            displayName = None
             if self.isCalendarCollection():
                 shareeHomeResource = yield sharee.calendarHome(request)
-                displayName = yield shareeHomeResource.removeShareByUID(request, invitation.uid())
+                if shareeHomeResource is not None:
+                    displayName = yield shareeHomeResource.removeShareByUID(request, invitation.uid())
             elif self.isAddressBookCollection() or self.isGroup():
                 shareeHomeResource = yield sharee.addressBookHome(request)
-                yield shareeHomeResource.removeShareByUID(request, invitation.uid())
-                displayName = None
+                if shareeHomeResource is not None:
+                    yield shareeHomeResource.removeShareByUID(request, invitation.uid())
+
             # If current user state is accepted then we send an invite with the new state, otherwise
             # we cancel any existing invites for the user
-            if previousInvitationState != "ACCEPTED":
-                yield self.removeInviteNotification(invitation, request)
-            else:
-                yield self.sendInviteNotification(invitation, request, displayName=displayName, notificationState="DELETED")
+            if shareeHomeResource is not None:
+                if previousInvitationStatus != _BIND_STATUS_ACCEPTED:
+                    yield self.removeInviteNotification(invitation, request)
+                else:
+                    yield self.sendInviteNotification(invitation, request, displayName=displayName, notificationState="DELETED")
 
         # Direct shares with a valid sharee principal will already be deleted
-        yield self._newStoreObject.unshareWith(invitation._shareeStoreObject.viewerHome())
+        yield self._newStoreObject.unshareWithUID(invitation.shareeUID())
 
         returnValue(True)
 
@@ -695,7 +706,11 @@
     def sendInviteNotification(self, invitation, request, notificationState=None, displayName=None):
 
         ownerPrincipal = (yield self.ownerPrincipal(request))
-        owner = ownerPrincipal.principalURL()
+        # FIXME:  use urn:uuid in all cases
+        if self.isCalendarCollection():
+            owner = ownerPrincipal.principalURL()
+        else:
+            owner = "urn:uuid:" + ownerPrincipal.principalUID()
         ownerCN = ownerPrincipal.displayName()
         hosturl = (yield self.canonicalURL(request))
 
@@ -719,7 +734,7 @@
 
         # Generate invite XML
         userid = "urn:uuid:" + invitation.shareeUID()
-        state = notificationState if notificationState else invitation.state()
+        state = notificationState if notificationState else invitation.status()
         summary = invitation.summary() if displayName is None else displayName
 
         typeAttr = {'shared-type': self.sharedResourceType()}
@@ -729,8 +744,8 @@
             customxml.InviteNotification(
                 customxml.UID.fromString(invitation.uid()),
                 element.HRef.fromString(userid),
-                invitationStatusMapToXML[state](),
-                customxml.InviteAccess(invitationAccessMapToXML[invitation.access()]()),
+                invitationBindStatusToXMLMap[state](),
+                customxml.InviteAccess(invitationBindModeToXMLMap[invitation.mode()]()),
                 customxml.HostURL(
                     element.HRef.fromString(hosturl),
                 ),
@@ -859,7 +874,7 @@
                 del removeDict[u]
                 del setDict[u]
             for userid, access in removeDict.iteritems():
-                result = (yield self.uninviteUserToShare(userid, access, request))
+                result = (yield self.uninviteUserFromShare(userid, access, request))
                 # If result is False that means the user being removed was not
                 # actually invited, but let's not return an error in this case.
                 okusers.add(userid)
@@ -877,7 +892,8 @@
             ok_code = responsecode.FAILED_DEPENDENCY
 
         # Do a final validation of the entire set of invites
-        numRecords = (yield self.validateInvites(request))
+        invites = (yield self.validateInvites(request))
+        numRecords = len(invites)
 
         # Set the sharing state on the collection
         shared = self.isShared()
@@ -974,28 +990,21 @@
     }
 
 
-invitationAccessMapToXML = {
-    "read-only"           : customxml.ReadAccess,
-    "read-write"          : customxml.ReadWriteAccess,
+invitationBindStatusToXMLMap = {
+    _BIND_STATUS_INVITED      : customxml.InviteStatusNoResponse,
+    _BIND_STATUS_ACCEPTED     : customxml.InviteStatusAccepted,
+    _BIND_STATUS_DECLINED     : customxml.InviteStatusDeclined,
+    _BIND_STATUS_INVALID      : customxml.InviteStatusInvalid,
+    "DELETED"                 : customxml.InviteStatusDeleted,
 }
-invitationAccessMapFromXML = dict([(v, k) for k, v in invitationAccessMapToXML.iteritems()])
+invitationBindStatusFromXMLMap = dict((v, k) for k, v in invitationBindStatusToXMLMap.iteritems())
 
-invitationStatusMapToXML = {
-    "NEEDS-ACTION" : customxml.InviteStatusNoResponse,
-    "ACCEPTED"     : customxml.InviteStatusAccepted,
-    "DECLINED"     : customxml.InviteStatusDeclined,
-    "DELETED"      : customxml.InviteStatusDeleted,
-    "INVALID"      : customxml.InviteStatusInvalid,
+invitationBindModeToXMLMap = {
+    _BIND_MODE_READ           : customxml.ReadAccess,
+    _BIND_MODE_WRITE          : customxml.ReadWriteAccess,
 }
-invitationStatusMapFromXML = dict([(v, k) for k, v in invitationStatusMapToXML.iteritems()])
+invitationBindModeFromXMLMap = dict((v, k) for k, v in invitationBindModeToXMLMap.iteritems())
 
-invitationStateToBindStatusMap = {
-    "NEEDS-ACTION": _BIND_STATUS_INVITED,
-    "ACCEPTED": _BIND_STATUS_ACCEPTED,
-    "DECLINED": _BIND_STATUS_DECLINED,
-    "INVALID": _BIND_STATUS_INVALID,
-}
-invitationStateFromBindStatusMap = dict((v, k) for k, v in invitationStateToBindStatusMap.iteritems())
 invitationAccessToBindModeMap = {
     "own": _BIND_MODE_OWN,
     "read-only": _BIND_MODE_READ,
@@ -1004,35 +1013,6 @@
 invitationAccessFromBindModeMap = dict((v, k) for k, v in invitationAccessToBindModeMap.iteritems())
 
 
-class Invitation(object):
-    """
-        Invitation is a read-only wrapper for CommonHomeChild, that uses terms similar LegacyInvite sharing.py code base.
-    """
-    def __init__(self, shareeStoreObject):
-        self._shareeStoreObject = shareeStoreObject
-
-
-    def uid(self):
-        return self._shareeStoreObject.shareUID()
-
-
-    def shareeUID(self):
-        return self._shareeStoreObject.viewerHome().uid()
-
-
-    def access(self):
-        return invitationAccessFromBindModeMap.get(self._shareeStoreObject.shareMode())
-
-
-    def state(self):
-        return invitationStateFromBindStatusMap.get(self._shareeStoreObject.shareStatus())
-
-
-    def summary(self):
-        return self._shareeStoreObject.shareMessage()
-
-
-
 class SharedHomeMixin(LinkFollowerMixIn):
     """
     A mix-in for calendar/addressbook homes that defines the operations for
@@ -1071,36 +1051,46 @@
         @rtype: L{Share} or L{NoneType}
         """
         # Find a matching share
-        if not storeObject or storeObject.owned():
+        # Use "storeObject.shareUID is not None" to prevent partially shared address books from getting a share
+        if storeObject is None or storeObject.owned():
             returnValue(None)
 
-        # get the shared object's URL
+        # Get the shared object's URL - we may need to fake this if the sharer principal is missing or disabled
+        url = None
         owner = self.principalForUID(storeObject.ownerHome().uid())
+        from twistedcaldav.directory.principal import DirectoryCalendarPrincipalResource
+        if isinstance(owner, DirectoryCalendarPrincipalResource):
 
-        if not request:
-            # FIXEME:  Fake up a request that can be used to get the owner home resource
-            class _FakeRequest(object):
-                pass
-            fakeRequest = _FakeRequest()
-            setattr(fakeRequest, TRANSACTION_KEY, self._newStoreHome._txn)
-            request = fakeRequest
+            if not request:
+                # FIXME: Fake up a request that can be used to get the owner home resource
+                class _FakeRequest(object):
+                    pass
+                fakeRequest = _FakeRequest()
+                setattr(fakeRequest, TRANSACTION_KEY, self._newStoreHome._txn)
+                request = fakeRequest
 
-        if self._newStoreHome._homeType == ECALENDARTYPE:
-            ownerHomeCollection = yield owner.calendarHome(request)
-        elif self._newStoreHome._homeType == EADDRESSBOOKTYPE:
-            ownerHomeCollection = yield owner.addressBookHome(request)
+            if self._newStoreHome._homeType == ECALENDARTYPE:
+                ownerHomeCollection = yield owner.calendarHome(request)
+            elif self._newStoreHome._homeType == EADDRESSBOOKTYPE:
+                ownerHomeCollection = yield owner.addressBookHome(request)
 
+            if ownerHomeCollection is not None:
+                url = ownerHomeCollection.url()
+
+        if url is None:
+            url = "/calendars/__uids__/%s/" % (storeObject.ownerHome().uid(),)
+
         ownerHomeChild = yield storeObject.ownerHome().childWithID(storeObject._resourceID)
         if ownerHomeChild:
             assert ownerHomeChild != storeObject
-            url = joinURL(ownerHomeCollection.url(), ownerHomeChild.name())
+            url = joinURL(url, ownerHomeChild.name())
             share = Share(shareeStoreObject=storeObject, ownerStoreObject=ownerHomeChild, url=url)
         else:
             for ownerHomeChild in (yield storeObject.ownerHome().children()):
                 if ownerHomeChild.owned():
                     sharedGroup = yield ownerHomeChild.objectResourceWithID(storeObject._resourceID)
                     if sharedGroup:
-                        url = joinURL(ownerHomeCollection.url(), ownerHomeChild.name(), sharedGroup.name())
+                        url = joinURL(url, ownerHomeChild.name(), sharedGroup.name())
                         share = Share(shareeStoreObject=storeObject, ownerStoreObject=sharedGroup, url=url)
                         break
 
@@ -1110,18 +1100,19 @@
     @inlineCallbacks
     def _shareForUID(self, shareUID, request):
 
-        shareeStoreObject = yield self._newStoreHome.objectWithShareUID(shareUID)
-        if shareeStoreObject:
-            share = yield self._shareForStoreObject(shareeStoreObject, request)
-            if share:
-                returnValue(share)
+        if shareUID is not None:  # shareUID may be None for partially shared addressbooks
+            shareeStoreObject = yield self._newStoreHome.objectWithShareUID(shareUID)
+            if shareeStoreObject:
+                share = yield self._shareForStoreObject(shareeStoreObject, request)
+                if share:
+                    returnValue(share)
 
-        # find direct shares
-        children = yield self._newStoreHome.children()
-        for child in children:
-            share = yield self._shareForStoreObject(child, request)
-            if share and share.uid() == shareUID:
-                returnValue(share)
+            # find direct shares
+            children = yield self._newStoreHome.children()
+            for child in children:
+                share = yield self._shareForStoreObject(child, request)
+                if share and share.uid() == shareUID:
+                    returnValue(share)
 
         returnValue(None)
 
@@ -1133,8 +1124,7 @@
         oldShare = yield self._shareForUID(inviteUID, request)
 
         # Send the invite reply then add the link
-        yield self._changeShare(request, "ACCEPTED", hostUrl, inviteUID,
-                                displayname)
+        yield self._changeShare(request, _BIND_STATUS_ACCEPTED, hostUrl, inviteUID, displayname)
         if oldShare:
             share = oldShare
         else:
@@ -1145,8 +1135,7 @@
                           ownerStoreObject=sharedResource._newStoreObject,
                           url=hostUrl)
 
-        response = yield self._acceptShare(request, not oldShare, share,
-                                           displayname)
+        response = yield self._acceptShare(request, not oldShare, share, displayname)
         returnValue(response)
 
 
@@ -1172,8 +1161,7 @@
                           ownerStoreObject=sharedCollection._newStoreObject,
                           url=hostUrl)
 
-        response = yield self._acceptShare(request, not oldShare, share,
-                                           displayname)
+        response = yield self._acceptShare(request, not oldShare, share, displayname)
         returnValue(response)
 
 
@@ -1291,7 +1279,7 @@
         Remove a shared collection but do not send a decline back. Return the
         current display name of the shared collection.
         """
-        #FIXME: This is only works for calendar
+        # FIXME: only works for calendar
         shareURL = joinURL(self.url(), share.name())
         shared = (yield request.locateResource(shareURL))
         displayname = shared.displayName()
@@ -1309,12 +1297,12 @@
 
         # Remove it if it is in the DB
         yield self.removeShareByUID(request, inviteUID)
-        yield self._changeShare(request, "DECLINED", hostUrl, inviteUID)
+        yield self._changeShare(request, _BIND_STATUS_DECLINED, hostUrl, inviteUID, processed=True)
         returnValue(Response(code=responsecode.NO_CONTENT))
 
 
     @inlineCallbacks
-    def _changeShare(self, request, state, hostUrl, replytoUID, displayname=None):
+    def _changeShare(self, request, state, hostUrl, replytoUID, displayname=None, processed=False):
         """
         Accept or decline an invite to a shared collection.
         """
@@ -1323,6 +1311,10 @@
         ownerPrincipalUID = ownerPrincipal.principalUID()
         sharedResource = (yield request.locateResource(hostUrl))
         if sharedResource is None:
+            # FIXME: have to return here rather than raise to allow removal of a share for a sharer
+            # whose principal is no longer valid yet still exists in the store. Really we need to get rid of
+            # locateResource calls and just do everything via store objects.
+            returnValue(None)
             # Original shared collection is gone - nothing we can do except ignore it
             raise HTTPError(ErrorResponse(
                 responsecode.FORBIDDEN,
@@ -1331,7 +1323,8 @@
             ))
 
         # Change the record
-        yield sharedResource.changeUserInviteState(request, replytoUID, ownerPrincipalUID, state, displayname)
+        if not processed:
+            yield sharedResource.changeUserInviteState(request, replytoUID, ownerPrincipalUID, state, displayname)
 
         yield self.sendReply(request, ownerPrincipal, sharedResource, state, hostUrl, replytoUID, displayname)
 
@@ -1341,6 +1334,12 @@
 
         # Locate notifications collection for owner
         owner = (yield sharedResource.ownerPrincipal(request))
+        if owner is None:
+            # FIXME: have to return here rather than raise to allow removal of a share for a sharer
+            # whose principal is no longer valid yet still exists in the store. Really we need to get rid of
+            # locateResource calls and just do everything via store objects.
+            returnValue(None)
+
         notificationResource = (yield request.locateResource(owner.notificationURL()))
         notifications = notificationResource._newStoreNotifications
 
@@ -1348,12 +1347,16 @@
         notificationUID = "%s-reply" % (replytoUID,)
         xmltype = customxml.InviteReply()
 
-        # Prefer mailto:, otherwise use principal URL
-        for cua in shareePrincipal.calendarUserAddresses():
-            if cua.startswith("mailto:"):
-                break
+        # FIXME:  use urn:uuid in all cases
+        if self._newStoreHome and self._newStoreHome._homeType == EADDRESSBOOKTYPE:
+            cua = "urn:uuid:" + shareePrincipal.principalUID()
         else:
-            cua = shareePrincipal.principalURL()
+            # Prefer mailto:, otherwise use principal URL
+            for cua in shareePrincipal.calendarUserAddresses():
+                if cua.startswith("mailto:"):
+                    break
+            else:
+                cua = shareePrincipal.principalURL()
 
         commonName = shareePrincipal.displayName()
         record = shareePrincipal.record
@@ -1364,7 +1367,7 @@
                 *(
                     (
                         element.HRef.fromString(cua),
-                        invitationStatusMapToXML[state](),
+                        invitationBindStatusToXMLMap[state](),
                         customxml.HostURL(
                             element.HRef.fromString(hostUrl),
                         ),

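The map refactor near the end of this file replaces string-keyed access/status maps with maps keyed by the `_BIND_*` constants, and derives each reverse lookup by inverting the forward map. A sketch of that idiom, with placeholder integer constants and stand-in classes for the customxml element types (both are assumptions, not the real values):

```python
# Placeholder values; the real constants come from sql_tables.
_BIND_STATUS_INVITED, _BIND_STATUS_ACCEPTED = 0, 1

# Stand-ins for customxml.InviteStatusNoResponse / InviteStatusAccepted.
class InviteStatusNoResponse(object): pass
class InviteStatusAccepted(object): pass

invitationBindStatusToXMLMap = {
    _BIND_STATUS_INVITED: InviteStatusNoResponse,
    _BIND_STATUS_ACCEPTED: InviteStatusAccepted,
}
# Inverting keys and values gives the XML-to-status lookup for free,
# as long as the forward map is one-to-one.
invitationBindStatusFromXMLMap = dict(
    (v, k) for k, v in invitationBindStatusToXMLMap.items()
)

print(invitationBindStatusFromXMLMap[InviteStatusAccepted])  # 1
```

Note that the real forward map keeps one string key, `"DELETED"`, because there is no corresponding bind-status constant; inversion still works since the class values remain unique.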
Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/stdconfig.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/stdconfig.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/stdconfig.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -485,6 +485,7 @@
     #
     "AccessLogFile"  : "access.log", # Apache-style access log
     "ErrorLogFile"   : "error.log", # Server activity log
+    "AgentLogFile"   : "agent.log", # Agent activity log
     "ErrorLogEnabled"   : True, # True = use log file, False = stdout
     "ErrorLogRotateMB"  : 10, # Rotate error log after so many megabytes
     "ErrorLogMaxRotatedFiles"  : 5, # Retain this many error log files
@@ -614,7 +615,7 @@
             "Enabled"         : True, # Calendar on/off switch
         },
         "AddressBooks" : {
-            "Enabled"         : True, # Address Books on/off switch
+            "Enabled"         : False, # Address Books on/off switch
         }
     },
 
@@ -747,6 +748,7 @@
             "AttendeeRefreshBatch"                : 5, # Number of attendees to do batched refreshes: 0 - no batching
             "AttendeeRefreshBatchDelaySeconds"    : 5, # Time after an iTIP REPLY for first batched attendee refresh
             "AttendeeRefreshBatchIntervalSeconds" : 5, # Time between attendee batch refreshes
+            "AttendeeRefreshCountLimit"           : 50, # Number of attendees above which attendee refreshes are suppressed: 0 - no limit
             "UIDLockTimeoutSeconds"               : 60, # Time for implicit UID lock timeout
             "UIDLockExpirySeconds"                : 300, # Expiration time for UID lock,
             "PrincipalHostAliases"                : [], # Host names matched in http(s) CUAs
@@ -922,7 +924,7 @@
                 "ClientEnabled": True,
                 "ServerEnabled": True,
                 "BindAddress": "127.0.0.1",
-                "Port": 11211,
+                "Port": 11311,
                 "HandleCacheTypes": [
                     "Default",
 #                   "OpenDirectoryBacker",
@@ -988,7 +990,8 @@
         "Enabled": True,
         "MemcachedPool" : "Default",
         "UpdateSeconds" : 300,
-        "ExpireSeconds" : 3600,
+        "ExpireSeconds" : 86400,
+        "LockSeconds"   : 600,
         "EnableUpdater" : True,
         "UseExternalProxies" : False,
     },
@@ -1114,6 +1117,7 @@
     ("ConfigRoot", ("Scheduling", "iSchedule", "DKIM", "PrivateExchanges",)),
     ("LogRoot", "AccessLogFile"),
     ("LogRoot", "ErrorLogFile"),
+    ("LogRoot", "AgentLogFile"),
     ("LogRoot", ("Postgres", "LogFile",)),
     ("LogRoot", ("LogDatabase", "StatisticsLogFile",)),
     ("LogRoot", "AccountingLogRoot"),

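The stdconfig.py changes add `AgentLogFile` both as a default (`"agent.log"`) and as a `("LogRoot", "AgentLogFile")` entry in the root-relative path table, so a bare filename is resolved under `LogRoot`. A sketch of the resolution that table implies (the helper name is hypothetical; the real server performs this normalization internally):

```python
import os

# Hypothetical helper illustrating root-relative config path expansion.
def expand_log_path(log_root, filename):
    # Absolute paths are honored as-is; bare names land under LogRoot.
    if os.path.isabs(filename):
        return filename
    return os.path.join(log_root, filename)

print(expand_log_path("/var/log/caldavd", "agent.log"))
# /var/log/caldavd/agent.log
print(expand_log_path("/var/log/caldavd", "/tmp/agent.log"))
# /tmp/agent.log
```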
Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/storebridge.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/storebridge.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/storebridge.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -59,10 +59,10 @@
     InvalidPerUserDataMerge, \
     AttendeeAllowedError, ResourceDeletedError, InvalidAttachmentOperation, \
     ShareeAllowedError
-from txdav.carddav.iaddressbookstore import GroupWithUnsharedAddressNotAllowedError, \
-    GroupForSharedAddressBookDeleteNotAllowedError, SharedGroupDeleteNotAllowedError
+from txdav.carddav.iaddressbookstore import KindChangeNotAllowedError, \
+    GroupWithUnsharedAddressNotAllowedError
 from txdav.common.datastore.sql_tables import _BIND_MODE_READ, _BIND_MODE_WRITE, \
-    _BIND_MODE_DIRECT
+    _BIND_MODE_DIRECT, _BIND_STATUS_ACCEPTED
 from txdav.common.icommondatastore import NoSuchObjectResourceError, \
     TooManyObjectResourcesError, ObjectResourceTooBigError, \
     InvalidObjectResourceError, ObjectResourceNameNotAllowedError, \
@@ -82,6 +82,7 @@
 from twistedcaldav.customxml import calendarserver_namespace
 from twistedcaldav.instance import InvalidOverriddenInstanceError, \
     TooManyInstancesError
+import collections
 
 """
 Wrappers to translate between the APIs in L{txdav.caldav.icalendarstore} and
@@ -457,7 +458,7 @@
         # Check sharee collection first
         if self.isShareeResource():
             log.debug("Removing shared collection %s" % (self,))
-            yield self.removeShareeCollection(request)
+            yield self.removeShareeResource(request)
             returnValue(NO_CONTENT)
 
         log.debug("Deleting collection %s" % (self,))
@@ -630,20 +631,40 @@
             raise HTTPError(StatusResponse(BAD_REQUEST, "Could not parse valid data from request body"))
 
         # Build response
-        xmlresponses = []
-        for ctr, component in enumerate(components):
+        xmlresponses = [None] * len(components)
+        indexedComponents = [idxComponent for idxComponent in enumerate(components)]
+        yield self.bulkCreate(indexedComponents, request, return_changed, xmlresponses)
 
-            code = None
-            error = None
-            dataChanged = None
+        result = MultiStatusResponse(xmlresponses)
+
+        newctag = (yield self.getInternalSyncToken())
+        result.headers.setRawHeaders("CTag", (newctag,))
+
+        # Setup some useful logging
+        request.submethod = "Simple batch"
+        if not hasattr(request, "extendedLogItems"):
+            request.extendedLogItems = {}
+        request.extendedLogItems["rcount"] = len(xmlresponses)
+
+        returnValue(result)
+
+
+    @inlineCallbacks
+    def bulkCreate(self, indexedComponents, request, return_changed, xmlresponses):
+        """
+        Do the create operations for simpleBatchPOST() or crudCreate().
+        Subclasses may override.
+        """
+        for index, component in indexedComponents:
+
             try:
                 # Create a new name if one was not provided
-                name = md5(str(ctr) + component.resourceUID() + str(time.time()) + request.path).hexdigest() + self.resourceSuffix()
+                name = md5(str(index) + component.resourceUID() + str(time.time()) + request.path).hexdigest() + self.resourceSuffix()
 
                 # Get a resource for the new item
                 newchildURL = joinURL(request.path, name)
                 newchild = (yield request.locateResource(newchildURL))
-                dataChanged = (yield self.storeResourceData(newchild, component, returnChangedData=return_changed))
+                changedData = (yield self.storeResourceData(newchild, component, returnChangedData=return_changed))
 
             except HTTPError, e:
                 # Extract the pre-condition
@@ -651,65 +672,70 @@
                 if isinstance(e.response, ErrorResponse):
                     error = e.response.error
                     error = (error.namespace, error.name,)
+
+                xmlresponses[index] = (
+                    yield self.bulkCreateResponse(component, newchildURL, newchild, None, code, error)
+                )
+
             except Exception:
-                code = BAD_REQUEST
+                xmlresponses[index] = (
+                    yield self.bulkCreateResponse(component, newchildURL, newchild, None, code=BAD_REQUEST, error=None)
+                )
 
-            if code is None:
+            else:
+                if not return_changed:
+                    changedData = None
+                xmlresponses[index] = (
+                    yield self.bulkCreateResponse(component, newchildURL, newchild, changedData, code=None, error=None)
+                )
 
-                etag = (yield newchild.etag())
-                if not return_changed or dataChanged is None:
-                    xmlresponses.append(
-                        davxml.PropertyStatusResponse(
-                            davxml.HRef.fromString(newchildURL),
-                            davxml.PropertyStatus(
-                                davxml.PropertyContainer(
-                                    davxml.GETETag.fromString(etag.generate()),
-                                    customxml.UID.fromString(component.resourceUID()),
-                                ),
-                                davxml.Status.fromResponseCode(OK),
-                            )
+
+    @inlineCallbacks
+    def bulkCreateResponse(self, component, newchildURL, newchild, changedData, code, error):
+        """
+        Generate one XML response element for a bulk create.
+        """
+        if code is None:
+            etag = (yield newchild.etag())
+            if changedData is None:
+                returnValue(
+                    davxml.PropertyStatusResponse(
+                        davxml.HRef.fromString(newchildURL),
+                        davxml.PropertyStatus(
+                            davxml.PropertyContainer(
+                                davxml.GETETag.fromString(etag.generate()),
+                                customxml.UID.fromString(component.resourceUID()),
+                            ),
+                            davxml.Status.fromResponseCode(OK),
                         )
                     )
-                else:
-                    xmlresponses.append(
-                        davxml.PropertyStatusResponse(
-                            davxml.HRef.fromString(newchildURL),
-                            davxml.PropertyStatus(
-                                davxml.PropertyContainer(
-                                    davxml.GETETag.fromString(etag.generate()),
-                                    self.xmlDataElementType().fromTextData(dataChanged),
-                                ),
-                                davxml.Status.fromResponseCode(OK),
-                            )
+                )
+            else:
+                returnValue(
+                    davxml.PropertyStatusResponse(
+                        davxml.HRef.fromString(newchildURL),
+                        davxml.PropertyStatus(
+                            davxml.PropertyContainer(
+                                davxml.GETETag.fromString(etag.generate()),
+                                self.xmlDataElementType().fromTextData(changedData),
+                            ),
+                            davxml.Status.fromResponseCode(OK),
                         )
                     )
-
-            else:
-                xmlresponses.append(
-                    davxml.StatusResponse(
-                        davxml.HRef.fromString(""),
-                        davxml.Status.fromResponseCode(code),
+                )
+        else:
+            returnValue(
+                davxml.StatusResponse(
+                    davxml.HRef.fromString(""),
+                    davxml.Status.fromResponseCode(code),
                     davxml.Error(
                         WebDAVUnknownElement.withName(*error),
                         customxml.UID.fromString(component.resourceUID()),
                     ) if error else None,
-                    )
                 )
+            )
 
-        result = MultiStatusResponse(xmlresponses)
 
-        newctag = (yield self.getInternalSyncToken())
-        result.headers.setRawHeaders("CTag", (newctag,))
-
-        # Setup some useful logging
-        request.submethod = "Simple batch"
-        if not hasattr(request, "extendedLogItems"):
-            request.extendedLogItems = {}
-        request.extendedLogItems["rcount"] = len(xmlresponses)
-
-        returnValue(result)
-
-
     @inlineCallbacks
     def crudBatchPOST(self, request, xmlroot):
 
@@ -722,14 +748,11 @@
         # Look for return changed data option
         return_changed = self.checkReturnChanged(request)
 
-        # Build response
-        xmlresponses = []
-        checkedBindPrivelege = None
-        checkedUnbindPrivelege = None
-        createCount = 0
-        updateCount = 0
-        deleteCount = 0
-        for xmlchild in xmlroot.children:
+        # Set up for create, update, and delete
+        crudDeleteInfo = []
+        crudUpdateInfo = []
+        crudCreateInfo = []
+        for index, xmlchild in enumerate(xmlroot.children):
 
             # Determine the multiput operation: create, update, delete
             href = xmlchild.childOfType(davxml.HRef.qname())
@@ -742,17 +765,7 @@
                 if xmldata is None:
                     raise HTTPError(StatusResponse(BAD_REQUEST, "Could not parse valid data from request body without a DAV:Href present"))
 
-                # Do privilege check on collection once
-                if checkedBindPrivelege is None:
-                    try:
-                        yield self.authorize(request, (davxml.Bind(),))
-                        checkedBindPrivelege = True
-                    except HTTPError, e:
-                        checkedBindPrivelege = e
-
-                # Create operations
-                yield self.crudCreate(request, xmldata, xmlresponses, return_changed, checkedBindPrivelege)
-                createCount += 1
+                crudCreateInfo.append((index, xmldata))
             else:
                 delete = xmlchild.childOfType(customxml.Delete.qname())
                 ifmatch = xmlchild.childOfType(customxml.IfMatch.qname())
@@ -763,21 +776,17 @@
                         raise HTTPError(StatusResponse(BAD_REQUEST, "Could not parse valid data from request body - no set_items of delete operation"))
                     if xmldata is None:
                         raise HTTPError(StatusResponse(BAD_REQUEST, "Could not parse valid data from request body for set_items operation"))
-                    yield self.crudUpdate(request, str(href), xmldata, ifmatch, return_changed, xmlresponses)
-                    updateCount += 1
+                    crudUpdateInfo.append((index, str(href), xmldata, ifmatch))
                 else:
-                    # Do privilege check on collection once
-                    if checkedUnbindPrivelege is None:
-                        try:
-                            yield self.authorize(request, (davxml.Unbind(),))
-                            checkedUnbindPrivelege = True
-                        except HTTPError, e:
-                            checkedUnbindPrivelege = e
+                    crudDeleteInfo.append((index, str(href), ifmatch))
 
-                    yield self.crudDelete(request, str(href), ifmatch, xmlresponses, checkedUnbindPrivelege)
-                    deleteCount += 1
+        # now do the work
+        xmlresponses = [None] * len(xmlroot.children)
+        yield self.crudDelete(crudDeleteInfo, request, xmlresponses)
+        yield self.crudCreate(crudCreateInfo, request, xmlresponses, return_changed)
+        yield self.crudUpdate(crudUpdateInfo, request, xmlresponses, return_changed)
 
-        result = MultiStatusResponse(xmlresponses)
+        result = MultiStatusResponse(xmlresponses) #@UndefinedVariable
 
         newctag = (yield self.getInternalSyncToken())
         result.headers.setRawHeaders("CTag", (newctag,))
@@ -787,181 +796,171 @@
         if not hasattr(request, "extendedLogItems"):
             request.extendedLogItems = {}
         request.extendedLogItems["rcount"] = len(xmlresponses)
-        if createCount:
-            request.extendedLogItems["create"] = createCount
-        if updateCount:
-            request.extendedLogItems["update"] = updateCount
-        if deleteCount:
-            request.extendedLogItems["delete"] = deleteCount
+        if crudCreateInfo:
+            request.extendedLogItems["create"] = len(crudCreateInfo)
+        if crudUpdateInfo:
+            request.extendedLogItems["update"] = len(crudUpdateInfo)
+        if crudDeleteInfo:
+            request.extendedLogItems["delete"] = len(crudDeleteInfo)
 
         returnValue(result)
 
 
     @inlineCallbacks
-    def crudCreate(self, request, xmldata, xmlresponses, return_changed, hasPrivilege):
+    def crudCreate(self, crudCreateInfo, request, xmlresponses, return_changed):
 
-        code = None
-        error = None
-        try:
-            if isinstance(hasPrivilege, HTTPError):
-                raise hasPrivilege
+        if crudCreateInfo:
+            # Do privilege check on collection once
+            try:
+                yield self.authorize(request, (davxml.Bind(),))
+                hasPrivilege = True
+            except HTTPError, e:
+                hasPrivilege = e
 
-            componentdata = xmldata.textData()
-            component = xmldata.generateComponent()
+            # Get components
+            indexedComponents = []
+            for index, xmldata in crudCreateInfo:
 
-            # Create a new name if one was not provided
-            name = md5(str(componentdata) + str(time.time()) + request.path).hexdigest() + self.resourceSuffix()
+                component = xmldata.generateComponent()
 
-            # Get a resource for the new item
-            newchildURL = joinURL(request.path, name)
-            newchild = (yield request.locateResource(newchildURL))
-            yield self.storeResourceData(newchild, component, componentdata)
+                if hasPrivilege is not True:
+                    e = hasPrivilege # reuse the same handling as the exception path
+                    code = e.response.code
+                    if isinstance(e.response, ErrorResponse):
+                        error = e.response.error
+                        error = (error.namespace, error.name,)
 
-            # FIXME: figure out return_changed behavior
+                    xmlresponse = yield self.bulkCreateResponse(component, None, None, None, code, error)
+                    xmlresponses[index] = xmlresponse
 
-        except HTTPError, e:
-            # Extract the pre-condition
-            code = e.response.code
-            if isinstance(e.response, ErrorResponse):
-                error = e.response.error
-                error = (error.namespace, error.name,)
+                else:
+                    indexedComponents.append((index, component,))
 
-        except Exception:
-            code = BAD_REQUEST
+            yield self.bulkCreate(indexedComponents, request, return_changed, xmlresponses)
 
-        if code is None:
-            etag = (yield newchild.etag())
-            xmlresponses.append(
-                davxml.PropertyStatusResponse(
-                    davxml.HRef.fromString(newchildURL),
-                    davxml.PropertyStatus(
-                        davxml.PropertyContainer(
-                            davxml.GETETag.fromString(etag.generate()),
-                            customxml.UID.fromString(component.resourceUID()),
-                        ),
-                        davxml.Status.fromResponseCode(OK),
-                    )
-                )
-            )
-        else:
-            xmlresponses.append(
-                davxml.StatusResponse(
-                    davxml.HRef.fromString(""),
-                    davxml.Status.fromResponseCode(code),
-                    davxml.Error(
-                        WebDAVUnknownElement.withName(*error),
-                        customxml.UID.fromString(component.resourceUID()),
-                    ) if error else None,
-                )
-            )
 
-
     @inlineCallbacks
-    def crudUpdate(self, request, href, xmldata, ifmatch, return_changed, xmlresponses):
-        code = None
-        error = None
-        try:
-            componentdata = xmldata.textData()
-            component = xmldata.generateComponent()
+    def crudUpdate(self, crudUpdateInfo, request, xmlresponses, return_changed):
 
-            updateResource = (yield request.locateResource(href))
-            if not updateResource.exists():
-                raise HTTPError(NOT_FOUND)
+        for index, href, xmldata, ifmatch in crudUpdateInfo:
 
-            # Check privilege
-            yield updateResource.authorize(request, (davxml.Write(),))
+            code = None
+            error = None
+            try:
+                componentdata = xmldata.textData()
+                component = xmldata.generateComponent()
 
-            # Check if match
-            etag = (yield updateResource.etag())
-            if ifmatch and ifmatch != etag.generate():
-                raise HTTPError(PRECONDITION_FAILED)
+                updateResource = (yield request.locateResource(href))
+                if not updateResource.exists():
+                    raise HTTPError(NOT_FOUND)
 
-            yield self.storeResourceData(updateResource, component, componentdata)
+                # Check privilege
+                yield updateResource.authorize(request, (davxml.Write(),))
 
-            # FIXME: figure out return_changed behavior
+                # Check if match
+                etag = (yield updateResource.etag())
+                if ifmatch and ifmatch != etag.generate():
+                    raise HTTPError(PRECONDITION_FAILED)
 
-        except HTTPError, e:
-            # Extract the pre-condition
-            code = e.response.code
-            if isinstance(e.response, ErrorResponse):
-                error = e.response.error
-                error = (error.namespace, error.name,)
+                changedData = yield self.storeResourceData(updateResource, component, componentdata)
 
-        except Exception:
-            code = BAD_REQUEST
+            except HTTPError, e:
+                # Extract the pre-condition
+                code = e.response.code
+                if isinstance(e.response, ErrorResponse):
+                    error = e.response.error
+                    error = (error.namespace, error.name,)
 
-        if code is None:
-            xmlresponses.append(
-                davxml.PropertyStatusResponse(
-                    davxml.HRef.fromString(href),
-                    davxml.PropertyStatus(
-                        davxml.PropertyContainer(
-                            davxml.GETETag.fromString(etag.generate()),
-                        ),
-                        davxml.Status.fromResponseCode(OK),
+            except Exception:
+                code = BAD_REQUEST
+
+            if code is None:
+                if not return_changed or changedData is None:
+                    xmlresponses[index] = davxml.PropertyStatusResponse(
+                        davxml.HRef.fromString(href),
+                        davxml.PropertyStatus(
+                            davxml.PropertyContainer(
+                                davxml.GETETag.fromString(etag.generate()),
+                            ),
+                            davxml.Status.fromResponseCode(OK),
+                        )
                     )
-                )
-            )
-        else:
-            xmlresponses.append(
-                davxml.StatusResponse(
-                    davxml.HRef.fromString(href),
-                    davxml.Status.fromResponseCode(code),
-                    davxml.Error(
-                        WebDAVUnknownElement.withName(*error),
-                    ) if error else None,
-                )
-            )
+                else:
+                    xmlresponses[index] = davxml.PropertyStatusResponse(
+                        davxml.HRef.fromString(href),
+                        davxml.PropertyStatus(
+                            davxml.PropertyContainer(
+                                davxml.GETETag.fromString(etag.generate()),
+                                self.xmlDataElementType().fromTextData(changedData),
+                            ),
+                            davxml.Status.fromResponseCode(OK),
+                        )
+                    )
+            else:
+                xmlresponses[index] = davxml.StatusResponse(
+                        davxml.HRef.fromString(href),
+                        davxml.Status.fromResponseCode(code),
+                        davxml.Error(
+                            WebDAVUnknownElement.withName(*error),
+                        ) if error else None,
+                    )
 
 
     @inlineCallbacks
-    def crudDelete(self, request, href, ifmatch, xmlresponses, hasPrivilege):
-        code = None
-        error = None
-        try:
-            if isinstance(hasPrivilege, HTTPError):
-                raise hasPrivilege
+    def crudDelete(self, crudDeleteInfo, request, xmlresponses):
 
-            deleteResource = (yield request.locateResource(href))
-            if not deleteResource.exists():
-                raise HTTPError(NOT_FOUND)
+        if crudDeleteInfo:
 
-            # Check if match
-            etag = (yield deleteResource.etag())
-            if ifmatch and ifmatch != etag.generate():
-                raise HTTPError(PRECONDITION_FAILED)
+            # Do privilege check on collection once
+            try:
+                yield self.authorize(request, (davxml.Unbind(),))
+                hasPrivilege = True
+            except HTTPError, e:
+                hasPrivilege = e
 
-            yield deleteResource.storeRemove(request)
+            for index, href, ifmatch in crudDeleteInfo:
+                code = None
+                error = None
+                try:
+                    if hasPrivilege is not True:
+                        raise hasPrivilege
 
-        except HTTPError, e:
-            # Extract the pre-condition
-            code = e.response.code
-            if isinstance(e.response, ErrorResponse):
-                error = e.response.error
-                error = (error.namespace, error.name,)
+                    deleteResource = (yield request.locateResource(href))
+                    if not deleteResource.exists():
+                        raise HTTPError(NOT_FOUND)
 
-        except Exception:
-            code = BAD_REQUEST
+                    # Check if match
+                    etag = (yield deleteResource.etag())
+                    if ifmatch and ifmatch != etag.generate():
+                        raise HTTPError(PRECONDITION_FAILED)
 
-        if code is None:
-            xmlresponses.append(
-                davxml.StatusResponse(
-                    davxml.HRef.fromString(href),
-                    davxml.Status.fromResponseCode(OK),
-                )
-            )
-        else:
-            xmlresponses.append(
-                davxml.StatusResponse(
-                    davxml.HRef.fromString(href),
-                    davxml.Status.fromResponseCode(code),
-                    davxml.Error(
-                        WebDAVUnknownElement.withName(*error),
-                    ) if error else None,
-                )
-            )
+                    yield deleteResource.storeRemove(request)
 
+                except HTTPError, e:
+                    # Extract the pre-condition
+                    code = e.response.code
+                    if isinstance(e.response, ErrorResponse):
+                        error = e.response.error
+                        error = (error.namespace, error.name,)
 
+                except Exception:
+                    code = BAD_REQUEST
+
+                if code is None:
+                    xmlresponses[index] = davxml.StatusResponse(
+                        davxml.HRef.fromString(href),
+                        davxml.Status.fromResponseCode(OK),
+                    )
+                else:
+                    xmlresponses[index] = davxml.StatusResponse(
+                        davxml.HRef.fromString(href),
+                        davxml.Status.fromResponseCode(code),
+                        davxml.Error(
+                            WebDAVUnknownElement.withName(*error),
+                        ) if error else None,
+                    )
+
+
     def notifierID(self):
         return "%s/%s" % self._newStoreObject.notifierID()
 
@@ -1163,7 +1162,7 @@
         except InvalidICalendarDataError:
             return None
 
-        by_uid = {}
+        by_uid = collections.OrderedDict()
         by_tzid = {}
         for subcomponent in vcal.subcomponents():
             if subcomponent.name() == "VTIMEZONE":
@@ -1425,7 +1424,7 @@
 
 
     def resourceType(self,):
-        return davxml.ResourceType.dropboxhome  # @UndefinedVariable
+        return davxml.ResourceType.dropboxhome #@UndefinedVariable
 
 
     def listChildren(self):
@@ -1477,7 +1476,7 @@
 
 
     def resourceType(self):
-        return davxml.ResourceType.dropbox  # @UndefinedVariable
+        return davxml.ResourceType.dropbox #@UndefinedVariable
 
 
     @inlineCallbacks
@@ -1627,23 +1626,28 @@
     def sharedDropboxACEs(self):
 
         aces = ()
-        calendars = yield self._newStoreCalendarObject._parentCollection.asShared()
-        for calendar in calendars:
 
+        invites = yield self._newStoreCalendarObject._parentCollection.sharingInvites()
+        for invite in invites:
+
+            # Only want accepted invites
+            if invite.status() != _BIND_STATUS_ACCEPTED:
+                continue
+
             userprivs = [
             ]
-            if calendar.shareMode() in (_BIND_MODE_READ, _BIND_MODE_WRITE,):
+            if invite.mode() in (_BIND_MODE_READ, _BIND_MODE_WRITE,):
                 userprivs.append(davxml.Privilege(davxml.Read()))
                 userprivs.append(davxml.Privilege(davxml.ReadACL()))
                 userprivs.append(davxml.Privilege(davxml.ReadCurrentUserPrivilegeSet()))
-            if calendar.shareMode() in (_BIND_MODE_READ,):
+            if invite.mode() in (_BIND_MODE_READ,):
                 userprivs.append(davxml.Privilege(davxml.WriteProperties()))
-            if calendar.shareMode() in (_BIND_MODE_WRITE,):
+            if invite.mode() in (_BIND_MODE_WRITE,):
                 userprivs.append(davxml.Privilege(davxml.Write()))
             proxyprivs = list(userprivs)
             proxyprivs.remove(davxml.Privilege(davxml.ReadACL()))
 
-            principal = self.principalForUID(calendar._home.uid())
+            principal = self.principalForUID(invite.shareeUID())
             aces += (
                 # Inheritable specific access for the resource's associated principal.
                 davxml.ACE(
@@ -1722,7 +1726,7 @@
 
 
     def resourceType(self,):
-        return davxml.ResourceType.dropboxhome  # @UndefinedVariable
+        return davxml.ResourceType.dropboxhome #@UndefinedVariable
 
 
     def listChildren(self):
@@ -1825,7 +1829,7 @@
 
 
     def resourceType(self,):
-        return davxml.ResourceType.dropbox  # @UndefinedVariable
+        return davxml.ResourceType.dropbox #@UndefinedVariable
 
 
     @inlineCallbacks
@@ -1922,7 +1926,7 @@
 
 
     @inlineCallbacks
-    def _sharedAccessControl(self, calendar, shareMode):
+    def _sharedAccessControl(self, invite):
         """
         Check the shared access mode of this resource, potentially consulting
         an external access method if necessary.
@@ -1938,10 +1942,10 @@
             access control mechanism has dictate the home should no longer have
             any access at all.
         """
-        if shareMode in (_BIND_MODE_DIRECT,):
-            ownerUID = calendar.ownerHome().uid()
+        if invite.mode() in (_BIND_MODE_DIRECT,):
+            ownerUID = invite.ownerUID()
             owner = self.principalForUID(ownerUID)
-            shareeUID = calendar.viewerHome().uid()
+            shareeUID = invite.shareeUID()
             if owner.record.recordType == WikiDirectoryService.recordType_wikis:
                 # Access level comes from what the wiki has granted to the
                 # sharee
@@ -1957,9 +1961,9 @@
                     returnValue(None)
             else:
                 returnValue("original")
-        elif shareMode in (_BIND_MODE_READ,):
+        elif invite.mode() in (_BIND_MODE_READ,):
             returnValue("read-only")
-        elif shareMode in (_BIND_MODE_WRITE,):
+        elif invite.mode() in (_BIND_MODE_WRITE,):
             returnValue("read-write")
         returnValue("original")
 
@@ -1968,19 +1972,23 @@
     def sharedDropboxACEs(self):
 
         aces = ()
-        calendars = yield self._newStoreCalendarObject._parentCollection.asShared()
-        for calendar in calendars:
+        invites = yield self._newStoreCalendarObject._parentCollection.sharingInvites()
+        for invite in invites:
 
+            # Only want accepted invites
+            if invite.status() != _BIND_STATUS_ACCEPTED:
+                continue
+
             privileges = [
                 davxml.Privilege(davxml.Read()),
                 davxml.Privilege(davxml.ReadCurrentUserPrivilegeSet()),
             ]
             userprivs = []
-            access = (yield self._sharedAccessControl(calendar, calendar.shareMode()))
+            access = (yield self._sharedAccessControl(invite))
             if access in ("read-only", "read-write",):
                 userprivs.extend(privileges)
 
-            principal = self.principalForUID(calendar._home.uid())
+            principal = self.principalForUID(invite.shareeUID())
             aces += (
                 # Inheritable specific access for the resource's associated principal.
                 davxml.ACE(
@@ -2896,8 +2904,8 @@
         self._name = addressbook.name() if addressbook else name
 
         if config.EnableBatchUpload:
-            self._postHandlers[("text", "vcard")] = _CommonHomeChildCollectionMixin.simpleBatchPOST
-            self.xmlDocHandlers[customxml.Multiput] = _CommonHomeChildCollectionMixin.crudBatchPOST
+            self._postHandlers[("text", "vcard")] = AddressBookCollectionResource.simpleBatchPOST
+            self.xmlDocHandlers[customxml.Multiput] = AddressBookCollectionResource.crudBatchPOST
 
 
     def __repr__(self):
@@ -2963,7 +2971,186 @@
         return FORBIDDEN
 
 
+    @inlineCallbacks
+    def makeChild(self, name):
+        """
+        call super and provision group share
+        """
+        abObjectResource = yield super(AddressBookCollectionResource, self).makeChild(name)
+        if abObjectResource.exists() and abObjectResource._newStoreObject.shareUID() is not None:
+            abObjectResource = yield self.parentResource().provisionShare(abObjectResource)
+        returnValue(abObjectResource)
 
+
+    @inlineCallbacks
+    def storeRemove(self, request):
+        """
+        Handle removal of a partially shared address book; otherwise call super.
+        """
+        if self.isShareeResource() and self._newStoreObject.shareUID() is None:
+            log.debug("Removing shared collection %s" % (self,))
+            for childname in (yield self.listChildren()):
+                child = (yield request.locateChildResource(self, childname))
+                if child.isShareeResource():
+                    yield child.storeRemove(request)
+
+            returnValue(NO_CONTENT)
+
+        returnValue((yield super(AddressBookCollectionResource, self).storeRemove(request)))
+
+
+    @inlineCallbacks
+    def bulkCreate(self, indexedComponents, request, return_changed, xmlresponses):
+        """
+        Bulk create, allowing groups to contain member UIDs added during the same bulk create.
+        """
+        groupRetries = []
+        coaddedUIDs = set()
+        for index, component in indexedComponents:
+
+            try:
+                # Create a new name if one was not provided
+                name = md5(str(index) + component.resourceUID() + str(time.time()) + request.path).hexdigest() + self.resourceSuffix()
+
+                # Get a resource for the new item
+                newchildURL = joinURL(request.path, name)
+                newchild = (yield request.locateResource(newchildURL))
+                changedData = (yield self.storeResourceData(newchild, component, returnChangedData=return_changed))
+
+            except GroupWithUnsharedAddressNotAllowedError, e:
+                # save off info and try again below
+                missingUIDs = set(e.message)
+                groupRetries.append((index, component, newchildURL, newchild, missingUIDs,))
+
+            except HTTPError, e:
+                # Extract the pre-condition
+                code = e.response.code
+                if isinstance(e.response, ErrorResponse):
+                    error = e.response.error
+                    error = (error.namespace, error.name,)
+
+                xmlresponses[index] = (
+                    yield self.bulkCreateResponse(component, newchildURL, newchild, None, code, error)
+                )
+
+            except Exception:
+                xmlresponses[index] = (
+                    yield self.bulkCreateResponse(component, newchildURL, newchild, None, code=BAD_REQUEST, error=None)
+                )
+
+            else:
+                if not return_changed:
+                    changedData = None
+                coaddedUIDs |= set([component.resourceUID()])
+                xmlresponses[index] = (
+                    yield self.bulkCreateResponse(component, newchildURL, newchild, changedData, code=None, error=None)
+                )
+
+        if groupRetries:
+            # get set of UIDs added
+            coaddedUIDs |= set([groupRetry[1].resourceUID() for groupRetry in groupRetries])
+
+            # check each group add to see if it will succeed if coaddedUIDs are allowed
+            while(True):
+                for groupRetry in groupRetries:
+                    if bool(groupRetry[4] - coaddedUIDs):
+                        break
+                else:
+                    break
+
+                # give FORBIDDEN response
+                index, component, newchildURL, newchild, missingUIDs = groupRetry
+                xmlresponses[index] = (
+                    yield self.bulkCreateResponse(component, newchildURL, newchild, changedData=None, code=FORBIDDEN, error=None)
+                )
+                coaddedUIDs -= set([component.resourceUID()]) # group uid not added
+                groupRetries.remove(groupRetry) # remove this retry
+
+            for index, component, newchildURL, newchild, missingUIDs in groupRetries:
+                # newchild._metadata -> newchild._options during store
+                newchild._metadata["coaddedUIDs"] = coaddedUIDs
+
+                # don't catch errors, abort the whole transaction
+                changedData = yield self.storeResourceData(newchild, component, returnChangedData=return_changed)
+                if not return_changed:
+                    changedData = None
+                xmlresponses[index] = (
+                    yield self.bulkCreateResponse(component, newchildURL, newchild, changedData, code=None, error=None)
+                )
+
+
+    @inlineCallbacks
+    def crudDelete(self, crudDeleteInfo, request, xmlresponses):
+        """
+        Change handling of privileges
+        """
+        if crudDeleteInfo:
+            # Do privilege check on collection once
+            try:
+                yield self.authorize(request, (davxml.Unbind(),))
+                hasPrivilege = True
+            except HTTPError, e:
+                hasPrivilege = e
+
+            for index, href, ifmatch in crudDeleteInfo:
+                code = None
+                error = None
+                try:
+                    deleteResource = (yield request.locateResource(href))
+                    if not deleteResource.exists():
+                        raise HTTPError(NOT_FOUND)
+
+                    # Check if match
+                    etag = (yield deleteResource.etag())
+                    if ifmatch and ifmatch != etag.generate():
+                        raise HTTPError(PRECONDITION_FAILED)
+
+                    #===========================================================
+                    # # If unshared is allowed, deletes fail but crud adds work!
+                    # if (hasPrivilege is not True and not (
+                    #             deleteResource.isShareeResource() or
+                    #             deleteResource._newStoreObject.isGroupForSharedAddressBook()
+                    #         )
+                    #     ):
+                    #     raise hasPrivilege
+                    #===========================================================
+
+                    # don't allow shared group deletion -> unshare
+                    if (deleteResource.isShareeResource() or
+                        deleteResource._newStoreObject.isGroupForSharedAddressBook()):
+                        raise HTTPError(FORBIDDEN)
+
+                    if hasPrivilege is not True:
+                        raise hasPrivilege
+
+                    yield deleteResource.storeRemove(request)
+
+                except HTTPError, e:
+                    # Extract the pre-condition
+                    code = e.response.code
+                    if isinstance(e.response, ErrorResponse):
+                        error = e.response.error
+                        error = (error.namespace, error.name,)
+
+                except Exception:
+                    code = BAD_REQUEST
+
+                if code is None:
+                    xmlresponses[index] = davxml.StatusResponse(
+                        davxml.HRef.fromString(href),
+                        davxml.Status.fromResponseCode(OK),
+                    )
+                else:
+                    xmlresponses[index] = davxml.StatusResponse(
+                        davxml.HRef.fromString(href),
+                        davxml.Status.fromResponseCode(code),
+                        davxml.Error(
+                            WebDAVUnknownElement.withName(*error),
+                        ) if error else None,
+                    )
+
+
+
 class GlobalAddressBookCollectionResource(GlobalAddressBookResource, AddressBookCollectionResource):
     """
     Wrapper around a L{txdav.carddav.iaddressbook.IAddressBook}.
@@ -3030,8 +3217,17 @@
         """
         Remove this address book object
         """
+
         # Handle sharing
-        if self.isShared():
+        if self.isShareeResource():
+            log.debug("Removing shared resource %s" % (self,))
+            yield self.removeShareeResource(request)
+            returnValue(NO_CONTENT)
+        elif self._newStoreObject.isGroupForSharedAddressBook():
+            abCollectionResource = (yield request.locateResource(parentForURL(request.uri)))
+            returnValue((yield abCollectionResource.storeRemove(request)))
+
+        elif self.isShared():
             yield self.downgradeFromShare(request)
 
         response = (
@@ -3131,6 +3327,13 @@
             returnValue(response)
 
         # Handle the various store errors
+        except KindChangeNotAllowedError:
+            raise HTTPError(StatusResponse(
+                FORBIDDEN,
+                "vCard kind may not be changed",)
+            )
+
+        # Handle the various store errors
         except GroupWithUnsharedAddressNotAllowedError:
             raise HTTPError(StatusResponse(
                 FORBIDDEN,
@@ -3148,23 +3351,16 @@
 
     @inlineCallbacks
     def http_DELETE(self, request):
+        """
+        Override http_DELETE to handle shared group deletion without fromParent=[davxml.Unbind()].
+        """
+        if (self.isShareeResource() or
+            self.exists() and self._newStoreObject.isGroupForSharedAddressBook()):
+            returnValue((yield self.storeRemove(request)))
 
-        try:
-            returnValue((yield super(AddressBookObjectResource, self).http_DELETE(request)))
+        returnValue((yield super(AddressBookObjectResource, self).http_DELETE(request)))
 
-        except GroupForSharedAddressBookDeleteNotAllowedError:
-            raise HTTPError(StatusResponse(
-                FORBIDDEN,
-                "Sharee cannot delete the group for a shared address book",)
-            )
 
-        except SharedGroupDeleteNotAllowedError:
-            raise HTTPError(StatusResponse(
-                FORBIDDEN,
-                "Sharee cannot delete a shared group",)
-            )
-
-
     @inlineCallbacks
     def accessControlList(self, request, *a, **kw):
         """
@@ -3193,7 +3389,7 @@
             log.debug("Resource not found: %s" % (self,))
             raise HTTPError(NOT_FOUND)
 
-        if self._newStoreObject.addressbook().owned():
+        if not self._parentResource.isShareeResource():
             returnValue((yield super(AddressBookObjectResource, self).accessControlList(request, *a, **kw)))
 
         # Direct shares use underlying privileges of shared collection
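The bulkCreate change above defers group vCards whose member UIDs are not stored yet, then retries them once the rest of the batch is in, pruning any group whose members still cannot be satisfied. The control flow (a `while`/`for`-`else` loop over the retry list) can be sketched without the Twisted/store machinery; every name below is illustrative, not the server's API:

```python
class MissingMembersError(Exception):
    """Raised when a group references member UIDs that do not exist yet."""
    def __init__(self, missing):
        super(MissingMembersError, self).__init__(missing)
        self.missing = missing


def bulk_create(components, store_one):
    """store_one(component, allowed_uids) stores one vCard, raising
    MissingMembersError(missing) for a group with absent members."""
    results = {}
    retries = []        # (index, component, missing_uids)
    added_uids = set()  # UIDs created so far in this batch

    for index, component in enumerate(components):
        try:
            store_one(component, allowed_uids=added_uids)
        except MissingMembersError as e:
            retries.append((index, component, set(e.missing)))
        else:
            added_uids.add(component.uid)
            results[index] = "created"

    # Assume every deferred group's own UID will be added too, then prune
    # groups whose missing members still cannot be covered by the batch.
    added_uids |= set(c.uid for _, c, _ in retries)
    while True:
        for entry in retries:
            if entry[2] - added_uids:   # members still missing -> reject it
                break
        else:
            break                       # all remaining retries are satisfiable
        index, component, _ = entry
        results[index] = "forbidden"
        added_uids.discard(component.uid)  # its UID is not added after all
        retries.remove(entry)

    for index, component, _ in retries:
        store_one(component, allowed_uids=added_uids)
        results[index] = "created"
    return results
```

The bare `else` on the inner `for` runs only when no `break` fired, i.e. when every remaining retry's missing members are covered by the batch, which is what lets the outer `while` terminate.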

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/test/test_sharing.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/test/test_sharing.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/test/test_sharing.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -24,6 +24,7 @@
 from twistedcaldav import customxml
 from twistedcaldav import sharing
 from twistedcaldav.config import config
+from twistedcaldav.directory.principal import DirectoryCalendarPrincipalResource
 from twistedcaldav.resource import CalDAVResource
 from twistedcaldav.sharing import WikiDirectoryService
 from twistedcaldav.test.test_cache import StubResponseCacheResource
@@ -71,8 +72,10 @@
 
 
 
-class FakePrincipal(object):
+class FakePrincipal(DirectoryCalendarPrincipalResource):
 
+    invalid_names = set()
+
     def __init__(self, cuaddr, test):
         if cuaddr.startswith("mailto:"):
             name = cuaddr[7:].split('@')[0]
@@ -91,6 +94,8 @@
 
     @inlineCallbacks
     def calendarHome(self, request):
+        if self._name in self.invalid_names:
+            returnValue(None)
         a, _ignore_seg = yield self._test.calendarCollection.locateChild(request, ["__uids__"])
         b, _ignore_seg = yield a.locateChild(request, [self._name])
         if b is None:
@@ -215,6 +220,38 @@
         returnValue(resource)
 
 
+    @inlineCallbacks
+    def _doPOSTSharerAccept(self, body, resultcode=responsecode.OK):
+        request = SimpleStoreRequest(self, "POST", "/calendars/__uids__/user02/", content=body, authid="user02")
+        request.headers.setHeader("content-type", MimeType("text", "xml"))
+        response = yield self.send(request)
+        response = IResponse(response)
+        self.assertEqual(response.code, resultcode)
+
+        if response.stream:
+            xmldata = yield allDataFromStream(response.stream)
+            doc = WebDAVDocument.fromString(xmldata)
+            returnValue(doc)
+        else:
+            returnValue(None)
+
+
+    @inlineCallbacks
+    def _getResourceSharer(self, name):
+        request = SimpleStoreRequest(self, "GET", "%s" % (name,))
+        resource = yield request.locateResource("%s" % (name,))
+        returnValue(resource)
+
+
+    def _getUIDElementValue(self, xml):
+
+        for user in xml.children:
+            for element in user.children:
+                if type(element) == customxml.UID:
+                    return element.children[0].data
+        return None
+
+
     def _clearUIDElementValue(self, xml):
 
         for user in xml.children:
@@ -224,6 +261,14 @@
         return xml
 
 
+    def _getHRefElementValue(self, xml):
+
+        for href in xml.root_element.children:
+            if type(href) == davxml.HRef:
+                return href.children[0].data
+        return None
+
+
     @inlineCallbacks
     def test_upgradeToShare(self):
 
@@ -683,7 +728,7 @@
                 davxml.HRef.fromString("urn:uuid:user02"),
                 customxml.CommonName.fromString("user02"),
                 customxml.InviteAccess(customxml.ReadWriteAccess()),
-                customxml.InviteStatusInvalid(),
+                customxml.InviteStatusNoResponse(),
             )
         ))
 
@@ -780,3 +825,129 @@
         access = "no-access"
         childNames = yield listChildrenViaPropfind()
         self.assertNotIn(sharedName, childNames)
+
+
+    @inlineCallbacks
+    def test_POSTDowngradeWithDisabledInvitee(self):
+
+        yield self.resource.upgradeToShare()
+
+        yield self._doPOST("""<?xml version="1.0" encoding="utf-8" ?>
+            <CS:share xmlns:D="DAV:" xmlns:CS="http://calendarserver.org/ns/">
+                <CS:set>
+                    <D:href>mailto:user02 at example.com</D:href>
+                    <CS:summary>My Shared Calendar</CS:summary>
+                    <CS:read-write/>
+                </CS:set>
+            </CS:share>
+            """)
+
+        propInvite = (yield self.resource.readProperty(customxml.Invite, None))
+        self.assertEquals(self._clearUIDElementValue(propInvite), customxml.Invite(
+            customxml.InviteUser(
+                customxml.UID.fromString(""),
+                davxml.HRef.fromString("urn:uuid:user02"),
+                customxml.CommonName.fromString("USER02"),
+                customxml.InviteAccess(customxml.ReadWriteAccess()),
+                customxml.InviteStatusNoResponse(),
+            ),
+        ))
+
+        self.patch(FakePrincipal, "invalid_names", set(("user02",)))
+        yield self.resource.downgradeFromShare(norequest())
+
+
+    @inlineCallbacks
+    def test_POSTRemoveWithDisabledInvitee(self):
+
+        yield self.resource.upgradeToShare()
+
+        yield self._doPOST("""<?xml version="1.0" encoding="utf-8" ?>
+            <CS:share xmlns:D="DAV:" xmlns:CS="http://calendarserver.org/ns/">
+                <CS:set>
+                    <D:href>mailto:user02 at example.com</D:href>
+                    <CS:summary>My Shared Calendar</CS:summary>
+                    <CS:read-write/>
+                </CS:set>
+            </CS:share>
+            """)
+
+        propInvite = (yield self.resource.readProperty(customxml.Invite, None))
+        self.assertEquals(self._clearUIDElementValue(propInvite), customxml.Invite(
+            customxml.InviteUser(
+                customxml.UID.fromString(""),
+                davxml.HRef.fromString("urn:uuid:user02"),
+                customxml.CommonName.fromString("USER02"),
+                customxml.InviteAccess(customxml.ReadWriteAccess()),
+                customxml.InviteStatusNoResponse(),
+            ),
+        ))
+
+        self.patch(FakePrincipal, "invalid_names", set(("user02",)))
+
+        yield self._doPOST("""<?xml version="1.0" encoding="utf-8" ?>
+            <CS:share xmlns:D="DAV:" xmlns:CS="http://calendarserver.org/ns/">
+                <CS:remove>
+                    <D:href>mailto:user02 at example.com</D:href>
+                </CS:remove>
+            </CS:share>
+            """)
+
+        isShared = self.resource.isShared()
+        self.assertFalse(isShared)
+
+        propInvite = (yield self.resource.readProperty(customxml.Invite, None))
+        self.assertEquals(propInvite, None)
+
+
+    @inlineCallbacks
+    def test_POSTShareeRemoveWithDisabledSharer(self):
+
+        yield self.resource.upgradeToShare()
+
+        yield self._doPOST("""<?xml version="1.0" encoding="utf-8" ?>
+            <CS:share xmlns:D="DAV:" xmlns:CS="http://calendarserver.org/ns/">
+                <CS:set>
+                    <D:href>mailto:user02 at example.com</D:href>
+                    <CS:summary>My Shared Calendar</CS:summary>
+                    <CS:read-write/>
+                </CS:set>
+            </CS:share>
+            """)
+
+        propInvite = (yield self.resource.readProperty(customxml.Invite, None))
+        uid = self._getUIDElementValue(propInvite)
+        self.assertEquals(self._clearUIDElementValue(propInvite), customxml.Invite(
+            customxml.InviteUser(
+                customxml.UID.fromString(""),
+                davxml.HRef.fromString("urn:uuid:user02"),
+                customxml.CommonName.fromString("USER02"),
+                customxml.InviteAccess(customxml.ReadWriteAccess()),
+                customxml.InviteStatusNoResponse(),
+            ),
+        ))
+
+        result = (yield self._doPOSTSharerAccept("""<?xml version='1.0' encoding='UTF-8'?>
+            <invite-reply xmlns='http://calendarserver.org/ns/'>
+              <href xmlns='DAV:'>mailto:user01 at example.com</href>
+              <invite-accepted/>
+              <hosturl>
+                <href xmlns='DAV:'>/calendars/__uids__/user01/calendar/</href>
+              </hosturl>
+              <in-reply-to>%s</in-reply-to>
+              <summary>The Shared Calendar</summary>
+              <common-name>User 02</common-name>
+              <first-name>user</first-name>
+              <last-name>02</last-name>
+            </invite-reply>
+            """ % (uid,))
+        )
+        href = self._getHRefElementValue(result) + "/"
+
+        self.patch(FakePrincipal, "invalid_names", set(("user01",)))
+
+        resource = (yield self._getResourceSharer(href))
+        yield resource.removeShareeResource(SimpleStoreRequest(self, "DELETE", href))
+
+        resource = (yield self._getResourceSharer(href))
+        self.assertFalse(resource.exists())
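The `FakePrincipal.invalid_names` hook patched via `self.patch(...)` in these tests simulates a disabled account whose home lookup returns `None`. The same class-attribute patching works with the stdlib alone; this is a minimal standalone sketch, not the server's test harness:

```python
import unittest
from unittest import mock


class FakePrincipal:
    """Illustrative stand-in; not the server's FakePrincipal."""
    invalid_names = set()   # class-level set of disabled account names

    def __init__(self, name):
        self._name = name

    def calendar_home(self):
        # a disabled principal has no home, mirroring calendarHome() -> None
        if self._name in self.invalid_names:
            return None
        return "/calendars/__uids__/%s/" % (self._name,)


class DisabledPrincipalTest(unittest.TestCase):
    def test_disabled_user_has_no_home(self):
        with mock.patch.object(FakePrincipal, "invalid_names", {"user02"}):
            self.assertIsNone(FakePrincipal("user02").calendar_home())
            self.assertEqual(FakePrincipal("user01").calendar_home(),
                             "/calendars/__uids__/user01/")
        # the class attribute is restored when the patch exits
        self.assertEqual(FakePrincipal.invalid_names, set())
```

Because the attribute lives on the class, a single patch affects every instance created during the test, which is exactly why `self.patch(FakePrincipal, "invalid_names", ...)` works here.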

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/test/test_timezones.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/test/test_timezones.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/test/test_timezones.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -24,6 +24,7 @@
 
 import os
 import threading
+from twisted.python.failure import Failure
 
 class TimezoneProblemTest (twistedcaldav.test.util.TestCase):
     """
@@ -286,12 +287,13 @@
         self.patch(config, "UsePackageTimezones", False)
         TimezoneCache.clear()
 
-        ex = [False, False]
+        ex = [None, None]
         def _try(n):
             try:
                 TimezoneCache.create()
             except:
-                ex[n] = True
+                f = Failure()
+                ex[n] = str(f)
 
         t1 = threading.Thread(target=_try, args=(0,))
         t2 = threading.Thread(target=_try, args=(1,))
@@ -300,8 +302,8 @@
         t1.join()
         t2.join()
 
-        self.assertFalse(ex[0])
-        self.assertFalse(ex[1])
+        self.assertTrue(ex[0] is None, msg=ex[0])
+        self.assertTrue(ex[1] is None, msg=ex[1])
 
         self.assertTrue(os.path.exists(os.path.join(config.DataRoot, "zoneinfo")))
         self.assertTrue(os.path.exists(os.path.join(config.DataRoot, "zoneinfo", "America", "New_York.ics")))
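The test_timezones change swaps a per-thread boolean flag for the captured `Failure` text, so a failing assertion reports the actual exception rather than just `True`. A stdlib-only version of the same pattern:

```python
import threading
import traceback


def run_in_threads(fn, count=2):
    """Run fn concurrently in `count` threads; return one traceback string
    per thread, or None where the thread succeeded."""
    errors = [None] * count

    def _try(n):
        try:
            fn()
        except Exception:
            errors[n] = traceback.format_exc()  # keep the full traceback text

    threads = [threading.Thread(target=_try, args=(i,)) for i in range(count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return errors
```

A test then asserts `errors[i] is None` with `msg=errors[i]`, so when something does go wrong the failure message is the real traceback.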

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/test/test_wrapping.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/test/test_wrapping.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/test/test_wrapping.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -380,7 +380,7 @@
         if not hasattr(self._sqlCalendarStore, "_dropbox_ok"):
             self._sqlCalendarStore._dropbox_ok = False
         self.patch(self._sqlCalendarStore, "_dropbox_ok", True)
-        self.patch(Calendar, "asShared", lambda self: [])
+        self.patch(Calendar, "sharingInvites", lambda self: [])
 
         yield self.populateOneObject("1.ics", test_event_text)
         calendarObject = yield self.getResource(

Modified: CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/upgrade.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/upgrade.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/twistedcaldav/upgrade.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -38,7 +38,6 @@
 
 from twistedcaldav import caldavxml
 from twistedcaldav.directory import calendaruserproxy
-from twistedcaldav.directory.appleopendirectory import OpenDirectoryService
 from twistedcaldav.directory.calendaruserproxyloader import XMLCalendarUserProxyLoader
 from twistedcaldav.directory.directory import DirectoryService
 from twistedcaldav.directory.directory import GroupMembershipCacheUpdater
@@ -62,12 +61,10 @@
 from twisted.protocols.amp import AMP, Command, String, Boolean
 
 from calendarserver.tap.util import getRootResource, FakeRequest, directoryFromConfig
-from calendarserver.tools.resources import migrateResources
 from calendarserver.tools.util import getDirectory
 
 from txdav.caldav.datastore.scheduling.imip.mailgateway import migrateTokensToStore
 
-
 deadPropertyXattrPrefix = namedAny(
     "txdav.base.propertystore.xattr.PropertyStore.deadPropertyXattrPrefix"
 )
@@ -912,6 +909,12 @@
     #
     # Migrates locations and resources from OD
     #
+    try:
+        from twistedcaldav.directory.appleopendirectory import OpenDirectoryService
+        from calendarserver.tools.resources import migrateResources
+    except ImportError:
+        return succeed(None)
+
     log.warn("Migrating locations and resources")
 
     userService = directory.serviceForRecordType("users")
@@ -1044,6 +1047,7 @@
                     directory,
                     self.config.GroupCaching.UpdateSeconds,
                     self.config.GroupCaching.ExpireSeconds,
+                    self.config.GroupCaching.LockSeconds,
                     namespace=self.config.GroupCaching.MemcachedPool,
                     useExternalProxies=self.config.GroupCaching.UseExternalProxies)
                 yield updater.updateCache(fast=True)
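Moving the OpenDirectory imports inside the migration function turns that upgrade step into a no-op on platforms where those modules do not exist. The general shape of the optional-import guard, sketched with a hypothetical module name:

```python
def migrate_resources(directory):
    """Migrate locations/resources; a no-op where the OD modules are absent."""
    try:
        # platform-only modules -- hypothetical names for this sketch
        from opendirectory_backend import OpenDirectoryService, migrate
    except ImportError:
        return None  # e.g. a non-OS X build: silently skip this upgrade step
    return migrate(directory, OpenDirectoryService)
```

Keeping the import at function scope also means the module is only paid for when the migration actually runs, not at process startup.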

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/base/datastore/subpostgres.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/base/datastore/subpostgres.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/base/datastore/subpostgres.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -21,6 +21,8 @@
 
 import os
 import pwd
+import re
+import signal
 
 from hashlib import md5
 
@@ -250,6 +252,7 @@
         self._pgCtl = pgCtl
         self._initdb = initDB
         self._reactor = reactor
+        self._postgresPid = None
 
 
     @property
@@ -363,7 +366,7 @@
 
         if self.shutdownDeferred is None:
             # Only continue startup if we've not begun shutdown
-            self.subServiceFactory(self.produceConnection).setServiceParent(self)
+            self.subServiceFactory(self.produceConnection, self).setServiceParent(self)
 
 
     def pauseMonitor(self):
@@ -402,22 +405,6 @@
             createDatabaseCursor.execute("commit")
             return createDatabaseConn, createDatabaseCursor
 
-        # TODO: always go through pg_ctl start
-        try:
-            createDatabaseConn, createDatabaseCursor = createConnection()
-        except pgdb.DatabaseError:
-            # We could not connect the database, so attempt to start it
-            pass
-        except Exception, e:
-            # Some other unexpected error is preventing us from connecting
-            # to the database
-            log.warn("Failed to connect to Postgres: {e}", e=e)
-        else:
-            # Database is running, so just use our connection
-            self.ready(createDatabaseConn, createDatabaseCursor)
-            self.deactivateDelayedShutdown()
-            return
-
         monitor = _PostgresMonitor(self)
         pgCtl = self.pgCtl()
         # check consistency of initdb and postgres?
@@ -452,15 +439,37 @@
             uid=self.uid, gid=self.gid,
         )
         self.monitor = monitor
+
+        def gotStatus(result):
+            """
+            Grab the postgres pid from the pgCtl status call in case we need
+            to kill it directly later on in hardStop().  Useful in conjunction
+            with the DataStoreMonitor so we can shut down if DataRoot has been
+            removed/renamed/unmounted.
+            """
+            reResult = re.search("PID: (\d+)\D", result)
+            if reResult != None:
+                self._postgresPid = int(reResult.group(1))
+            self.ready(*createConnection())
+            self.deactivateDelayedShutdown()
+
         def gotReady(result):
             log.warn("{cmd} exited", cmd=pgCtl)
             self.shouldStopDatabase = True
-            self.ready(*createConnection())
-            self.deactivateDelayedShutdown()
+            d = Deferred()
+            statusMonitor = CapturingProcessProtocol(d, None)
+            self.reactor.spawnProcess(
+                statusMonitor, pgCtl, [pgCtl, "status"],
+                env=self.env, path=self.workingDir.path,
+                uid=self.uid, gid=self.gid,
+            )
+            d.addCallback(gotStatus)
+
         def reportit(f):
             log.failure("starting postgres", f)
             self.deactivateDelayedShutdown()
             self.reactor.stop()
+            
         self.monitor.completionDeferred.addCallback(
             gotReady).addErrback(reportit)
 
@@ -539,3 +548,13 @@
 #            return result
 #        d.addCallback(maybeStopSubprocess)
 #        return d
+
+    def hardStop(self):
+        """
+        Stop postgres quickly by sending it SIGQUIT
+        """
+        if self._postgresPid is not None:
+            try:
+                os.kill(self._postgresPid, signal.SIGQUIT)
+            except OSError: 
+                pass
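The new `hardStop()` path depends on scraping the postgres PID out of the `pg_ctl status` output captured in `gotStatus()`, then sending SIGQUIT (postgres "immediate shutdown"). Both pieces can be exercised standalone; this sketch mirrors the regex and the guarded kill:

```python
import os
import re
import signal


def pid_from_status(output):
    """Extract the server PID from `pg_ctl status` output, or None.
    The trailing \\D mirrors the patch: a non-digit must follow the PID."""
    m = re.search(r"PID: (\d+)\D", output)
    return int(m.group(1)) if m else None


def hard_stop(pid):
    """Send SIGQUIT; ignore the race where the process has already exited."""
    if pid is not None:
        try:
            os.kill(pid, signal.SIGQUIT)
        except OSError:
            pass
```

Caching the PID at startup (as `_postgresPid` does) is what makes the later kill possible even when the DataRoot has been unmounted and `pg_ctl` itself can no longer run.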

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/base/datastore/test/test_subpostgres.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/base/datastore/test/test_subpostgres.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/base/datastore/test/test_subpostgres.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -22,17 +22,13 @@
 
 # NOTE: This import will fail eventually when this functionality is added to
 # MemoryReactor:
-from twisted.runner.test.test_procmon import DummyProcessReactor
 
-from twisted.python.filepath import FilePath
 from twext.python.filepath import CachingFilePath
 
 from txdav.base.datastore.subpostgres import PostgresService
 from twisted.internet.defer import inlineCallbacks, Deferred
 from twisted.application.service import Service
 
-import pgdb
-
 class SubprocessStartup(TestCase):
     """
     Tests for starting and stopping the subprocess.
@@ -53,7 +49,7 @@
             instances = []
             ready = Deferred()
 
-            def __init__(self, connectionFactory):
+            def __init__(self, connectionFactory, storageService):
                 self.connection = connectionFactory()
                 test.addCleanup(self.connection.close)
                 self.instances.append(self)
@@ -104,7 +100,7 @@
             instances = []
             ready = Deferred()
 
-            def __init__(self, connectionFactory):
+            def __init__(self, connectionFactory, storageService):
                 self.connection = connectionFactory()
                 test.addCleanup(self.connection.close)
                 self.instances.append(self)
@@ -156,7 +152,7 @@
             instances = []
             ready = Deferred()
 
-            def __init__(self, connectionFactory):
+            def __init__(self, connectionFactory, storageService):
                 self.connection = connectionFactory()
                 test.addCleanup(self.connection.close)
                 self.instances.append(self)
@@ -195,73 +191,3 @@
         self.assertEquals(values, [["value1"], ["value2"]])
 
 
-    def test_startDatabaseRunning(self):
-        """ Ensure that if we can connect to postgres we don't spawn pg_ctl """
-
-        self.cursorHistory = []
-
-        class DummyCursor(object):
-            def __init__(self, historyHolder):
-                self.historyHolder = historyHolder
-
-            def execute(self, *args):
-                self.historyHolder.cursorHistory.append(args)
-
-            def close(self):
-                pass
-
-        class DummyConnection(object):
-            def __init__(self, historyHolder):
-                self.historyHolder = historyHolder
-
-            def cursor(self):
-                return DummyCursor(self.historyHolder)
-
-            def commit(self):
-                pass
-
-            def close(self):
-                pass
-
-        def produceConnection(*args):
-            return DummyConnection(self)
-
-        dummyReactor = DummyProcessReactor()
-        svc = PostgresService(
-            FilePath("postgres_4.pgdb"),
-            lambda x : Service(),
-            "",
-             reactor=dummyReactor,
-        )
-        svc.produceConnection = produceConnection
-        svc.env = {}
-        svc.startDatabase()
-        self.assertEquals(
-            self.cursorHistory,
-            [
-                ('commit',),
-                ("create database subpostgres with encoding 'UTF8'",),
-                ('',)
-            ]
-        )
-        self.assertEquals(dummyReactor.spawnedProcesses, [])
-
-
-    def test_startDatabaseNotRunning(self):
-        """ Ensure that if we can't connect to postgres we spawn pg_ctl """
-
-        def produceConnection(*args):
-            raise pgdb.DatabaseError
-
-        dummyReactor = DummyProcessReactor()
-        svc = PostgresService(
-            FilePath("postgres_4.pgdb"),
-            lambda x : Service(),
-            "",
-             reactor=dummyReactor,
-        )
-        svc.produceConnection = produceConnection
-        svc.env = {}
-        svc.startDatabase()
-        self.assertEquals(len(dummyReactor.spawnedProcesses), 1)
-        self.assertTrue(dummyReactor.spawnedProcesses[0]._executable.endswith("pg_ctl"))

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/base/propertystore/sql.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/base/propertystore/sql.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/base/propertystore/sql.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -86,15 +86,15 @@
         def _cache_user_props(uid):
 
             # First check whether uid already has a valid cached entry
-            valid_cached_users = yield self._cacher.get(str(self._resourceID))
-            if valid_cached_users is None:
-                valid_cached_users = set()
+            rows = None
+            if self._cacher is not None:
+                valid_cached_users = yield self._cacher.get(str(self._resourceID))
+                if valid_cached_users is None:
+                    valid_cached_users = set()
 
-            # Fetch cached user data if valid and present
-            if uid in valid_cached_users:
-                rows = yield self._cacher.get(self._cacheToken(uid))
-            else:
-                rows = None
+                # Fetch cached user data if valid and present
+                if uid in valid_cached_users:
+                    rows = yield self._cacher.get(self._cacheToken(uid))
 
             # If no cached data, fetch from SQL DB and cache
             if rows is None:
@@ -103,11 +103,12 @@
                     resourceID=self._resourceID,
                     viewerID=uid,
                 )
-                yield self._cacher.set(self._cacheToken(uid), rows if rows is not None else ())
+                if self._cacher is not None:
+                    yield self._cacher.set(self._cacheToken(uid), rows if rows is not None else ())
 
-                # Mark this uid as valid
-                valid_cached_users.add(uid)
-                yield self._cacher.set(str(self._resourceID), valid_cached_users)
+                    # Mark this uid as valid
+                    valid_cached_users.add(uid)
+                    yield self._cacher.set(str(self._resourceID), valid_cached_users)
 
             for name, value in rows:
                 self._cached[(name, uid)] = value
@@ -129,6 +130,8 @@
         super(PropertyStore, self).__init__(defaultuser, shareUser)
         self._txn = txn
         self._resourceID = resourceID
+        if not self._txn.store().queryCachingEnabled():
+            self._cacher = None
         self._cached = {}
         if not created:
             yield self._refresh(txn)
@@ -305,7 +308,8 @@
                 yield self._insertQuery.on(
                     txn, resourceID=self._resourceID, value=value_str,
                     name=key_str, uid=uid)
-            self._cacher.delete(self._cacheToken(uid))
+            if self._cacher is not None:
+                self._cacher.delete(self._cacheToken(uid))
 
         # Call the registered notification callback - we need to do this as a preCommit since it involves
         # a bunch of deferred operations, but this propstore api is not deferred. preCommit will execute
@@ -337,7 +341,8 @@
                                  resourceID=self._resourceID,
                                  name=key_str, uid=uid
                                 )
-            self._cacher.delete(self._cacheToken(uid))
+            if self._cacher is not None:
+                self._cacher.delete(self._cacheToken(uid))
 
         # Call the registered notification callback - we need to do this as a preCommit since it involves
         # a bunch of deferred operations, but this propstore api is not deferred. preCommit will execute
@@ -368,7 +373,8 @@
         yield self._deleteResourceQuery.on(self._txn, resourceID=self._resourceID)
 
         # Invalidate entire set of cached per-user data for this resource
-        self._cacher.delete(str(self._resourceID))
+        if self._cacher is not None:
+            self._cacher.delete(str(self._resourceID))
 
 
     @inlineCallbacks
@@ -392,5 +398,6 @@
 
         # Invalidate entire set of cached per-user data for this resource and reload
         self._cached = {}
-        self._cacher.delete(str(self._resourceID))
+        if self._cacher is not None:
+            self._cacher.delete(str(self._resourceID))
         yield self._refresh(self._txn)

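The propertystore change above gates every cacher call on `self._cacher is not None` so the store keeps working when query caching is disabled. A minimal standalone sketch of that pattern (not the CalendarServer API - `SimpleCache` and `PropertyStoreSketch` are hypothetical stand-ins for the memcache-backed cacher and the SQL-backed store):

```python
class SimpleCache:
    """Hypothetical stand-in for the memcache-backed per-user cacher."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

    def delete(self, key):
        self._data.pop(key, None)


class PropertyStoreSketch:
    """Every cache operation is guarded so cacher=None disables caching."""
    def __init__(self, cacher=None):
        self._cacher = cacher          # None means query caching is off
        self._db = {}                  # stands in for the SQL property table

    def _cacheToken(self, uid):
        return "props:%s" % (uid,)

    def setProperty(self, uid, name, value):
        self._db[(name, uid)] = value
        # Invalidate the per-user cache entry only when caching is enabled
        if self._cacher is not None:
            self._cacher.delete(self._cacheToken(uid))

    def getProperty(self, uid, name):
        rows = None
        if self._cacher is not None:
            rows = self._cacher.get(self._cacheToken(uid))
        if rows is None:
            # No cached data: fall back to the "database" and repopulate
            rows = [(n, v) for (n, u), v in self._db.items() if u == uid]
            if self._cacher is not None:
                self._cacher.set(self._cacheToken(uid), rows)
        for n, v in rows:
            if n == name:
                return v
        return None
```

The design choice mirrored here is that reads and writes never branch on a global flag; each call site checks the cacher itself, so a `None` cacher simply degrades to always hitting the database.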
Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/base/propertystore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/base/propertystore/test/test_sql.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/base/propertystore/test/test_sql.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -269,5 +269,38 @@
         self.assertEqual(len(store1_user1._cached), 0)
         self.assertFalse("SQL.props:10/user01" in store1_user1._cacher._memcacheProtocol._cache)
 
+
+    @inlineCallbacks
+    def test_cacher_off(self):
+        """
+        Test that properties can still be read and written when the cacher is disabled.
+        """
+
+        self.patch(self.store, "queryCacher", None)
+
+        # Existing store - add a normal property
+        self.assertFalse("SQL.props:10/user01" in PropertyStore._cacher._memcacheProtocol._cache)
+        store1_user1 = yield PropertyStore.load("user01", None, self._txn, 10)
+        self.assertFalse("SQL.props:10/user01" in PropertyStore._cacher._memcacheProtocol._cache)
+
+        pname1 = propertyName("dummy1")
+        pvalue1 = propertyValue("*")
+
+        yield store1_user1.__setitem__(pname1, pvalue1)
+        self.assertEqual(store1_user1[pname1], pvalue1)
+
+        self.assertEqual(len(store1_user1._cached), 1)
+        self.assertFalse("SQL.props:10/user01" in PropertyStore._cacher._memcacheProtocol._cache)
+
+        yield self._txn.commit()
+        self._txn = self.store.newTransaction()
+
+        # Existing store - check a normal property
+        self.assertFalse("SQL.props:10/user01" in PropertyStore._cacher._memcacheProtocol._cache)
+        store1_user1 = yield PropertyStore.load("user01", None, self._txn, 10)
+        self.assertFalse("SQL.props:10/user01" in PropertyStore._cacher._memcacheProtocol._cache)
+        self.assertEqual(store1_user1[pname1], pvalue1)
+
+
 if PropertyStore is None:
     PropertyStoreTest.skip = importErrorMessage

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/sql.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/sql.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -1047,7 +1047,7 @@
 
 
     @inlineCallbacks
-    def _createCalendarObjectWithNameInternal(self, name, component, internal_state, options=None):
+    def _createCalendarObjectWithNameInternal(self, name, component, internal_state, options=None, split_details=None):
 
         # Create => a new resource name
         if name in self._objects and self._objects[name]:
@@ -1060,7 +1060,7 @@
                 raise TooManyObjectResourcesError()
 
         objectResource = (
-            yield self._objectResourceClass._createInternal(self, name, component, internal_state, options)
+            yield self._objectResourceClass._createInternal(self, name, component, internal_state, options, split_details)
         )
         self._objects[objectResource.name()] = objectResource
         self._objects[objectResource.uid()] = objectResource
@@ -1414,7 +1414,7 @@
 
 
     @classproperty
-    def _moveTimeRangeUpdateQuery(cls):  # @NoSelf
+    def _moveTimeRangeUpdateQuery(cls): #@NoSelf
         """
         DAL query to update a child to be in a new parent.
         """
@@ -1509,7 +1509,7 @@
 
     @classmethod
     @inlineCallbacks
-    def _createInternal(cls, parent, name, component, internal_state, options=None):
+    def _createInternal(cls, parent, name, component, internal_state, options=None, split_details=None):
 
         child = (yield cls.objectWithName(parent, name, None))
         if child:
@@ -1519,7 +1519,7 @@
             raise ObjectResourceNameNotAllowedError(name)
 
         objectResource = cls(parent, name, None, None, options=options)
-        yield objectResource._setComponentInternal(component, inserting=True, internal_state=internal_state)
+        yield objectResource._setComponentInternal(component, inserting=True, internal_state=internal_state, split_details=split_details)
         yield objectResource._loadPropertyStore(created=True)
 
         # Note: setComponent triggers a notification, so we don't need to
@@ -1931,17 +1931,28 @@
 
 
     @inlineCallbacks
-    def doImplicitScheduling(self, component, inserting, internal_state):
+    def doImplicitScheduling(self, component, inserting, internal_state, split_details=None):
 
         new_component = None
         did_implicit_action = False
         is_scheduling_resource = False
         schedule_state = None
 
-        is_internal = internal_state not in (ComponentUpdateState.NORMAL, ComponentUpdateState.ATTACHMENT_UPDATE,)
+        is_internal = internal_state not in (
+            ComponentUpdateState.NORMAL,
+            ComponentUpdateState.ATTACHMENT_UPDATE,
+            ComponentUpdateState.SPLIT_OWNER,
+        )
 
         # Do scheduling
         if not self.calendar().isInbox():
+            # For splitting we are passed a "raw" component - one with the per-user data pieces in it.
+            # We need to filter that down just to the owner's view to do scheduling, but still ensure the
+            # raw component is written out.
+            if split_details is not None:
+                user_uuid = self._parentCollection.viewerHome().uid()
+                component = PerUserDataFilter(user_uuid).filter(component.duplicate())
+
             scheduler = ImplicitScheduler()
 
             # PUT
@@ -1961,7 +1972,7 @@
                         "Sharee's cannot schedule",
                     )
 
-                new_calendar = (yield scheduler.doImplicitScheduling(self.schedule_tag_match))
+                new_calendar = (yield scheduler.doImplicitScheduling(self.schedule_tag_match, split_details))
                 if new_calendar:
                     if isinstance(new_calendar, int):
                         returnValue(new_calendar)
@@ -2076,7 +2087,7 @@
 
 
     @inlineCallbacks
-    def _setComponentInternal(self, component, inserting=False, internal_state=ComponentUpdateState.NORMAL, smart_merge=False):
+    def _setComponentInternal(self, component, inserting=False, internal_state=ComponentUpdateState.NORMAL, smart_merge=False, split_details=None):
         """
         Setting the component internally to the store itself. This will bypass a whole bunch of data consistency checks
         on the assumption that those have been done prior to the component data being provided, provided the flag is set.
@@ -2087,9 +2098,9 @@
         self.schedule_tag_match = not self.calendar().isInbox() and internal_state == ComponentUpdateState.NORMAL and smart_merge
         schedule_state = None
 
-        if internal_state == ComponentUpdateState.SPLIT:
+        if internal_state in (ComponentUpdateState.SPLIT_OWNER, ComponentUpdateState.SPLIT_ATTENDEE,):
             # When splitting, some state from the previous resource needs to be properly
-            # preserved in thus new one when storing the component. Since we don't do the "full"
+            # preserved in the new one when storing the component. Since we don't do the "full"
             # store here, we need to add the explicit pieces we need for state preservation.
 
             # Check access
@@ -2101,6 +2112,10 @@
 
             managed_copied, managed_removed = (yield self.resourceCheckAttachments(component, inserting))
 
+            # Do scheduling only for owner split
+            if internal_state == ComponentUpdateState.SPLIT_OWNER:
+                yield self.doImplicitScheduling(component, inserting, internal_state, split_details)
+
             self.isScheduleObject = True
             self.processScheduleTags(component, inserting, internal_state)
 
@@ -2165,7 +2180,11 @@
         yield self.updateDatabase(component, inserting=inserting)
 
         # Post process managed attachments
-        if internal_state in (ComponentUpdateState.NORMAL, ComponentUpdateState.SPLIT):
+        if internal_state in (
+            ComponentUpdateState.NORMAL,
+            ComponentUpdateState.SPLIT_OWNER,
+            ComponentUpdateState.SPLIT_ATTENDEE,
+        ):
             if managed_copied:
                 yield self.copyResourceAttachments(managed_copied)
             if managed_removed:
@@ -2179,7 +2198,7 @@
         yield self._calendar.notifyChanged()
 
         # Finally check if a split is needed
-        if internal_state != ComponentUpdateState.SPLIT and schedule_state == "organizer":
+        if internal_state not in (ComponentUpdateState.SPLIT_OWNER, ComponentUpdateState.SPLIT_ATTENDEE,) and schedule_state == "organizer":
             yield self.checkSplit()
 
         returnValue(self._componentChanged)
@@ -2613,7 +2632,7 @@
 
 
     @classproperty
-    def _recurrenceMinMaxByIDQuery(cls):  # @NoSelf
+    def _recurrenceMinMaxByIDQuery(cls): #@NoSelf
         """
         DAL query to load RECURRANCE_MIN, RECURRANCE_MAX via an object's resource ID.
         """
@@ -2647,7 +2666,7 @@
 
 
     @classproperty
-    def _instanceQuery(cls):  # @NoSelf
+    def _instanceQuery(cls): #@NoSelf
         """
         DAL query to load TIME_RANGE data via an object's resource ID.
         """
@@ -3282,59 +3301,92 @@
 
 
     @inlineCallbacks
-    def split(self):
+    def split(self, onlyThis=False, rid=None, olderUID=None):
         """
         Split this and all matching UID calendar objects as per L{iCalSplitter}.
+
+        We need to handle scheduling with non-hosted users here. Here is what we will do:
+
+        1) Send an iTIP message for the original event (in its now future-truncated state) and
+        include a special X- parameter in the iTIP message to indicate a split was done and
+        what the RECURRENCE-ID was where the split was made. This will allow "smart" clients/servers
+        to spot the split action and apply that locally upon receipt and processing of the iTIP
+        message. That way they get to preserve the existing per-user data for the old instances. Other
+        clients/servers will just apply the change via normal iTIP processing.
+
+        2) Send an iTIP message for the new event (which will be for the old instances). "Smart"
+        clients that already got and processed the message from #1 will simply apply this on top
+        of their split copy - it should be identical, part from per-user data, so it will apply
+        cleanly. We can include an X- headers to indicate the split R-ID so "smart" clients/servers
+        can simply ignore this message.
         """
 
         # First job is to grab a UID lock on this entire series of events
         yield NamedLock.acquire(self._txn, "ImplicitUIDLock:%s" % (hashlib.md5(self._uid).hexdigest(),))
 
         # Find all other calendar objects on this server with the same UID
-        resources = (yield CalendarStoreFeatures(self._txn._store).calendarObjectsWithUID(self._txn, self._uid))
+        if onlyThis:
+            resources = ()
+        else:
+            resources = (yield CalendarStoreFeatures(self._txn._store).calendarObjectsWithUID(self._txn, self._uid))
 
         splitter = iCalSplitter(config.Scheduling.Options.Splitting.Size, config.Scheduling.Options.Splitting.PastDays)
 
         # Determine the recurrence-id of the split and create a new UID for it
         calendar = (yield self.component())
-        rid = splitter.whereSplit(calendar)
-        newUID = str(uuid.uuid4())
+        if rid is None:
+            rid = splitter.whereSplit(calendar)
+        newerUID = calendar.resourceUID()
+        if olderUID is None:
+            olderUID = str(uuid.uuid4())
 
         # Now process this resource, but do implicit scheduling for attendees not hosted on this server.
         # We need to do this before processing attendee copies.
-        calendar_old = splitter.split(calendar, rid=rid, newUID=newUID)
+        calendar_old, calendar_new = splitter.split(calendar, rid=rid, olderUID=olderUID)
+        calendar_new.bumpiTIPInfo(oldcalendar=calendar, doSequence=True)
+        calendar_old.bumpiTIPInfo(oldcalendar=None, doSequence=True)
 
+        # If the split results in nothing in either resource, then there is really nothing
+        # to actually split
+        if calendar_new.mainType() is None or calendar_old.mainType() is None:
+            returnValue(None)
+
         # Store changed data
-        if calendar.mainType() is not None:
-            yield self._setComponentInternal(calendar, internal_state=ComponentUpdateState.SPLIT)
-        else:
-            yield self._removeInternal(internal_state=ComponentUpdateState.SPLIT)
-        if calendar_old.mainType() is not None:
-            yield self.calendar()._createCalendarObjectWithNameInternal("%s.ics" % (newUID,), calendar_old, ComponentUpdateState.SPLIT)
+        yield self._setComponentInternal(calendar_new, internal_state=ComponentUpdateState.SPLIT_OWNER, split_details=(rid, olderUID, True,))
+        yield self.calendar()._createCalendarObjectWithNameInternal("%s.ics" % (olderUID,), calendar_old, ComponentUpdateState.SPLIT_OWNER, split_details=(rid, newerUID, False,))
 
         # Split each one - but not this resource
         for resource in resources:
             if resource._resourceID == self._resourceID:
                 continue
-            ical = (yield resource.component())
-            ical_old = splitter.split(ical, rid=rid, newUID=newUID)
+            yield resource.splitForAttendee(rid, olderUID)
 
-            # Store changed data
-            if ical.mainType() is not None:
-                yield resource._setComponentInternal(ical, internal_state=ComponentUpdateState.SPLIT)
-            else:
-                # The split removed all components from this object - remove it
-                yield resource._removeInternal(internal_state=ComponentUpdateState.SPLIT)
+        returnValue(olderUID)
 
-            # Create a new resource and store its data (but not if the parent is "inbox", or if it is empty)
-            if not resource.calendar().isInbox() and ical_old.mainType() is not None:
-                yield resource.calendar()._createCalendarObjectWithNameInternal("%s.ics" % (newUID,), ical_old, ComponentUpdateState.SPLIT)
 
-        # TODO: scheduling currently turned off until we figure out how to properly do that
+    @inlineCallbacks
+    def splitForAttendee(self, rid=None, olderUID=None):
+        """
+        Split this attendee resource as per L{split}.
+        """
+        splitter = iCalSplitter(config.Scheduling.Options.Splitting.Size, config.Scheduling.Options.Splitting.PastDays)
+        ical = (yield self.component())
+        ical_old, ical_new = splitter.split(ical, rid=rid, olderUID=olderUID)
+        ical_new.bumpiTIPInfo(oldcalendar=ical, doSequence=True)
+        ical_old.bumpiTIPInfo(oldcalendar=None, doSequence=True)
 
-        returnValue(newUID)
+        # Store changed data
+        if ical_new.mainType() is not None:
+            yield self._setComponentInternal(ical_new, internal_state=ComponentUpdateState.SPLIT_ATTENDEE)
+        else:
+            # The split removed all components from this object - remove it
+            yield self._removeInternal(internal_state=ComponentRemoveState.INTERNAL)
 
+        # Create a new resource and store its data (but not if the parent is "inbox", or if it is empty)
+        if not self.calendar().isInbox() and ical_old.mainType() is not None:
+            yield self.calendar()._createCalendarObjectWithNameInternal("%s.ics" % (olderUID,), ical_old, ComponentUpdateState.SPLIT_ATTENDEE)
 
+
     class CalendarObjectSplitterWork(WorkItem, fromTable(schema.CALENDAR_OBJECT_SPLITTER_WORK)):
 
         group = property(lambda self: "CalendarObjectSplitterWork:%s" % (self.resourceID,))
@@ -3589,6 +3641,13 @@
         Remove the actual file and up to attachment parent directory if empty.
         """
         self._path.remove()
+        self.removeParentPaths()
+
+
+    def removeParentPaths(self):
+        """
+        Remove up to attachment parent directory if empty.
+        """
         parent = self._path.parent()
         toppath = self._attachmentPathRoot().path
         while parent.path != toppath:
@@ -3833,6 +3892,7 @@
         oldpath = self._path
         newpath = mattach._path
         oldpath.moveTo(newpath)
+        self.removeParentPaths()
 
         returnValue(mattach)
 

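The reworked `split()` above cuts a recurring event at a RECURRENCE-ID: instances before the split point move to a new resource (the `olderUID`), the rest stay under the original UID, and the whole operation is skipped if either side would be empty. A toy sketch of that partitioning, with instances modelled as plain `(recurrence_id, data)` tuples rather than iCalendar components (`split_instances` and `should_split` are illustrative names, not the real `iCalSplitter` API):

```python
import uuid

def split_instances(instances, rid, olderUID=None):
    """Partition (recurrence_id, data) tuples at rid.

    Instances strictly before rid move to a new resource keyed by
    olderUID; the rest keep the original UID. Mirrors the split() flow
    above in spirit only - the real code splits iCalendar components.
    """
    if olderUID is None:
        olderUID = str(uuid.uuid4())
    older = [(r, d) for (r, d) in instances if r < rid]
    newer = [(r, d) for (r, d) in instances if r >= rid]
    return olderUID, older, newer

def should_split(older, newer):
    # Mirrors the mainType() guard above: if either side of the split
    # is empty, there is nothing to actually split.
    return bool(older) and bool(newer)
```

For example, splitting three instances at `rid=2` leaves instance 1 in the "older" resource and instances 2 and 3 in the original one; splitting entirely in the past or future fails the `should_split` check.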
Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/common.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/common.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/common.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -988,9 +988,9 @@
         self.assertEqual(newName, self.sharedName)
         self.assertNotIdentical(otherCal, None)
 
-        invitedCals = yield cal.asShared()
+        invitedCals = yield cal.sharingInvites()
         self.assertEqual(len(invitedCals), 1)
-        self.assertEqual(invitedCals[0].shareMode(), _BIND_MODE_READ)
+        self.assertEqual(invitedCals[0].mode(), _BIND_MODE_READ)
 
 
     @inlineCallbacks
@@ -1007,7 +1007,7 @@
         newName = yield cal.unshareWith(other)
         otherCal = yield other.childWithName(newName)
         self.assertIdentical(otherCal, None)
-        invitedCals = yield cal.asShared()
+        invitedCals = yield cal.sharingInvites()
         self.assertEqual(len(invitedCals), 0)
 
 
@@ -1027,7 +1027,7 @@
         yield cal.unshare()
         otherCal = yield other.childWithName(self.sharedName)
         self.assertEqual(otherCal, None)
-        invitedCals = yield cal.asShared()
+        invitedCals = yield cal.sharingInvites()
         self.assertEqual(len(invitedCals), 0)
 
 
@@ -1047,7 +1047,7 @@
         yield otherCal.unshare()
         otherCal = yield other.childWithName(self.sharedName)
         self.assertEqual(otherCal, None)
-        invitedCals = yield cal.asShared()
+        invitedCals = yield cal.sharingInvites()
         self.assertEqual(len(invitedCals), 0)
 
 
@@ -1062,24 +1062,23 @@
 
 
     @inlineCallbacks
-    def test_asShared(self):
+    def test_sharingInvites(self):
         """
-        L{ICalendar.asShared} returns an iterable of all versions of a shared
+        L{ICalendar.sharingInvites} returns an iterable of all versions of a shared
         calendar.
         """
         cal = yield self.calendarUnderTest()
-        sharedBefore = yield cal.asShared()
-        # It's not shared yet; make sure asShared doesn't include owner version.
+        sharedBefore = yield cal.sharingInvites()
+        # It's not shared yet; make sure sharingInvites doesn't include owner version.
         self.assertEqual(len(sharedBefore), 0)
         yield self.test_shareWith()
         # FIXME: don't know why this separate transaction is needed; remove it.
         yield self.commit()
         cal = yield self.calendarUnderTest()
-        sharedAfter = yield cal.asShared()
+        sharedAfter = yield cal.sharingInvites()
         self.assertEqual(len(sharedAfter), 1)
-        self.assertEqual(sharedAfter[0].shareMode(), _BIND_MODE_WRITE)
-        self.assertEqual(sharedAfter[0].viewerCalendarHome().uid(),
-                         OTHER_HOME_UID)
+        self.assertEqual(sharedAfter[0].mode(), _BIND_MODE_WRITE)
+        self.assertEqual(sharedAfter[0].shareeUID(), OTHER_HOME_UID)
 
 
     @inlineCallbacks

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_attachments.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_attachments.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_attachments.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -1404,7 +1404,9 @@
         self._sqlCalendarStore = yield buildCalendarStore(self, self.notifierFactory, directoryFromConfig(config))
         yield self.populate()
 
+        self.paths = {}
 
+
     @inlineCallbacks
     def populate(self):
         yield populateCalendarsFrom(self.requirements, self.storeUnderTest())
@@ -1446,6 +1448,8 @@
         t.write(" attachment")
         yield t.loseConnection()
 
+        self.paths[name] = attachment._path
+
         cal = (yield event.componentForUser())
         cal.mainComponent().addProperty(Property(
             "ATTACH",
@@ -1834,3 +1838,9 @@
         yield self._verifyConversion("home2", "calendar2", "2-2.3.ics", ("attach_1_3.txt",))
         yield self._verifyConversion("home2", "calendar3", "2-3.2.ics", ("attach_1_4.txt",))
         yield self._verifyConversion("home2", "calendar3", "2-3.3.ics", ("attach_1_4.txt",))
+
+        # Paths do not exist
+        for path in self.paths.values():
+            for _ignore in range(4):
+                self.assertFalse(path.exists(), msg="Still exists: %s" % (path,))
+                path = path.parent()

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_file.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_file.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_file.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -467,7 +467,7 @@
     test_shareAgainChangesMode = test_shareWith
     test_unshareWith = test_shareWith
     test_unshareWithInDifferentTransaction = test_shareWith
-    test_asShared = test_shareWith
+    test_sharingInvites = test_shareWith
     test_unshareSharerSide = test_shareWith
     test_unshareShareeSide = test_shareWith
     test_sharedNotifierID = test_shareWith

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_sql.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_sql.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -13,6 +13,14 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 ##
+from txdav.caldav.datastore.scheduling.processing import ImplicitProcessor
+from txdav.caldav.datastore.scheduling.cuaddress import RemoteCalendarUser, \
+    LocalCalendarUser
+from txdav.caldav.datastore.scheduling.caldav.scheduler import CalDAVScheduler
+from txdav.caldav.datastore.scheduling.scheduler import ScheduleResponseQueue
+from twext.web2 import responsecode
+from txdav.caldav.datastore.scheduling.itip import iTIPRequestStatus
+from twistedcaldav.instance import InvalidOverriddenInstanceError
 
 """
 Tests for txdav.caldav.datastore.postgres, mostly based on
@@ -22,13 +30,15 @@
 from pycalendar.datetime import PyCalendarDateTime
 from pycalendar.timezone import PyCalendarTimezone
 
-from twext.enterprise.dal.syntax import Select, Parameter, Insert, Delete
+from twext.enterprise.dal.syntax import Select, Parameter, Insert, Delete, \
+    Update
 from twext.python.vcomponent import VComponent
 from twext.web2.http_headers import MimeType
 from twext.web2.stream import MemoryStream
 
 from twisted.internet import reactor
-from twisted.internet.defer import inlineCallbacks, returnValue, DeferredList
+from twisted.internet.defer import inlineCallbacks, returnValue, DeferredList, \
+    succeed
 from twisted.internet.task import deferLater
 from twisted.trial import unittest
 
@@ -36,7 +46,7 @@
 from twistedcaldav.caldavxml import CalendarDescription
 from twistedcaldav.config import config
 from twistedcaldav.dateops import datetimeMktime
-from twistedcaldav.ical import Component
+from twistedcaldav.ical import Component, normalize_iCalStr, diff_iCalStrs
 from twistedcaldav.query import calendarqueryfilter
 
 from txdav.base.propertystore.base import PropertyName
@@ -434,7 +444,7 @@
         )
         yield migrateHome(fromHome, toHome, lambda x: x.component())
         toCalendars = yield toHome.calendars()
-        self.assertEquals(set([c.name() for c in toCalendars]),
+        self.assertEquals(set([c.name() for c in toCalendars if c.name() != "inbox"]),
                           set([k for k in self.requirements['home1'].keys()
                                if self.requirements['home1'][k] is not None]))
         fromCalendars = yield fromHome.calendars()
@@ -464,7 +474,7 @@
             )
 
         supported_components = set()
-        self.assertEqual(len(toCalendars), 3)
+        self.assertEqual(len(toCalendars), 4)
         for calendar in toCalendars:
             if calendar.name() == "inbox":
                 continue
@@ -492,7 +502,7 @@
             )
 
         supported_components = set()
-        self.assertEqual(len(toCalendars), 2)
+        self.assertEqual(len(toCalendars), 3)
         for calendar in toCalendars:
             if calendar.name() == "inbox":
                 continue
@@ -1942,6 +1952,189 @@
 
 
 
+class SchedulingTests(CommonCommonTests, unittest.TestCase):
+    """
+    Scheduling tests
+    """
+
+    @inlineCallbacks
+    def setUp(self):
+        yield super(SchedulingTests, self).setUp()
+        self._sqlCalendarStore = yield buildCalendarStore(self, self.notifierFactory)
+
+        # Make sure homes are provisioned
+        txn = self.transactionUnderTest()
+        for ctr in range(1, 5):
+            home_uid = yield txn.homeWithUID(ECALENDARTYPE, "user%02d" % (ctr,), create=True)
+            self.assertNotEqual(home_uid, None)
+        yield self.commit()
+
+
+    @inlineCallbacks
+    def populate(self):
+        yield populateCalendarsFrom(self.requirements, self.storeUnderTest())
+        self.notifierFactory.reset()
+
+
+    def storeUnderTest(self):
+        """
+        Create and return a L{CalendarStore} for testing.
+        """
+        return self._sqlCalendarStore
+
+
+    @inlineCallbacks
+    def test_doImplicitAttendeeEventFix(self):
+        """
+        Test processing.doImplicitAttendeeEventFix.
+        """
+
+        data = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20130806T000000Z
+DURATION:PT1H
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:user01 at example.com
+ATTENDEE:mailto:user02 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER:mailto:user01 at example.com
+RRULE:FREQ=DAILY
+SUMMARY:1
+END:VEVENT
+END:VCALENDAR
+"""
+
+        data_broken = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20130806T000000Z
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE:urn:uuid:user02
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RRULE:FREQ=DAILY
+SUMMARY:1
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:20130807T120000Z
+DTSTART:20130807T000000Z
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE:urn:uuid:user02
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+SUMMARY:1
+END:VEVENT
+END:VCALENDAR
+"""
+
+        data_update1 = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20130806T000000Z
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE:urn:uuid:user02
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RRULE:FREQ=DAILY
+SEQUENCE:1
+SUMMARY:1-2
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:20130807T000000Z
+DTSTART:20130807T000000Z
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE:urn:uuid:user02
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+SEQUENCE:1
+SUMMARY:1-3
+END:VEVENT
+END:VCALENDAR
+"""
+
+        data_fixed2 = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20130806T000000Z
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01@example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02@example.com;RSVP=TRUE:urn:uuid:user02
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
+RRULE:FREQ=DAILY
+SEQUENCE:1
+SUMMARY:1-2
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:20130807T000000Z
+DTSTART:20130807T000000Z
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01@example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02@example.com;RSVP=TRUE:urn:uuid:user02
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
+SEQUENCE:1
+SUMMARY:1-3
+END:VEVENT
+BEGIN:X-CALENDARSERVER-PERUSER
+UID:12345-67890
+X-CALENDARSERVER-PERUSER-UID:user02
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+TRANSP:TRANSPARENT
+END:X-CALENDARSERVER-PERINSTANCE
+END:X-CALENDARSERVER-PERUSER
+END:VCALENDAR
+"""
+
+        # Create one event
+        calendar = yield self.calendarUnderTest(name="calendar", home="user01")
+        yield calendar.createCalendarObjectWithName("data1.ics", Component.fromString(data))
+        yield self.commit()
+
+        # Write corrupt user02 data directly to trigger fix later
+        cal = yield self.calendarUnderTest(name="calendar", home="user02")
+        cobjs = yield cal.calendarObjects()
+        self.assertEqual(len(cobjs), 1)
+        cobj = cobjs[0]
+        name02 = cobj.name()
+        co = schema.CALENDAR_OBJECT
+        yield Update(
+            {co.ICALENDAR_TEXT: str(Component.fromString(data_broken))},
+            Where=co.RESOURCE_NAME == name02,
+        ).on(self.transactionUnderTest())
+        yield self.commit()
+
+        # Write user01 data - will trigger fix
+        cobj = yield self.calendarObjectUnderTest(name="data1.ics", calendar_name="calendar", home="user01")
+        yield cobj.setComponent(Component.fromString(data_update1))
+        yield self.commit()
+
+        # Verify user02 data is now fixed
+        cobj = yield self.calendarObjectUnderTest(name=name02, calendar_name="calendar", home="user02")
+        ical = yield cobj.component()
+
+        self.assertEqual(normalize_iCalStr(ical), normalize_iCalStr(data_fixed2), "Failed attendee fix:\n%s" % (diff_iCalStrs(ical, data_fixed2),))
+        yield self.commit()
+
+        self.assertEqual(len(self.flushLoggedErrors(InvalidOverriddenInstanceError)), 1)
+
+
+
 class CalendarObjectSplitting(CommonCommonTests, unittest.TestCase):
     """
     CalendarObject splitting tests
@@ -2117,6 +2310,7 @@
 ORGANIZER;SCHEDULE-AGENT=NONE;SCHEDULE-STATUS=5.3:mailto:user1@example.org
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY
+SEQUENCE:1
 END:VEVENT
 END:VCALENDAR
 """
@@ -2157,6 +2351,7 @@
 ORGANIZER;SCHEDULE-AGENT=NONE;SCHEDULE-STATUS=5.3:mailto:user1@example.org
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY;UNTIL=%(now_back14_1)s
+SEQUENCE:1
 END:VEVENT
 BEGIN:VEVENT
 UID:%(relID)s
@@ -2168,6 +2363,7 @@
 DTSTAMP:20051222T210507Z
 ORGANIZER;SCHEDULE-AGENT=NONE;SCHEDULE-STATUS=5.3:mailto:user1@example.org
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
 END:VEVENT
 BEGIN:VEVENT
 UID:%(relID)s
@@ -2179,6 +2375,7 @@
 DTSTAMP:20051222T210507Z
 ORGANIZER;SCHEDULE-AGENT=NONE;SCHEDULE-STATUS=5.3:mailto:user1@example.org
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
 END:VEVENT
 END:VCALENDAR
 """
@@ -2214,8 +2411,8 @@
         title = "temp"
         relsubs = dict(self.subs)
         relsubs["relID"] = newUID
-        self.assertEqual(str(ical_future).replace("\r\n ", ""), data_future.replace("\n", "\r\n") % relsubs, "Failed future: %s" % (title,))
-        self.assertEqual(str(ical_past).replace("\r\n ", ""), data_past.replace("\n", "\r\n") % relsubs, "Failed past: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_future), normalize_iCalStr(data_future) % relsubs, "Failed future: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_past), normalize_iCalStr(data_past) % relsubs, "Failed past: %s" % (title,))
 
 
     @inlineCallbacks
@@ -2297,6 +2494,7 @@
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY
+SEQUENCE:1
 SUMMARY:1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
@@ -2312,6 +2510,7 @@
 DTSTAMP:20051222T210507Z
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
 END:VEVENT
 END:VCALENDAR
 """
@@ -2330,6 +2529,7 @@
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY;UNTIL=%(now_back14_1)s
+SEQUENCE:1
 SUMMARY:1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
@@ -2346,6 +2546,7 @@
 DTSTAMP:20051222T210507Z
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
 END:VEVENT
 BEGIN:VEVENT
 UID:%(relID)s
@@ -2357,6 +2558,7 @@
 DTSTAMP:20051222T210507Z
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
 END:VEVENT
 END:VCALENDAR
 """
@@ -2376,6 +2578,7 @@
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY
+SEQUENCE:1
 SUMMARY:1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
@@ -2405,6 +2608,7 @@
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY;UNTIL=%(now_back14_1)s
+SEQUENCE:1
 SUMMARY:1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
@@ -2421,6 +2625,7 @@
 DTSTAMP:20051222T210507Z
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
 END:VEVENT
 BEGIN:VEVENT
 UID:%(relID)s
@@ -2432,6 +2637,7 @@
 DTSTAMP:20051222T210507Z
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
 END:VEVENT
 BEGIN:X-CALENDARSERVER-PERUSER
 UID:%(relID)s
@@ -2459,6 +2665,7 @@
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY
+SEQUENCE:1
 SUMMARY:1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
@@ -2482,6 +2689,7 @@
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY
+SEQUENCE:1
 SUMMARY:1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
@@ -2513,6 +2721,7 @@
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY;UNTIL=%(now_back14_1)s
+SEQUENCE:1
 SUMMARY:1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
@@ -2544,6 +2753,7 @@
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY
+SEQUENCE:1
 SUMMARY:1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
@@ -2566,6 +2776,7 @@
 DTSTAMP:20051222T210507Z
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
 END:VEVENT
 BEGIN:X-CALENDARSERVER-PERUSER
 UID:%(relID)s
@@ -2591,6 +2802,7 @@
 DTSTAMP:20051222T210507Z
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
 END:VEVENT
 BEGIN:X-CALENDARSERVER-PERUSER
 UID:12345-67890
@@ -2617,6 +2829,7 @@
 DTSTAMP:20051222T210507Z
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
 END:VEVENT
 END:VCALENDAR
 """
@@ -2663,8 +2876,8 @@
         title = "user01"
         relsubs = dict(self.subs)
         relsubs["relID"] = newUID
-        self.assertEqual(str(ical_future).replace("\r\n ", ""), data_future.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed future: %s" % (title,))
-        self.assertEqual(str(ical_past).replace("\r\n ", ""), data_past.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed past: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_future), normalize_iCalStr(data_future) % relsubs, "Failed future: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_past), normalize_iCalStr(data_past) % relsubs, "Failed past: %s" % (title,))
 
         # Get user02 data
         cal = yield self.calendarUnderTest(name="calendar", home="user02")
@@ -2684,9 +2897,9 @@
 
         # Verify user02 data
         title = "user02"
-        self.assertEqual(str(ical_future).replace("\r\n ", ""), data_future2.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed future: %s" % (title,))
-        self.assertEqual(str(ical_past).replace("\r\n ", ""), data_past2.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed past: %s" % (title,))
-        self.assertEqual(str(ical_inbox).replace("\r\n ", ""), data_inbox2.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed inbox: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_future), normalize_iCalStr(data_future2) % relsubs, "Failed future: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_past), normalize_iCalStr(data_past2) % relsubs, "Failed past: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_inbox), normalize_iCalStr(data_inbox2) % relsubs, "Failed inbox: %s" % (title,))
 
         # Get user03 data
         cal = yield self.calendarUnderTest(name="calendar", home="user03")
@@ -2707,9 +2920,9 @@
 
         # Verify user03 data
         title = "user03"
-        self.assertEqual(str(ical_future).replace("\r\n ", ""), data_future3.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed future: %s" % (title,))
-        self.assertEqual(str(ical_past).replace("\r\n ", ""), data_past3.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed past: %s" % (title,))
-        self.assertEqual(str(ical_inbox).replace("\r\n ", ""), data_inbox3.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed inbox: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_future), normalize_iCalStr(data_future3) % relsubs, "Failed future: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_past), normalize_iCalStr(data_past3) % relsubs, "Failed past: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_inbox), normalize_iCalStr(data_inbox3) % relsubs, "Failed inbox: %s" % (title,))
 
         # Get user04 data
         cal = yield self.calendarUnderTest(name="calendar", home="user04")
@@ -2724,7 +2937,7 @@
 
         # Verify user04 data
         title = "user04"
-        self.assertEqual(str(ical_past).replace("\r\n ", ""), data_past4.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed past: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_past), normalize_iCalStr(data_past4) % relsubs, "Failed past: %s" % (title,))
 
         # Get user05 data
         cal = yield self.calendarUnderTest(name="calendar", home="user05")
@@ -2740,8 +2953,8 @@
 
         # Verify user05 data
         title = "user05"
-        self.assertEqual(str(ical_future).replace("\r\n ", ""), data_future5.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed future: %s" % (title,))
-        self.assertEqual(str(ical_inbox).replace("\r\n ", ""), data_inbox5.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed inbox: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_future), normalize_iCalStr(data_future5) % relsubs, "Failed future: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_inbox), normalize_iCalStr(data_inbox5) % relsubs, "Failed inbox: %s" % (title,))
 
 
     @inlineCallbacks
@@ -3045,16 +3258,16 @@
         cobj = cobjs[0]
         cname2 = cobj.name()
         ical = yield cobj.component()
-        self.assertEqual(str(ical).replace("\r\n ", ""), data_2.replace("\n", "\r\n").replace("\r\n ", "") % self.subs, "Failed 2")
+        self.assertEqual(normalize_iCalStr(ical), normalize_iCalStr(data_2) % self.subs, "Failed 2")
         yield cobj.setComponent(Component.fromString(data_2_update % self.subs))
         yield self.commit()
 
         cobj = yield self.calendarObjectUnderTest(name="data1.ics", calendar_name="calendar", home="user01")
         ical = yield cobj.component()
-        self.assertEqual(str(ical).replace("\r\n ", ""), data_1.replace("\n", "\r\n").replace("\r\n ", "") % self.subs, "Failed 2")
+        self.assertEqual(normalize_iCalStr(ical), normalize_iCalStr(data_1) % self.subs, "Failed 2")
         cobj = yield self.calendarObjectUnderTest(name=cname2, calendar_name="calendar", home="user02")
         ical = yield cobj.component()
-        self.assertEqual(str(ical).replace("\r\n ", ""), data_2_changed.replace("\n", "\r\n").replace("\r\n ", "") % self.subs, "Failed 2")
+        self.assertEqual(normalize_iCalStr(ical), normalize_iCalStr(data_2_changed) % self.subs, "Failed 2")
         yield self.commit()
 
 
@@ -3244,7 +3457,7 @@
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY
-SEQUENCE:2
+SEQUENCE:3
 SUMMARY:1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
@@ -3261,7 +3474,7 @@
 DTSTAMP:%(dtstamp)s
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
-SEQUENCE:2
+SEQUENCE:3
 END:VEVENT
 END:VCALENDAR
 """
@@ -3280,7 +3493,7 @@
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY;UNTIL=%(now_back14_1)s
-SEQUENCE:2
+SEQUENCE:3
 SUMMARY:1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
@@ -3297,7 +3510,7 @@
 DTSTAMP:%(dtstamp)s
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
-SEQUENCE:2
+SEQUENCE:3
 END:VEVENT
 BEGIN:VEVENT
 UID:%(relID)s
@@ -3310,7 +3523,7 @@
 DTSTAMP:%(dtstamp)s
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
-SEQUENCE:2
+SEQUENCE:3
 END:VEVENT
 END:VCALENDAR
 """
@@ -3329,7 +3542,7 @@
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY
-SEQUENCE:2
+SEQUENCE:3
 SUMMARY:1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
@@ -3346,7 +3559,7 @@
 DTSTAMP:%(dtstamp)s
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
-SEQUENCE:2
+SEQUENCE:3
 END:VEVENT
 BEGIN:X-CALENDARSERVER-PERUSER
 UID:12345-67890
@@ -3372,7 +3585,7 @@
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
 RRULE:FREQ=DAILY;UNTIL=%(now_back14_1)s
-SEQUENCE:2
+SEQUENCE:3
 SUMMARY:1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
  1234567890123456789012345678901234567890
@@ -3389,7 +3602,7 @@
 DTSTAMP:%(dtstamp)s
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
-SEQUENCE:2
+SEQUENCE:3
 END:VEVENT
 BEGIN:VEVENT
 UID:%(relID)s
@@ -3402,7 +3615,7 @@
 DTSTAMP:%(dtstamp)s
 ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
 RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
-SEQUENCE:2
+SEQUENCE:3
 END:VEVENT
 BEGIN:X-CALENDARSERVER-PERUSER
 UID:%(relID)s
@@ -3436,7 +3649,7 @@
         relsubs["mid"] = mid
         relsubs["att_uri"] = location
         relsubs["dtstamp"] = str(ical.masterComponent().propertyValue("DTSTAMP"))
-        self.assertEqual(str(ical).replace("\r\n ", ""), data_attach_1.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed attachment user01")
+        self.assertEqual(normalize_iCalStr(ical), normalize_iCalStr(data_attach_1) % relsubs, "Failed attachment user01")
         yield self.commit()
 
         # Add overrides to cause a split
@@ -3468,8 +3681,8 @@
 
         # Verify user01 data
         title = "user01"
-        self.assertEqual(str(ical_future).replace("\r\n ", ""), data_future.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed future: %s" % (title,))
-        self.assertEqual(str(ical_past).replace("\r\n ", ""), data_past.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed past: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_future), normalize_iCalStr(data_future) % relsubs, "Failed future: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_past), normalize_iCalStr(data_past) % relsubs, "Failed past: %s" % (title,))
 
         # Get user02 data
         cal = yield self.calendarUnderTest(name="calendar", home="user02")
@@ -3484,5 +3697,1433 @@
 
         # Verify user02 data
         title = "user02"
-        self.assertEqual(str(ical_future).replace("\r\n ", ""), data_future2.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed future: %s" % (title,))
-        self.assertEqual(str(ical_past).replace("\r\n ", ""), data_past2.replace("\n", "\r\n").replace("\r\n ", "") % relsubs, "Failed past: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_future), normalize_iCalStr(data_future2) % relsubs, "Failed future: %s" % (title,))
+        self.assertEqual(normalize_iCalStr(ical_past), normalize_iCalStr(data_past2) % relsubs, "Failed past: %s" % (title,))
+
+
+    @inlineCallbacks
+    def test_calendarObjectSplit_processing_simple(self):
+        """
+        Test that splitting of calendar objects works when outside invites are processed.
+        """
+        self.patch(config.Scheduling.Options.Splitting, "Enabled", True)
+        self.patch(config.Scheduling.Options.Splitting, "Size", 1024)
+        self.patch(config.Scheduling.Options.Splitting, "PastDays", 14)
+        self.patch(config.Scheduling.Options.Splitting, "Delay", 2)
+
+        # Create one event from outside organizer that will not split
+        calendar = yield self.calendarUnderTest(name="calendar", home="user01")
+
+        data = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:%(now_back30)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:cuser01@example.org
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:user01@example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01@example.org
+RRULE:FREQ=DAILY
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:Master
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01@example.org
+ATTENDEE;PARTSTAT=NEEDS-ACTION:mailto:user01@example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01@example.org
+TRANSP:TRANSPARENT
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_back25
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_back24)s
+DTSTART:%(now_back24)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=DECLINED:mailto:cuser01@example.org
+ATTENDEE;PARTSTAT=DECLINED:mailto:user01@example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01@example.org
+TRANSP:TRANSPARENT
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_back24
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_fwd10)s
+DTSTART:%(now_fwd10)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01@example.org
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:user01@example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01@example.org
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_fwd10
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+END:VCALENDAR
+"""
+
+        itip1 = """BEGIN:VCALENDAR
+VERSION:2.0
+METHOD:REQUEST
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+X-CALENDARSERVER-SPLIT-OLDER-UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+X-CALENDARSERVER-SPLIT-RID;VALUE=DATE-TIME:%(now_back14)s
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:%(now_back14)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:cuser01@example.org
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:user01@example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER:mailto:cuser01@example.org
+RRULE:FREQ=DAILY
+SEQUENCE:1
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_fwd10)s
+DTSTART:%(now_fwd10)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01@example.org
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:user01@example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER:mailto:cuser01@example.org
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+"""
+
+        data_future = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:%(now_back14)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:cuser01@example.org
+ATTENDEE;CN=User 01;EMAIL=user01@example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01@example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+RRULE:FREQ=DAILY
+SEQUENCE:1
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_fwd10)s
+DTSTART:%(now_fwd10)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01@example.org
+ATTENDEE;CN=User 01;EMAIL=user01@example.com;PARTSTAT=TENTATIVE:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01@example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+SEQUENCE:1
+END:VEVENT
+BEGIN:X-CALENDARSERVER-PERUSER
+UID:12345-67890
+X-CALENDARSERVER-PERUSER-UID:user01
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:Master
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:X-CALENDARSERVER-PERINSTANCE
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+RECURRENCE-ID:%(now_fwd10)s
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_fwd10
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:X-CALENDARSERVER-PERINSTANCE
+END:X-CALENDARSERVER-PERUSER
+END:VCALENDAR
+"""
+
+        data_past = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+DTSTART:%(now_back30)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:cuser01@example.org
+ATTENDEE;CN=User 01;EMAIL=user01@example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01@example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+RRULE:FREQ=DAILY;UNTIL=%(now_back14_1)s
+SEQUENCE:1
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+BEGIN:VEVENT
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01@example.org
+ATTENDEE;CN=User 01;EMAIL=user01@example.com;PARTSTAT=NEEDS-ACTION:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01@example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+SEQUENCE:1
+END:VEVENT
+BEGIN:VEVENT
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+RECURRENCE-ID:%(now_back24)s
+DTSTART:%(now_back24)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=DECLINED:mailto:cuser01@example.org
+ATTENDEE;CN=User 01;EMAIL=user01@example.com;PARTSTAT=DECLINED:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01@example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+SEQUENCE:1
+END:VEVENT
+BEGIN:X-CALENDARSERVER-PERUSER
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+X-CALENDARSERVER-PERUSER-UID:user01
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:Master
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:X-CALENDARSERVER-PERINSTANCE
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+RECURRENCE-ID:%(now_back25)s
+TRANSP:TRANSPARENT
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_back25
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:X-CALENDARSERVER-PERINSTANCE
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+RECURRENCE-ID:%(now_back24)s
+TRANSP:TRANSPARENT
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_back24
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:X-CALENDARSERVER-PERINSTANCE
+END:X-CALENDARSERVER-PERUSER
+END:VCALENDAR
+"""
+
+        itip2 = """BEGIN:VCALENDAR
+VERSION:2.0
+METHOD:REQUEST
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+X-CALENDARSERVER-SPLIT-NEWER-UID:12345-67890
+X-CALENDARSERVER-SPLIT-RID;VALUE=DATE-TIME:%(now_back14)s
+BEGIN:VEVENT
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+DTSTART:%(now_back30)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:cuser01@example.org
+ATTENDEE;CN=User 01;EMAIL=user01@example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01@example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+RRULE:FREQ=DAILY;UNTIL=%(now_back14_1)s
+SEQUENCE:1
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+BEGIN:VEVENT
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01@example.org
+ATTENDEE;CN=User 01;EMAIL=user01@example.com;PARTSTAT=NEEDS-ACTION:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01@example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+SEQUENCE:1
+END:VEVENT
+BEGIN:VEVENT
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+RECURRENCE-ID:%(now_back24)s
+DTSTART:%(now_back24)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=DECLINED:mailto:cuser01@example.org
+ATTENDEE;CN=User 01;EMAIL=user01@example.com;PARTSTAT=DECLINED:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01@example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+"""
+
+        component = Component.fromString(data % self.subs)
+        cobj = yield calendar.createCalendarObjectWithName("data.ics", component)
+        self.assertFalse(hasattr(cobj, "_workItems"))
+        yield self.commit()
+
+        # Now inject an iTIP with split
+        processor = ImplicitProcessor()
+        processor.getRecipientsCopy = lambda: succeed(None)
+
+        cobj = yield self.calendarObjectUnderTest(name="data.ics", calendar_name="calendar", home="user01")
+        processor.recipient_calendar_resource = cobj
+        processor.recipient_calendar = (yield cobj.componentForUser("user01"))
+        processor.message = Component.fromString(itip1 % self.subs)
+        processor.originator = RemoteCalendarUser("mailto:cuser01@example.org")
+        processor.recipient = LocalCalendarUser("urn:uuid:user01", None)
+        processor.method = "REQUEST"
+        processor.uid = "12345-67890"
+
+        result = yield processor.doImplicitAttendee()
+        self.assertEqual(result, (True, False, False, None,))
+        yield self.commit()
+
+        new_name = []
+
+        @inlineCallbacks
+        def _verify_state():
+            # Get user01 data
+            cal = yield self.calendarUnderTest(name="calendar", home="user01")
+            cobjs = yield cal.calendarObjects()
+            self.assertEqual(len(cobjs), 2)
+            for cobj in cobjs:
+                ical = yield cobj.component()
+                if ical.resourceUID() == "12345-67890":
+                    ical_future = ical
+                else:
+                    ical_past = ical
+                    new_name.append(cobj.name())
+
+            # Verify user01 data
+            title = "user01"
+            self.assertEqual(normalize_iCalStr(ical_future), normalize_iCalStr(data_future) % self.subs, "Failed future: %s\n%s" % (title, diff_iCalStrs(ical_future, data_future % self.subs),))
+            self.assertEqual(normalize_iCalStr(ical_past), normalize_iCalStr(data_past) % self.subs, "Failed past: %s\n%s" % (title, diff_iCalStrs(ical_past, data_past % self.subs),))
+
+            # No inbox
+            cal = yield self.calendarUnderTest(name="inbox", home="user01")
+            cobjs = yield cal.calendarObjects()
+            self.assertEqual(len(cobjs), 0)
+            yield self.commit()
+
+        yield _verify_state()
+
+        # Now inject an iTIP with split
+        processor = ImplicitProcessor()
+        processor.getRecipientsCopy = lambda: succeed(None)
+
+        cobj = yield self.calendarObjectUnderTest(name=new_name[0], calendar_name="calendar", home="user01")
+        self.assertTrue(cobj is not None)
+        processor.recipient_calendar_resource = cobj
+        processor.recipient_calendar = (yield cobj.componentForUser("user01"))
+        processor.message = Component.fromString(itip2 % self.subs)
+        processor.originator = RemoteCalendarUser("mailto:cuser01@example.org")
+        processor.recipient = LocalCalendarUser("urn:uuid:user01", None)
+        processor.method = "REQUEST"
+        processor.uid = "C4526F4C-4324-4893-B769-BD766E4A4E7C"
+
+        result = yield processor.doImplicitAttendee()
+        self.assertEqual(result, (True, False, False, None,))
+        yield self.commit()
+
+        yield _verify_state()
+
+
+    @inlineCallbacks
+    def test_calendarObjectSplit_processing_one_past_instance(self):
+        """
+        Test that splitting of calendar objects works when outside invites are processed.
+        """
+        self.patch(config.Scheduling.Options.Splitting, "Enabled", True)
+        self.patch(config.Scheduling.Options.Splitting, "Size", 1024)
+        self.patch(config.Scheduling.Options.Splitting, "PastDays", 14)
+        self.patch(config.Scheduling.Options.Splitting, "Delay", 2)
+
+        # Create one event from outside organizer that will not split
+        calendar = yield self.calendarUnderTest(name="calendar", home="user01")
+
+        data = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;PARTSTAT=NEEDS-ACTION:mailto:user01 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+TRANSP:TRANSPARENT
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_back25
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+END:VCALENDAR
+"""
+
+        itip1 = """BEGIN:VCALENDAR
+VERSION:2.0
+METHOD:CANCEL
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+X-CALENDARSERVER-SPLIT-OLDER-UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+X-CALENDARSERVER-SPLIT-RID;VALUE=DATE-TIME:%(now_back14)s
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;PARTSTAT=NEEDS-ACTION:mailto:user01 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+END:VEVENT
+END:VCALENDAR
+"""
+
+        data_past = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=NEEDS-ACTION:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+SEQUENCE:1
+END:VEVENT
+BEGIN:X-CALENDARSERVER-PERUSER
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+X-CALENDARSERVER-PERUSER-UID:user01
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+RECURRENCE-ID:%(now_back25)s
+TRANSP:TRANSPARENT
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_back25
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:X-CALENDARSERVER-PERINSTANCE
+END:X-CALENDARSERVER-PERUSER
+END:VCALENDAR
+"""
+
+        component = Component.fromString(data % self.subs)
+        cobj = yield calendar.createCalendarObjectWithName("data.ics", component)
+        self.assertFalse(hasattr(cobj, "_workItems"))
+        yield self.commit()
+
+        # Now inject an iTIP with split
+        processor = ImplicitProcessor()
+        processor.getRecipientsCopy = lambda: succeed(None)
+
+        cobj = yield self.calendarObjectUnderTest(name="data.ics", calendar_name="calendar", home="user01")
+        processor.recipient_calendar_resource = cobj
+        processor.recipient_calendar = (yield cobj.componentForUser("user01"))
+        processor.message = Component.fromString(itip1 % self.subs)
+        processor.originator = RemoteCalendarUser("mailto:cuser01 at example.org")
+        processor.recipient = LocalCalendarUser("urn:uuid:user01", None)
+        processor.method = "CANCEL"
+        processor.uid = "12345-67890"
+
+        result = yield processor.doImplicitAttendee()
+        self.assertEqual(result, (True, False, False, None,))
+        yield self.commit()
+
+        # Get user01 data
+        cal = yield self.calendarUnderTest(name="calendar", home="user01")
+        cobjs = yield cal.calendarObjects()
+        self.assertEqual(len(cobjs), 1)
+        ical = yield cobjs[0].component()
+        ical_past = ical
+
+        # Verify user01 data
+        title = "user01"
+        self.assertEqual(normalize_iCalStr(ical_past), normalize_iCalStr(data_past) % self.subs, "Failed past: %s\n%s" % (title, diff_iCalStrs(ical_past, data_past % self.subs),))
+
+
+    @inlineCallbacks
+    def test_calendarObjectSplit_processing_one_future_instance(self):
+        """
+        Test that an attendee copy containing only a future instance is handled correctly when an outside organizer's split REQUEST is processed.
+        """
+        self.patch(config.Scheduling.Options.Splitting, "Enabled", True)
+        self.patch(config.Scheduling.Options.Splitting, "Size", 1024)
+        self.patch(config.Scheduling.Options.Splitting, "PastDays", 14)
+        self.patch(config.Scheduling.Options.Splitting, "Delay", 2)
+
+        # Create one event from an outside organizer that this server will not split itself
+        calendar = yield self.calendarUnderTest(name="calendar", home="user01")
+
+        data = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_fwd10)s
+DTSTART:%(now_fwd10)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:user01 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_fwd10
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+END:VCALENDAR
+"""
+
+        itip1 = """BEGIN:VCALENDAR
+VERSION:2.0
+METHOD:REQUEST
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+X-CALENDARSERVER-SPLIT-OLDER-UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+X-CALENDARSERVER-SPLIT-RID;VALUE=DATE-TIME:%(now_back14)s
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_fwd10)s
+DTSTART:%(now_fwd10)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:user01 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER:mailto:cuser01 at example.org
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+"""
+
+        data_future = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_fwd10)s
+DTSTART:%(now_fwd10)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=TENTATIVE:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+SEQUENCE:1
+END:VEVENT
+BEGIN:X-CALENDARSERVER-PERUSER
+UID:12345-67890
+X-CALENDARSERVER-PERUSER-UID:user01
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+RECURRENCE-ID:%(now_fwd10)s
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_fwd10
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:X-CALENDARSERVER-PERINSTANCE
+END:X-CALENDARSERVER-PERUSER
+END:VCALENDAR
+"""
+
+        component = Component.fromString(data % self.subs)
+        cobj = yield calendar.createCalendarObjectWithName("data.ics", component)
+        self.assertFalse(hasattr(cobj, "_workItems"))
+        yield self.commit()
+
+        # Now inject an iTIP with split
+        processor = ImplicitProcessor()
+        processor.getRecipientsCopy = lambda: succeed(None)
+
+        cobj = yield self.calendarObjectUnderTest(name="data.ics", calendar_name="calendar", home="user01")
+        processor.recipient_calendar_resource = cobj
+        processor.recipient_calendar = (yield cobj.componentForUser("user01"))
+        processor.message = Component.fromString(itip1 % self.subs)
+        processor.originator = RemoteCalendarUser("mailto:cuser01 at example.org")
+        processor.recipient = LocalCalendarUser("urn:uuid:user01", None)
+        processor.method = "REQUEST"
+        processor.uid = "12345-67890"
+
+        result = yield processor.doImplicitAttendee()
+        self.assertEqual(result, (True, False, False, None,))
+        yield self.commit()
+
+        # Get user01 data
+        cal = yield self.calendarUnderTest(name="calendar", home="user01")
+        cobjs = yield cal.calendarObjects()
+        self.assertEqual(len(cobjs), 1)
+        ical = yield cobjs[0].component()
+        ical_future = ical
+
+        # Verify user01 data
+        title = "user01"
+        self.assertEqual(normalize_iCalStr(ical_future), normalize_iCalStr(data_future) % self.subs, "Failed future: %s\n%s" % (title, diff_iCalStrs(ical_future, data_future % self.subs),))
+
+
+    @inlineCallbacks
+    def test_calendarObjectSplit_processing_one_past_and_one_future(self):
+        """
+        Test that an attendee copy with one past and one future instance is split in two when an outside organizer's split iTIP message is processed.
+        """
+        self.patch(config.Scheduling.Options.Splitting, "Enabled", True)
+        self.patch(config.Scheduling.Options.Splitting, "Size", 1024)
+        self.patch(config.Scheduling.Options.Splitting, "PastDays", 14)
+        self.patch(config.Scheduling.Options.Splitting, "Delay", 2)
+
+        # Create one event from an outside organizer that this server will not split itself
+        calendar = yield self.calendarUnderTest(name="calendar", home="user01")
+
+        data = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;PARTSTAT=NEEDS-ACTION:mailto:user01 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+TRANSP:TRANSPARENT
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_back25
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_fwd10)s
+DTSTART:%(now_fwd10)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:user01 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_fwd10
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+END:VCALENDAR
+"""
+
+        itip1 = """BEGIN:VCALENDAR
+VERSION:2.0
+METHOD:CANCEL
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+X-CALENDARSERVER-SPLIT-OLDER-UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+X-CALENDARSERVER-SPLIT-RID;VALUE=DATE-TIME:%(now_back14)s
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;PARTSTAT=NEEDS-ACTION:mailto:user01 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+TRANSP:TRANSPARENT
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_back25
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+END:VCALENDAR
+"""
+
+        data_future = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_fwd10)s
+DTSTART:%(now_fwd10)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=TENTATIVE:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+SEQUENCE:1
+END:VEVENT
+BEGIN:X-CALENDARSERVER-PERUSER
+UID:12345-67890
+X-CALENDARSERVER-PERUSER-UID:user01
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+RECURRENCE-ID:%(now_fwd10)s
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_fwd10
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:X-CALENDARSERVER-PERINSTANCE
+END:X-CALENDARSERVER-PERUSER
+END:VCALENDAR
+"""
+
+        data_past = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=NEEDS-ACTION:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+SEQUENCE:1
+END:VEVENT
+BEGIN:X-CALENDARSERVER-PERUSER
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+X-CALENDARSERVER-PERUSER-UID:user01
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+RECURRENCE-ID:%(now_back25)s
+TRANSP:TRANSPARENT
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_back25
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:X-CALENDARSERVER-PERINSTANCE
+END:X-CALENDARSERVER-PERUSER
+END:VCALENDAR
+"""
+
+        component = Component.fromString(data % self.subs)
+        cobj = yield calendar.createCalendarObjectWithName("data.ics", component)
+        self.assertFalse(hasattr(cobj, "_workItems"))
+        yield self.commit()
+
+        # Now inject an iTIP with split
+        processor = ImplicitProcessor()
+        processor.getRecipientsCopy = lambda: succeed(None)
+
+        cobj = yield self.calendarObjectUnderTest(name="data.ics", calendar_name="calendar", home="user01")
+        processor.recipient_calendar_resource = cobj
+        processor.recipient_calendar = (yield cobj.componentForUser("user01"))
+        processor.message = Component.fromString(itip1 % self.subs)
+        processor.originator = RemoteCalendarUser("mailto:cuser01 at example.org")
+        processor.recipient = LocalCalendarUser("urn:uuid:user01", None)
+        processor.method = "CANCEL"
+        processor.uid = "12345-67890"
+
+        result = yield processor.doImplicitAttendee()
+        self.assertEqual(result, (True, False, False, None,))
+        yield self.commit()
+
+        # Get user01 data
+        cal = yield self.calendarUnderTest(name="calendar", home="user01")
+        cobjs = yield cal.calendarObjects()
+        self.assertEqual(len(cobjs), 2)
+        for cobj in cobjs:
+            ical = yield cobj.component()
+            if ical.resourceUID() == "12345-67890":
+                ical_future = ical
+            else:
+                ical_past = ical
+
+        # Verify user01 data
+        title = "user01"
+        self.assertEqual(normalize_iCalStr(ical_future), normalize_iCalStr(data_future) % self.subs, "Failed future: %s\n%s" % (title, diff_iCalStrs(ical_future, data_future % self.subs),))
+        self.assertEqual(normalize_iCalStr(ical_past), normalize_iCalStr(data_past) % self.subs, "Failed past: %s\n%s" % (title, diff_iCalStrs(ical_past, data_past % self.subs),))
+
+
+    @inlineCallbacks
+    def test_calendarObjectSplit_processing_disabled(self):
+        """
+        Test that split iTIP messages from an outside organizer are dispatched to normal attendee processing when splitting is disabled.
+        """
+        self.patch(config.Scheduling.Options.Splitting, "Enabled", False)
+        self.patch(config.Scheduling.Options.Splitting, "Size", 1024)
+        self.patch(config.Scheduling.Options.Splitting, "PastDays", 14)
+        self.patch(config.Scheduling.Options.Splitting, "Delay", 2)
+
+        # Create one event from an outside organizer that this server will not split itself
+        calendar = yield self.calendarUnderTest(name="calendar", home="user01")
+
+        data = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:%(now_back30)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:cuser01 at example.org
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:user01 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+RRULE:FREQ=DAILY
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:Master
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;PARTSTAT=NEEDS-ACTION:mailto:user01 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+TRANSP:TRANSPARENT
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_back25
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_back24)s
+DTSTART:%(now_back24)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=DECLINED:mailto:cuser01 at example.org
+ATTENDEE;PARTSTAT=DECLINED:mailto:user01 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+TRANSP:TRANSPARENT
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_back24
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_fwd10)s
+DTSTART:%(now_fwd10)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:user01 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+BEGIN:VALARM
+ACTION:DISPLAY
+DESCRIPTION:now_fwd10
+TRIGGER;RELATED=START:-PT10M
+END:VALARM
+END:VEVENT
+END:VCALENDAR
+"""
+
+        itip1 = """BEGIN:VCALENDAR
+VERSION:2.0
+METHOD:REQUEST
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+X-CALENDARSERVER-SPLIT-OLDER-UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+X-CALENDARSERVER-SPLIT-RID;VALUE=DATE-TIME:%(now_back14)s
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:%(now_back14)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:cuser01 at example.org
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:user01 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER:mailto:cuser01 at example.org
+RRULE:FREQ=DAILY
+SEQUENCE:1
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_fwd10)s
+DTSTART:%(now_fwd10)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:user01 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER:mailto:cuser01 at example.org
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+"""
+
+        itip2 = """BEGIN:VCALENDAR
+VERSION:2.0
+METHOD:REQUEST
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+X-CALENDARSERVER-SPLIT-NEWER-UID:12345-67890
+X-CALENDARSERVER-SPLIT-RID;VALUE=DATE-TIME:%(now_back14)s
+BEGIN:VEVENT
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+DTSTART:%(now_back30)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:cuser01 at example.org
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+RRULE:FREQ=DAILY;UNTIL=%(now_back14_1)s
+SEQUENCE:1
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+BEGIN:VEVENT
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=TENTATIVE:mailto:cuser01 at example.org
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=NEEDS-ACTION:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+SEQUENCE:1
+END:VEVENT
+BEGIN:VEVENT
+UID:C4526F4C-4324-4893-B769-BD766E4A4E7C
+RECURRENCE-ID:%(now_back24)s
+DTSTART:%(now_back24)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=DECLINED:mailto:cuser01 at example.org
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=DECLINED:urn:uuid:user01
+DTSTAMP:20051222T210507Z
+ORGANIZER;SCHEDULE-AGENT=NONE:mailto:cuser01 at example.org
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:C4526F4C-4324-4893-B769-BD766E4A4E7C
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+"""
+
+        component = Component.fromString(data % self.subs)
+        cobj = yield calendar.createCalendarObjectWithName("data.ics", component)
+        self.assertFalse(hasattr(cobj, "_workItems"))
+        yield self.commit()
+
+        # Now inject an iTIP with split
+        processor_action = [False, False, ]
+        def _doImplicitAttendeeRequest():
+            processor_action[0] = True
+            return succeed(True)
+        def _doImplicitAttendeeCancel():
+            processor_action[1] = True
+            return succeed(True)
+        processor = ImplicitProcessor()
+        processor.getRecipientsCopy = lambda: succeed(None)
+        processor.doImplicitAttendeeRequest = _doImplicitAttendeeRequest
+        processor.doImplicitAttendeeCancel = _doImplicitAttendeeCancel
+
+        cobj = yield self.calendarObjectUnderTest(name="data.ics", calendar_name="calendar", home="user01")
+        processor.recipient_calendar_resource = cobj
+        processor.recipient_calendar = (yield cobj.componentForUser("user01"))
+        processor.message = Component.fromString(itip1 % self.subs)
+        processor.originator = RemoteCalendarUser("mailto:cuser01 at example.org")
+        processor.recipient = LocalCalendarUser("urn:uuid:user01", None)
+        processor.method = "REQUEST"
+        processor.uid = "12345-67890"
+
+        yield processor.doImplicitAttendee()
+        self.assertTrue(processor_action[0])
+        self.assertFalse(processor_action[1])
+        yield self.commit()
+
+        # Now inject a second iTIP with split, this time for the split-off (older) UID
+        processor_action = [False, False, ]
+        processor.getRecipientsCopy = lambda: succeed(None)
+        processor.doImplicitAttendeeRequest = _doImplicitAttendeeRequest
+        processor.doImplicitAttendeeCancel = _doImplicitAttendeeCancel
+
+        processor.recipient_calendar_resource = None
+        processor.recipient_calendar = None
+        processor.message = Component.fromString(itip2 % self.subs)
+        processor.originator = RemoteCalendarUser("mailto:cuser01 at example.org")
+        processor.recipient = LocalCalendarUser("urn:uuid:user01", None)
+        processor.method = "REQUEST"
+        processor.uid = "C4526F4C-4324-4893-B769-BD766E4A4E7C"
+
+        yield processor.doImplicitAttendee()
+        self.assertTrue(processor_action[0])
+        self.assertFalse(processor_action[1])
+
+
+    @inlineCallbacks
+    def test_calendarObjectSplit_external(self):
+        """
+        Test that splitting of calendar objects works and that split iTIP messages are sent to an attendee hosted on an external server.
+        """
+        self.patch(config.Scheduling.Options.Splitting, "Enabled", True)
+        self.patch(config.Scheduling.Options.Splitting, "Size", 1024)
+        self.patch(config.Scheduling.Options.Splitting, "PastDays", 14)
+        self.patch(config.Scheduling.Options.Splitting, "Delay", 2)
+
+        # Create one event that will split
+        calendar = yield self.calendarUnderTest(name="calendar", home="user01")
+
+        data = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:%(now_back30)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:user01 at example.com
+ATTENDEE:mailto:user02 at example.com
+ATTENDEE:mailto:cuser01 at example.org
+DTSTAMP:20051222T210507Z
+ORGANIZER:mailto:user01 at example.com
+RRULE:FREQ=DAILY
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:user01 at example.com
+ATTENDEE:mailto:user02 at example.com
+ATTENDEE:mailto:cuser01 at example.org
+DTSTAMP:20051222T210507Z
+ORGANIZER:mailto:user01 at example.com
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_back24)s
+DTSTART:%(now_back24)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:user01 at example.com
+ATTENDEE:mailto:user02 at example.com
+DTSTAMP:20051222T210507Z
+ORGANIZER:mailto:user01 at example.com
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_fwd10)s
+DTSTART:%(now_fwd10)s
+DURATION:PT1H
+ATTENDEE;PARTSTAT=ACCEPTED:mailto:user01 at example.com
+ATTENDEE:mailto:cuser01 at example.org
+DTSTAMP:20051222T210507Z
+ORGANIZER:mailto:user01 at example.com
+END:VEVENT
+END:VCALENDAR
+"""
+
+        data_future = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:%(now_back14)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE;SCHEDULE-STATUS=1.2:urn:uuid:user02
+ATTENDEE;RSVP=TRUE;SCHEDULE-STATUS=3.7:mailto:cuser01 at example.org
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+RRULE:FREQ=DAILY
+SEQUENCE:1
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_fwd10)s
+DTSTART:%(now_fwd10)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;RSVP=TRUE;SCHEDULE-STATUS=3.7:mailto:cuser01 at example.org
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+"""
+
+        data_past = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:%(relID)s
+DTSTART:%(now_back30)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE;SCHEDULE-STATUS=1.2:urn:uuid:user02
+ATTENDEE;RSVP=TRUE;SCHEDULE-STATUS=3.7:mailto:cuser01 at example.org
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+RRULE:FREQ=DAILY;UNTIL=%(now_back14_1)s
+SEQUENCE:1
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+BEGIN:VEVENT
+UID:%(relID)s
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE;SCHEDULE-STATUS=1.2:urn:uuid:user02
+ATTENDEE;RSVP=TRUE;SCHEDULE-STATUS=3.7:mailto:cuser01 at example.org
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
+END:VEVENT
+BEGIN:VEVENT
+UID:%(relID)s
+RECURRENCE-ID:%(now_back24)s
+DTSTART:%(now_back24)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE;SCHEDULE-STATUS=1.2:urn:uuid:user02
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+"""
+
+        data_future2 = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:%(now_back14)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE:urn:uuid:user02
+ATTENDEE;RSVP=TRUE:mailto:cuser01 at example.org
+DTSTAMP:20051222T210507Z
+EXDATE:%(now_fwd10)s
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+RRULE:FREQ=DAILY
+SEQUENCE:1
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+BEGIN:X-CALENDARSERVER-PERUSER
+UID:12345-67890
+X-CALENDARSERVER-PERUSER-UID:user02
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+TRANSP:TRANSPARENT
+END:X-CALENDARSERVER-PERINSTANCE
+END:X-CALENDARSERVER-PERUSER
+END:VCALENDAR
+"""
+
+        data_past2 = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:%(relID)s
+DTSTART:%(now_back30)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE:urn:uuid:user02
+ATTENDEE;RSVP=TRUE:mailto:cuser01 at example.org
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+RRULE:FREQ=DAILY;UNTIL=%(now_back14_1)s
+SEQUENCE:1
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+BEGIN:VEVENT
+UID:%(relID)s
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE:urn:uuid:user02
+ATTENDEE;RSVP=TRUE:mailto:cuser01 at example.org
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
+END:VEVENT
+BEGIN:VEVENT
+UID:%(relID)s
+RECURRENCE-ID:%(now_back24)s
+DTSTART:%(now_back24)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE:urn:uuid:user02
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
+END:VEVENT
+BEGIN:X-CALENDARSERVER-PERUSER
+UID:%(relID)s
+X-CALENDARSERVER-PERUSER-UID:user02
+BEGIN:X-CALENDARSERVER-PERINSTANCE
+TRANSP:TRANSPARENT
+END:X-CALENDARSERVER-PERINSTANCE
+END:X-CALENDARSERVER-PERUSER
+END:VCALENDAR
+"""
+
+        data_inbox2 = """BEGIN:VCALENDAR
+VERSION:2.0
+METHOD:REQUEST
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:%(now_back14)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE:urn:uuid:user02
+ATTENDEE;RSVP=TRUE:mailto:cuser01 at example.org
+DTSTAMP:20051222T210507Z
+EXDATE:%(now_fwd10)s
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+RRULE:FREQ=DAILY
+SEQUENCE:1
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+END:VCALENDAR
+"""
+
+        data_future_external = """BEGIN:VCALENDAR
+VERSION:2.0
+METHOD:REQUEST
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+X-CALENDARSERVER-SPLIT-OLDER-UID:%(relID)s
+X-CALENDARSERVER-SPLIT-RID;VALUE=DATE-TIME:%(now_back14)s
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:%(now_back14)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02 at example.com;RSVP=TRUE:urn:uuid:user02
+ATTENDEE;RSVP=TRUE:mailto:cuser01 at example.org
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+RRULE:FREQ=DAILY
+SEQUENCE:1
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:%(now_fwd10)s
+DTSTART:%(now_fwd10)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01 at example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;RSVP=TRUE:mailto:cuser01 at example.org
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01 at example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+"""
+
+        data_past_external = """BEGIN:VCALENDAR
+VERSION:2.0
+METHOD:REQUEST
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+X-CALENDARSERVER-SPLIT-NEWER-UID:12345-67890
+X-CALENDARSERVER-SPLIT-RID;VALUE=DATE-TIME:%(now_back14)s
+BEGIN:VEVENT
+UID:%(relID)s
+DTSTART:%(now_back30)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01@example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02@example.com;RSVP=TRUE:urn:uuid:user02
+ATTENDEE;RSVP=TRUE:mailto:cuser01@example.org
+DTSTAMP:20051222T210507Z
+EXDATE:%(now_back24)s
+ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+RRULE:FREQ=DAILY;UNTIL=%(now_back14_1)s
+SEQUENCE:1
+SUMMARY:1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+ 1234567890123456789012345678901234567890
+END:VEVENT
+BEGIN:VEVENT
+UID:%(relID)s
+RECURRENCE-ID:%(now_back25)s
+DTSTART:%(now_back25)s
+DURATION:PT1H
+ATTENDEE;CN=User 01;EMAIL=user01@example.com;PARTSTAT=ACCEPTED:urn:uuid:user01
+ATTENDEE;CN=User 02;EMAIL=user02@example.com;RSVP=TRUE:urn:uuid:user02
+ATTENDEE;RSVP=TRUE:mailto:cuser01@example.org
+DTSTAMP:20051222T210507Z
+ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01
+RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+"""
+
+        # Patch CalDAVScheduler to trap external schedules
+        details = []
+        def _doSchedulingViaPUT(self, originator, recipients, calendar, internal_request=False, suppress_refresh=False):
+            details.append((originator, recipients, calendar,))
+
+            responses = ScheduleResponseQueue("REQUEST", responsecode.OK)
+            for recipient in recipients:
+                responses.add(recipient, responsecode.OK, reqstatus=iTIPRequestStatus.MESSAGE_DELIVERED)
+            return succeed(responses)
+
+        component = Component.fromString(data % self.subs)
+        cobj = yield calendar.createCalendarObjectWithName("data1.ics", component)
+        self.assertTrue(hasattr(cobj, "_workItems"))
+        work = cobj._workItems[0]
+        yield self.commit()
+
+        self.patch(CalDAVScheduler, "doSchedulingViaPUT", _doSchedulingViaPUT)
+
+        w = schema.CALENDAR_OBJECT_SPLITTER_WORK
+        rows = yield Select(
+            [w.RESOURCE_ID, ],
+            From=w
+        ).on(self.transactionUnderTest())
+        self.assertEqual(len(rows), 1)
+        self.assertEqual(rows[0][0], cobj._resourceID)
+        yield self.abort()
+
+        # Wait for it to complete
+        yield work.whenExecuted()
+
+        rows = yield Select(
+            [w.RESOURCE_ID, ],
+            From=w
+        ).on(self.transactionUnderTest())
+        self.assertEqual(len(rows), 0)
+        yield self.abort()
+
+        # Get the existing and new object data
+        cobj1 = yield self.calendarObjectUnderTest(name="data1.ics", calendar_name="calendar", home="user01")
+        self.assertTrue(cobj1.isScheduleObject)
+        ical1 = yield cobj1.component()
+        newUID = ical1.masterComponent().propertyValue("RELATED-TO")
+
+        cobj2 = yield self.calendarObjectUnderTest(name="%s.ics" % (newUID,), calendar_name="calendar", home="user01")
+        self.assertTrue(cobj2 is not None)
+        self.assertTrue(cobj2.isScheduleObject)
+
+        ical_future = yield cobj1.component()
+        ical_past = yield cobj2.component()
+
+        # Verify user01 data
+        title = "user01"
+        relsubs = dict(self.subs)
+        relsubs["relID"] = newUID
+        self.assertEqual(normalize_iCalStr(ical_future), normalize_iCalStr(data_future) % relsubs, "Failed future: %s\n%s" % (title, diff_iCalStrs(ical_future, data_future % relsubs),))
+        self.assertEqual(normalize_iCalStr(ical_past), normalize_iCalStr(data_past) % relsubs, "Failed past: %s\n%s" % (title, diff_iCalStrs(ical_past, data_past % relsubs),))
+
+        # Get user02 data
+        cal = yield self.calendarUnderTest(name="calendar", home="user02")
+        cobjs = yield cal.calendarObjects()
+        self.assertEqual(len(cobjs), 2)
+        for cobj in cobjs:
+            ical = yield cobj.component()
+            if ical.resourceUID() == "12345-67890":
+                ical_future = ical
+            else:
+                ical_past = ical
+
+        cal = yield self.calendarUnderTest(name="inbox", home="user02")
+        cobjs = yield cal.calendarObjects()
+        self.assertEqual(len(cobjs), 1)
+        ical_inbox = yield cobjs[0].component()
+
+        # Verify user02 data
+        title = "user02"
+        self.assertEqual(normalize_iCalStr(ical_future), normalize_iCalStr(data_future2) % relsubs, "Failed future: %s\n%s" % (title, diff_iCalStrs(ical_future, data_future2 % relsubs),))
+        self.assertEqual(normalize_iCalStr(ical_past), normalize_iCalStr(data_past2) % relsubs, "Failed past: %s\n%s" % (title, diff_iCalStrs(ical_past, data_past2 % relsubs),))
+        self.assertEqual(normalize_iCalStr(ical_inbox), normalize_iCalStr(data_inbox2) % relsubs, "Failed inbox: %s\n%s" % (title, diff_iCalStrs(ical_inbox, data_inbox2 % relsubs),))
+
+        # Verify cuser02 data
+        self.assertEqual(len(details), 2)
+        self.assertEqual(details[0][0], "urn:uuid:user01")
+        self.assertEqual(details[0][1], ("mailto:cuser01@example.org",))
+        self.assertEqual(normalize_iCalStr(details[0][2]), normalize_iCalStr(data_future_external) % relsubs, "Failed future: %s\n%s" % (title, diff_iCalStrs(details[0][2], data_future_external % relsubs),))
+
+        self.assertEqual(details[1][0], "urn:uuid:user01")
+        self.assertEqual(details[1][1], ("mailto:cuser01@example.org",))
+        self.assertEqual(normalize_iCalStr(details[1][2]), normalize_iCalStr(data_past_external) % relsubs, "Failed past: %s\n%s" % (title, diff_iCalStrs(details[1][2], data_past_external % relsubs),))
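The test above works by patching CalDAVScheduler's doSchedulingViaPUT with a recorder that captures each outbound (originator, recipients, calendar) triple instead of delivering it, then asserting on what was captured. A minimal stand-alone sketch of that trap pattern, with FakeScheduler and send() as illustrative stand-ins rather than CalendarServer APIs:

```python
# Sketch of "patch a scheduler method to trap outbound messages".
# FakeScheduler/send are hypothetical; only the patching pattern mirrors the test.

class FakeScheduler(object):
    def send(self, originator, recipients, payload):
        raise RuntimeError("would hit the network")

details = []

def _trappedSend(self, originator, recipients, payload):
    # Record the call instead of delivering, like _doSchedulingViaPUT above.
    details.append((originator, recipients, payload))
    return "delivered"  # stand-in for a ScheduleResponseQueue of OK responses

# Equivalent of self.patch(CalDAVScheduler, "doSchedulingViaPUT", _doSchedulingViaPUT)
FakeScheduler.send = _trappedSend

FakeScheduler().send("urn:uuid:user01", ("mailto:cuser01@example.org",), "BEGIN:VCALENDAR")
```

Because the patch is applied at class level, every instance created afterwards records into the shared `details` list, which is what lets the test assert on external scheduling traffic after the split work item runs.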

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_util.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_util.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/test/test_util.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -354,20 +354,19 @@
         }, self.storeUnderTest())
         txn = self.transactionUnderTest()
         emptyHome = yield txn.calendarHomeWithUID("empty_home")
-        self.assertIdentical((yield emptyHome.calendarWithName("calendar")),
-                             None)
+        self.assertIdentical((yield emptyHome.calendarWithName("calendar")), None)
         nonEmpty = yield txn.calendarHomeWithUID("non_empty_home")
         yield migrateHome(emptyHome, nonEmpty)
         yield self.commit()
         txn = self.transactionUnderTest()
         emptyHome = yield txn.calendarHomeWithUID("empty_home")
         nonEmpty = yield txn.calendarHomeWithUID("non_empty_home")
-        self.assertIdentical((yield nonEmpty.calendarWithName("inbox")),
-                             None)
-        self.assertIdentical((yield nonEmpty.calendarWithName("calendar")),
-                             None)
 
+        self.assertIdentical((yield nonEmpty.calendarWithName("calendar")), None)
+        self.assertNotIdentical((yield nonEmpty.calendarWithName("inbox")), None)
+        self.assertNotIdentical((yield nonEmpty.calendarWithName("other-default-calendar")), None)
 
+
     @staticmethod
     def sampleEvent(uid, summary=None):
         """

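The rewritten assertions above rely on trial's assertIdentical/assertNotIdentical, which test object identity (`is`) rather than equality, against the store convention that calendarWithName returns None for an absent calendar. A plain-Python sketch of that distinction, using a dict as a stand-in for a calendar home (not the real store API):

```python
# assertIdentical(x, None) is an "x is None" check; assertNotIdentical the negation.

def calendarWithName(home, name):
    # stand-in: the real API returns None when the named calendar is absent
    return home.get(name)

# state the test expects after migrating an empty home over a non-empty one
migrated = {"inbox": [], "other-default-calendar": []}

assert calendarWithName(migrated, "calendar") is None            # assertIdentical
assert calendarWithName(migrated, "inbox") is not None           # assertNotIdentical
assert calendarWithName(migrated, "other-default-calendar") is not None
```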
Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/util.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/util.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/datastore/util.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -356,8 +356,7 @@
 
 
 @inlineCallbacks
-def migrateHome(inHome, outHome, getComponent=lambda x: x.component(),
-                merge=False):
+def migrateHome(inHome, outHome, getComponent=lambda x: x.component(), merge=False):
     """
     Copy all calendars and properties in the given input calendar home to the
     given output calendar home.
@@ -373,7 +372,7 @@
         a calendar in outHome).
 
     @param merge: a boolean indicating whether to raise an exception when
-        encounting a conflicting element of data (calendar or event), or to
+        encountering a conflicting element of data (calendar or event), or to
         attempt to merge them together.
 
     @return: a L{Deferred} that fires with C{None} when the migration is
@@ -398,8 +397,7 @@
         yield d
         outCalendar = yield outHome.calendarWithName(name)
         try:
-            yield _migrateCalendar(calendar, outCalendar, getComponent,
-                                   merge=merge)
+            yield _migrateCalendar(calendar, outCalendar, getComponent, merge=merge)
         except InternalDataStoreError:
             log.error(
                 "  Failed to migrate calendar: %s/%s" % (inHome.name(), name,)
@@ -408,6 +406,11 @@
     # No migration for notifications, since they weren't present in earlier
     # released versions of CalendarServer.
 
+    # May need to create inbox if it was not present in the original file store for some reason
+    inboxCalendar = yield outHome.calendarWithName("inbox")
+    if inboxCalendar is None:
+        yield outHome.createCalendarWithName("inbox")
+
     # May need to split calendars by component type
     if config.RestrictCalendarsToOneComponentType:
         yield outHome.splitCalendars()

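The new hunk in migrateHome guards against file stores that never had an inbox: after the calendar copy it checks calendarWithName("inbox") and creates the collection if the lookup returns None. A dict-based sketch of that guard, assuming nothing about the real store API beyond "missing means None":

```python
# Hedged sketch of the "create inbox if it was not present" step added above.

def ensure_inbox(home):
    """Create an 'inbox' collection when the migrated-from store lacked one."""
    # mirrors: if (yield outHome.calendarWithName("inbox")) is None: createCalendarWithName("inbox")
    if home.get("inbox") is None:
        home["inbox"] = []
    return home

out_home = {"calendar": ["data1.ics"]}
ensure_inbox(out_home)
```

The check is idempotent, so running it against a home that already has an inbox leaves the existing collection untouched.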
Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/icalendarstore.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/icalendarstore.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/caldav/icalendarstore.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -385,15 +385,12 @@
         Low-level query to gather names for calendarObjectsSinceToken.
         """
 
-    def asShared(): #@NoSelf
+    def sharingInvites(): #@NoSelf
         """
-        Get a view of this L{ICalendar} as present in everyone's calendar home
-        except for its owner's.
+        Retrieve the list of all L{SharingInvitation} for this L{CommonHomeChild}, irrespective of mode.
 
-        @return: a L{Deferred} which fires with a list of L{ICalendar}s, each
-            L{ICalendar} as seen by its respective sharee.  This means that its
-            C{shareMode} will be something other than L{_BIND_MODE_OWN}, and its
-            L{ICalendar.viewerCalendarHome} will return the home of the sharee.
+        @return: L{SharingInvitation} objects
+        @rtype: a L{Deferred} which fires with a L{list} of L{SharingInvitation}s.
         """
 
     # FIXME: This module should define it's own constants and this
@@ -863,9 +860,11 @@
 
     ATTACHMENT_UPDATE     - change to a managed attachment that is re-writing calendar data.
 
-    SPLIT                 - calendar data is being split. Some validation and implicit scheduling is not done.
-                            Schedule-Tag is changed.
+    SPLIT_OWNER           - owner calendar data is being split. Implicit is done with non-hosted attendees.
 
+    SPLIT_ATTENDEE        - attendee calendar data is being split. No implicit done, but some extra processing
+                            is done (more than RAW).
+
     RAW                   - store the supplied data as-is without any processing or validation. This is used
                             for unit testing purposes only.
     """
@@ -875,7 +874,8 @@
     ORGANIZER_ITIP_UPDATE = NamedConstant()
     ATTENDEE_ITIP_UPDATE = NamedConstant()
     ATTACHMENT_UPDATE = NamedConstant()
-    SPLIT = NamedConstant()
+    SPLIT_OWNER = NamedConstant()
+    SPLIT_ATTENDEE = NamedConstant()
     RAW = NamedConstant()
 
     NORMAL.description = "normal"
@@ -883,7 +883,8 @@
     ORGANIZER_ITIP_UPDATE.description = "organizer-update"
     ATTENDEE_ITIP_UPDATE.description = "attendee-update"
     ATTACHMENT_UPDATE.description = "attachment-update"
-    SPLIT.description = "split"
+    SPLIT_OWNER.description = "split-owner"
+    SPLIT_ATTENDEE.description = "split-attendee"
     RAW.description = "raw"
 
 

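The SPLIT constant above is replaced by two NamedConstant members, SPLIT_OWNER and SPLIT_ATTENDEE, each carrying a description attribute. A minimal stand-in sketch of that enumeration pattern (a hypothetical NamedConstant class so the example runs without Twisted; the descriptions match the diff):

```python
# Each class attribute is a distinct sentinel object with an attached description.

class NamedConstant(object):
    def __init__(self):
        self.description = None

class ComponentUpdateState(object):
    SPLIT_OWNER = NamedConstant()
    SPLIT_ATTENDEE = NamedConstant()

ComponentUpdateState.SPLIT_OWNER.description = "split-owner"
ComponentUpdateState.SPLIT_ATTENDEE.description = "split-attendee"
```

Splitting the constant lets the store branch on identity, applying implicit scheduling with non-hosted attendees only in the owner case.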
Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/carddav/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/carddav/datastore/sql.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/carddav/datastore/sql.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -1,5 +1,5 @@
 # -*- test-case-name: txdav.carddav.datastore.test.test_sql -*-
-##
+# #
 # Copyright (c) 2010-2013 Apple Inc. All rights reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
@@ -13,7 +13,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-##
+# #
 
 """
 SQL backend for CardDAV storage.
@@ -25,6 +25,8 @@
     "AddressBookObject",
 ]
 
+from copy import deepcopy
+
 from twext.enterprise.dal.syntax import Delete, Insert, Len, Parameter, \
     Update, Union, Max, Select, utcNowSQL
 from twext.enterprise.locking import NamedLock
@@ -45,10 +47,10 @@
 from txdav.base.propertystore.base import PropertyName
 from txdav.base.propertystore.sql import PropertyStore
 from txdav.carddav.iaddressbookstore import IAddressBookHome, IAddressBook, \
-    IAddressBookObject, GroupForSharedAddressBookDeleteNotAllowedError, \
-    GroupWithUnsharedAddressNotAllowedError, SharedGroupDeleteNotAllowedError
+    IAddressBookObject, GroupWithUnsharedAddressNotAllowedError, \
+    KindChangeNotAllowedError
 from txdav.common.datastore.sql import CommonHome, CommonHomeChild, \
-    CommonObjectResource, EADDRESSBOOKTYPE, SharingMixIn
+    CommonObjectResource, EADDRESSBOOKTYPE, SharingMixIn, SharingInvitation
 from txdav.common.datastore.sql_legacy import PostgresLegacyABIndexEmulator
 from txdav.common.datastore.sql_tables import _ABO_KIND_PERSON, \
     _ABO_KIND_GROUP, _ABO_KIND_RESOURCE, _ABO_KIND_LOCATION, schema, \
@@ -57,14 +59,13 @@
 from txdav.common.icommondatastore import InternalDataStoreError, \
     InvalidUIDError, UIDExistsError, ObjectResourceTooBigError, \
     InvalidObjectResourceError, InvalidComponentForStoreError, \
-    AllRetriesFailed
+    AllRetriesFailed, ObjectResourceNameAlreadyExistsError
 
 from uuid import uuid4
 
 from zope.interface.declarations import implements
 
 
-
 class AddressBookHome(CommonHome):
 
     implements(IAddressBookHome)
@@ -101,7 +102,7 @@
 
 
     @classproperty
-    def _resourceIDAndHomeResourceIDFromOwnerQuery(cls):  #@NoSelf
+    def _resourceIDAndHomeResourceIDFromOwnerQuery(cls): #@NoSelf
         home = cls._homeSchema
         return Select([home.RESOURCE_ID, home.ADDRESSBOOK_PROPERTY_STORE_ID],
                       From=home, Where=home.OWNER_UID == Parameter("ownerUID"))
@@ -139,7 +140,7 @@
                     # Cache the data
                     yield queryCacher.setAfterCommit(self._txn, cacheKey, data)
 
-            #self._created, self._modified = data
+            # self._created, self._modified = data
             yield self._loadPropertyStore()
 
             # created owned address book
@@ -264,7 +265,7 @@
 
 
     @classproperty
-    def _syncTokenQuery(cls):  #@NoSelf
+    def _syncTokenQuery(cls): #@NoSelf
         """
         DAL Select statement to find the sync token.
         """
@@ -308,7 +309,7 @@
 
 
     @classproperty
-    def _changesQuery(cls):  #@NoSelf
+    def _changesQuery(cls): #@NoSelf
         rev = cls._revisionsSchema
         return Select(
             [rev.COLLECTION_NAME,
@@ -336,6 +337,8 @@
 
 AddressBookHome._register(EADDRESSBOOKTYPE)
 
+
+
 class AddressBookSharingMixIn(SharingMixIn):
     """
         Sharing code shared between AddressBook and AddressBookObject
@@ -358,7 +361,7 @@
     @inlineCallbacks
     def _isSharedOrInvited(self):
         """
-        return a bool if this L{AddressBook} is shared or invited
+        return True if this L{AddressBook} is shared or invited
         """
         sharedRows = []
         if self.owned():
@@ -370,12 +373,14 @@
 
         returnValue(bool(sharedRows))
 
+
     @inlineCallbacks
     def _initIsShared(self):
         isShared = yield self._isSharedOrInvited()
         self.setShared(isShared)
 
 
+
 class AddressBook(CommonHomeChild, AddressBookSharingMixIn):
     """
     SQL-based implementation of L{IAddressBook}.
@@ -430,10 +435,21 @@
     addressbookObjectsSinceToken = CommonHomeChild.objectResourcesSinceToken
 
 
-    def shareeAddressBookName(self):
-        return self._home.shareeAddressBookName()
+    def shareeName(self):
+        """
+        The sharee's name for a shared address book is the sharer's home ownerUID.
+        """
+        return self.ownerHome().shareeAddressBookName()
 
 
+    def bindNameIsResourceName(self):
+        """
+        For shared address books the resource name of an accepted share is not the same as the name
+        in the bind table.
+        """
+        return False
+
+
     @inlineCallbacks
     def _loadPropertyStore(self, props=None):
         if props is None:
@@ -469,10 +485,10 @@
     @classmethod
     def create(cls, home, name):
         if name == home.addressbook().name():
-            #raise HomeChildNameAlreadyExistsError
+            # raise HomeChildNameAlreadyExistsError
             pass
         else:
-            #raise HomeChildNameNotAllowedError
+            # raise HomeChildNameNotAllowedError
             raise HTTPError(FORBIDDEN)
 
 
@@ -531,8 +547,8 @@
 
             # account for fully-shared address book group
             if self.fullyShared():
-                if not self._fullySharedAddressBookGroupName() in objectNames:
-                    objectNames.append(self._fullySharedAddressBookGroupName())
+                if not self._groupForSharedAddressBookName() in objectNames:
+                    objectNames.append(self._groupForSharedAddressBookName())
             self._objectNames = sorted(objectNames)
 
         returnValue(self._objectNames)
@@ -568,36 +584,36 @@
                       Where=obj.ADDRESSBOOK_HOME_RESOURCE_ID == Parameter("addressbookResourceID"),)
 
 
-    def _fullySharedAddressBookGroupRow(self):  #@NoSelf
+    def _groupForSharedAddressBookRow(self): #@NoSelf
         return [
             self._resourceID,  # obj.ADDRESSBOOK_HOME_RESOURCE_ID,
             self._resourceID,  # obj.RESOURCE_ID,
-            self._fullySharedAddressBookGroupName(),  # obj.RESOURCE_NAME, shared name is UID and thus avoids collisions
-            self._fullySharedAddressBookGroupUID(),  # obj.UID, shared name is uuid
+            self._groupForSharedAddressBookName(),  # obj.RESOURCE_NAME, shared name is UID and thus avoids collisions
+            self._groupForSharedAddressBookUID(),  # obj.UID, shared name is uuid
             _ABO_KIND_GROUP,  # obj.KIND,
-            "1",  # obj.MD5, unused
-            "1",  # Len(obj.TEXT), unused
+            "1",  # obj.MD5, non-zero temporary value; set to correct value when known
+            "1",  # Len(obj.TEXT), non-zero temporary value; set to correct value when known
             self._created,  # obj.CREATED,
             self._modified,  # obj.MODIFIED,
         ]
 
 
-    def _fullySharedAddressBookGroupName(self):
+    def _groupForSharedAddressBookName(self):
         return self.ownerHome().addressbook().name() + ".vcf"
 
 
-    def _fullySharedAddressBookGroupUID(self):
-        return self.name()
+    def _groupForSharedAddressBookUID(self):
+        return self.shareUID()
 
 
     @inlineCallbacks
-    def _fullySharedAddressBookGroupComponent(self):
+    def _groupForSharedAddressBookComponent(self):
 
-        n = self.ownerHome().shareeAddressBookName()
+        n = self.shareeName()
         fn = n
-        uid = self.name()
+        uid = self._groupForSharedAddressBookUID()
 
-        #  store bridge should substitute principal name and full name
+        #  storebridge should substitute principal name and full name
         #      owner = yield CalDAVResource.principalForUID(self.ownerHome().uid())
         #      n = owner.name()
         #      fn = owner.displayName()
@@ -661,7 +677,7 @@
         )
         # get ownerHomeIDs
         for dataRow in dataRows:
-            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = dataRow[:cls.bindColumnCount]  #@UnusedVariable
+            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = dataRow[:cls.bindColumnCount] #@UnusedVariable
             ownerHome = yield home.ownerHomeWithChildID(resourceID)
             ownerHomeToDataRowMap[ownerHome] = dataRow
 
@@ -670,7 +686,7 @@
             home._txn, homeID=home._resourceID
         )
         for groupBindRow in groupBindRows:
-            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount]  #@UnusedVariable
+            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
             ownerAddressBookID = yield AddressBookObject.ownerAddressBookIDFromGroupID(home._txn, resourceID)
             ownerHome = yield home.ownerHomeWithChildID(ownerAddressBookID)
             if ownerHome not in ownerHomeToDataRowMap:
@@ -693,7 +709,7 @@
 
             # Create the actual objects merging in properties
             for ownerHome, dataRow in ownerHomeToDataRowMap.iteritems():
-                bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = dataRow[:cls.bindColumnCount]  #@UnusedVariable
+                bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = dataRow[:cls.bindColumnCount] #@UnusedVariable
                 additionalBind = dataRow[cls.bindColumnCount:cls.bindColumnCount + len(cls.additionalBindColumns())]
                 metadata = dataRow[cls.bindColumnCount + len(cls.additionalBindColumns()):]
 
@@ -739,8 +755,8 @@
         """
         if accepted and name == home.addressbook().name():
             returnValue(home.addressbook())
+        # shared address books only from this point on
 
-        # all shared address books now
         rows = None
         queryCacher = home._txn._queryCacher
         ownerHome = None
@@ -748,35 +764,37 @@
         if queryCacher:
             # Retrieve data from cache
             cacheKey = queryCacher.keyForObjectWithName(home._resourceID, name)
-            rows = yield queryCacher.get(cacheKey)
+            cachedRows = yield queryCacher.get(cacheKey)
+            if cachedRows and (cachedRows[0][4] == _BIND_STATUS_ACCEPTED) == bool(accepted):
+                rows = cachedRows
 
-        if rows is None:
-
+        if not rows:
             # name must be a home uid
             ownerHome = yield home._txn.addressbookHomeWithUID(name)
             if ownerHome:
                 # see if address book resource id in bind table
                 ownerAddressBook = ownerHome.addressbook()
-                rows = yield cls._bindForResourceIDAndHomeID.on(
+                bindRows = yield cls._bindForResourceIDAndHomeID.on(
                     home._txn, resourceID=ownerAddressBook._resourceID, homeID=home._resourceID
                 )
-                if rows:
-                    rows[0].insert(cls.bindColumnCount, ownerAddressBook._resourceID)
-                    rows[0].insert(cls.bindColumnCount + 1, rows[0][4])  # cachedStatus = bindStatus
+                if bindRows and (bindRows[0][4] == _BIND_STATUS_ACCEPTED) == bool(accepted):
+                    bindRows[0].insert(cls.bindColumnCount, ownerAddressBook._resourceID)
+                    bindRows[0].insert(cls.bindColumnCount + 1, bindRows[0][4])  # cachedStatus = bindStatus
+                    rows = bindRows
                 else:
                     groupBindRows = yield AddressBookObject._bindForHomeIDAndAddressBookID.on(
                             home._txn, homeID=home._resourceID, addressbookID=ownerAddressBook._resourceID
                     )
-                    if groupBindRows:
-                        groupBindRow = groupBindRows[0]
-                        cachedBindStatus = groupBindRow[4]  # save off bindStatus
-                        groupBindRow[0] = _BIND_MODE_WRITE
-                        groupBindRow[3] = None  # bindName
-                        groupBindRow[4] = None  # bindStatus
-                        groupBindRow[6] = None  # bindMessage
-                        groupBindRow.insert(AddressBookObject.bindColumnCount, ownerAddressBook._resourceID)
-                        groupBindRow.insert(AddressBookObject.bindColumnCount + 1, cachedBindStatus)
-                        rows = [groupBindRow]
+                    for groupBindRow in groupBindRows:
+                        if (groupBindRow[4] == _BIND_STATUS_ACCEPTED) == bool(accepted):
+                            groupBindRow.insert(AddressBookObject.bindColumnCount, ownerAddressBook._resourceID)
+                            groupBindRow.insert(AddressBookObject.bindColumnCount + 1, groupBindRow[4])
+                            groupBindRow[0] = _BIND_MODE_WRITE
+                            groupBindRow[3] = None  # bindName
+                            groupBindRow[4] = None  # bindStatus
+                            groupBindRow[6] = None  # bindMessage
+                            rows = [groupBindRow]
+                            break
 
             if rows and queryCacher:
                 # Cache the result
@@ -785,17 +803,17 @@
         if not rows:
             returnValue(None)
 
-        bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage, ownerAddressBookID, cachedBindStatus = rows[0]  #@UnusedVariable
+        row = rows[0]
+        bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage, ownerAddressBookID, cachedBindStatus = row[:cls.bindColumnCount + 2] #@UnusedVariable
 
         # if wrong status, exit here.  Item is in queryCache
         if (cachedBindStatus == _BIND_STATUS_ACCEPTED) != bool(accepted):
             returnValue(None)
 
         ownerHome = yield home.ownerHomeWithChildID(ownerAddressBookID)
-        ownerAddressBook = ownerHome.addressbook()
         child = cls(
                 home=home,
-                name=ownerAddressBook.shareeAddressBookName(), resourceID=ownerAddressBookID,
+                name=ownerHome.shareeAddressBookName(), resourceID=ownerAddressBookID,
                 mode=bindMode, status=bindStatus,
                 revision=bindRevision,
                 message=bindMessage, ownerHome=ownerHome,
@@ -820,10 +838,9 @@
             exists.
         """
         bindRows = yield cls._bindForNameAndHomeID.on(home._txn, name=name, homeID=home._resourceID)
-        if bindRows:
-            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = bindRows[0]  #@UnusedVariable
-            if (bindStatus == _BIND_STATUS_ACCEPTED) != bool(accepted):
-                returnValue(None)
+        if bindRows and (bindRows[0][4] == _BIND_STATUS_ACCEPTED) == bool(accepted):
+            bindRow = bindRows[0]
+            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = bindRow[:cls.bindColumnCount] #@UnusedVariable
 
             # alt:
             # returnValue((yield cls.objectWithID(home, resourceID)))
@@ -836,10 +853,9 @@
         groupBindRows = yield AddressBookObject._bindForNameAndHomeID.on(
             home._txn, name=name, homeID=home._resourceID
         )
-        if groupBindRows:
-            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRows[0]  #@UnusedVariable
-            if (bindStatus == _BIND_STATUS_ACCEPTED) != bool(accepted):
-                returnValue(None)
+        if groupBindRows and (groupBindRows[0][4] == _BIND_STATUS_ACCEPTED) == bool(accepted):
+            groupBindRow = groupBindRows[0]
+            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
 
             ownerAddressBookID = yield AddressBookObject.ownerAddressBookIDFromGroupID(home._txn, resourceID)
             # alt:
@@ -848,6 +864,7 @@
             addressbook = yield home.childWithName(ownerHome.shareeAddressBookName())
             if not addressbook:
                 addressbook = yield cls.objectWithName(home, ownerHome.shareeAddressBookName(), accepted=False)
+                assert addressbook
 
             if accepted:
                 returnValue((yield addressbook.objectResourceWithID(resourceID)))
@@ -875,10 +892,9 @@
         bindRows = yield cls._bindForResourceIDAndHomeID.on(
             home._txn, resourceID=resourceID, homeID=home._resourceID
         )
-        if bindRows:
-            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = bindRows[0]  #@UnusedVariable
-            if (bindStatus == _BIND_STATUS_ACCEPTED) != bool(accepted):
-                returnValue(None)
+        if bindRows and (bindRows[0][4] == _BIND_STATUS_ACCEPTED) == bool(accepted):
+            bindRow = bindRows[0]
+            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = bindRow[:cls.bindColumnCount] #@UnusedVariable
 
             ownerHome = yield home.ownerHomeWithChildID(resourceID)
             if accepted:
@@ -889,10 +905,9 @@
         groupBindRows = yield AddressBookObject._bindForHomeIDAndAddressBookID.on(
                     home._txn, homeID=home._resourceID, addressbookID=resourceID
         )
-        if groupBindRows:
-            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRows[0]  #@UnusedVariable
-            if (bindStatus == _BIND_STATUS_ACCEPTED) != bool(accepted):
-                returnValue(None)
+        if groupBindRows and (groupBindRows[0][4] == _BIND_STATUS_ACCEPTED) == bool(accepted):
+            groupBindRow = groupBindRows[0]
+            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
 
             ownerAddressBookID = yield AddressBookObject.ownerAddressBookIDFromGroupID(home._txn, resourceID)
             ownerHome = yield home.ownerHomeWithChildID(ownerAddressBookID)
@@ -929,7 +944,7 @@
             home._txn, homeID=home._resourceID
         )
         for row in rows:
-            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = row[:cls.bindColumnCount]  #@UnusedVariable
+            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = row[:cls.bindColumnCount] #@UnusedVariable
             ownerHome = yield home._txn.homeWithResourceID(home._homeType, resourceID, create=True)
             names |= set([ownerHome.shareeAddressBookName()])
 
@@ -937,7 +952,7 @@
             home._txn, homeID=home._resourceID
         )
         for groupRow in groupRows:
-            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupRow[:AddressBookObject.bindColumnCount]  #@UnusedVariable
+            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
             ownerAddressBookID = yield AddressBookObject.ownerAddressBookIDFromGroupID(home._txn, resourceID)
             ownerHome = yield home._txn.homeWithResourceID(home._homeType, ownerAddressBookID, create=True)
             names |= set([ownerHome.shareeAddressBookName()])
@@ -1008,7 +1023,7 @@
             readWriteGroupIDs = []
             readOnlyGroupIDs = []
             for groupBindRow in groupBindRows:
-                bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount]  #@UnusedVariable
+                bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
                 if bindMode == _BIND_MODE_WRITE:
                     readWriteGroupIDs.append(resourceID)
                 else:
@@ -1025,7 +1040,7 @@
             returnValue((tuple(adjustedReadOnlyGroupIDs), tuple(adjustedReadWriteGroupIDs)))
 
 
-    #FIXME: Unused
+    # FIXME: Unused
     @inlineCallbacks
     def readOnlyGroupIDs(self):
         returnValue((yield self.accessControlGroupIDs())[0])
@@ -1036,7 +1051,7 @@
         returnValue((yield self.accessControlGroupIDs())[1])
 
 
-    #FIXME: Unused:  Use for caching access
+    # FIXME: Unused:  Use for caching access
     @inlineCallbacks
     def accessControlObjectIDs(self):
         readOnlyIDs = set()
@@ -1056,7 +1071,7 @@
         readWriteGroupIDs = []
         readOnlyGroupIDs = []
         for groupBindRow in groupBindRows:
-            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount]  #@UnusedVariable
+            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
             if bindMode == _BIND_MODE_WRITE:
                 readWriteGroupIDs.append(resourceID)
             else:
@@ -1070,19 +1085,19 @@
         returnValue(tuple(readOnlyIDs), tuple(readWriteIDs))
 
 
-    #FIXME: Unused:  Use for caching access
+    # FIXME: Unused:  Use for caching access
     @inlineCallbacks
     def readOnlyObjectIDs(self):
         returnValue((yield self.accessControlObjectIDs())[1])
 
 
-    #FIXME: Unused:  Use for caching access
+    # FIXME: Unused:  Use for caching access
     @inlineCallbacks
     def readWriteObjectIDs(self):
         returnValue((yield self.accessControlObjectIDs())[1])
 
 
-    #FIXME: Unused:  Use for caching access
+    # FIXME: Unused:  Use for caching access
     @inlineCallbacks
     def allObjectIDs(self):
         readOnlyIDs, readWriteIDs = yield self.accessControlObjectIDs()
@@ -1090,7 +1105,7 @@
 
 
     @inlineCallbacks
-    def updateShare(self, shareeView, mode=None, status=None, message=None, name=None):
+    def updateShare(self, shareeView, mode=None, status=None, message=None):
         """
         Update share mode, status, and message for a home child shared with
         this (owned) L{CommonHomeChild}.
@@ -1111,22 +1126,18 @@
             will be used as the default display name, or None to not update
         @type message: L{str}
 
-        @param name: The bind resource name or None to not update
-        @type message: L{str}
-
         @return: the name of the shared item in the sharee's home.
         @rtype: a L{Deferred} which fires with a L{str}
         """
         # TODO: raise a nice exception if shareeView is not, in fact, a shared
         # version of this same L{CommonHomeChild}
 
-        #remove None parameters, and substitute None for empty string
+        # remove None parameters, and substitute None for empty string
         bind = self._bindSchema
-        columnMap = dict([(k, v if v else None)
+        columnMap = dict([(k, v if v != "" else None)
                           for k, v in {bind.BIND_MODE:mode,
                             bind.BIND_STATUS:status,
-                            bind.MESSAGE:message,
-                            bind.RESOURCE_NAME:name}.iteritems() if v is not None])
+                            bind.MESSAGE:message}.iteritems() if v is not None])
 
         if len(columnMap):
 
@@ -1134,15 +1145,15 @@
             if status is not None:
                 previouslyAcceptedBindCount = 1 if shareeView.fullyShared() else 0
                 previouslyAcceptedBindCount += len((yield AddressBookObject._acceptedBindForHomeIDAndAddressBookID.on(
-                        self._txn, homeID=shareeView._home._resourceID, addressbookID=shareeView._resourceID
+                        self._txn, homeID=shareeView.viewerHome()._resourceID, addressbookID=shareeView._resourceID
                 )))
 
             bindNameRows = yield self._updateBindColumnsQuery(columnMap).on(
                 self._txn,
-                resourceID=self._resourceID, homeID=shareeView._home._resourceID
+                resourceID=self._resourceID, homeID=shareeView.viewerHome()._resourceID
             )
 
-            #update affected attributes
+            # update affected attributes
             if mode is not None:
                 shareeView._bindMode = columnMap[bind.BIND_MODE]
 
@@ -1152,20 +1163,20 @@
                     if 0 == previouslyAcceptedBindCount:
                         yield shareeView._initSyncToken()
                         yield shareeView._initBindRevision()
-                        shareeView._home._children[shareeView._name] = shareeView
-                        shareeView._home._children[shareeView._resourceID] = shareeView
+                        shareeView.viewerHome()._children[self.shareeName()] = shareeView
+                        shareeView.viewerHome()._children[shareeView._resourceID] = shareeView
                 elif shareeView._bindStatus == _BIND_STATUS_DECLINED:
                     if 1 == previouslyAcceptedBindCount:
                         yield shareeView._deletedSyncToken(sharedRemoval=True)
-                        shareeView._home._children.pop(shareeView._name, None)
-                        shareeView._home._children.pop(shareeView._resourceID, None)
+                        shareeView.viewerHome()._children.pop(self.shareeName(), None)
+                        shareeView.viewerHome()._children.pop(shareeView._resourceID, None)
 
             if message is not None:
                 shareeView._bindMessage = columnMap[bind.MESSAGE]
 
             queryCacher = self._txn._queryCacher
             if queryCacher:
-                cacheKey = queryCacher.keyForObjectWithName(shareeView._home._resourceID, shareeView._name)
+                cacheKey = queryCacher.keyForObjectWithName(shareeView.viewerHome()._resourceID, self.shareeName())
                 queryCacher.invalidateAfterCommit(self._txn, cacheKey)
 
             shareeView._name = bindNameRows[0][0]
@@ -1177,65 +1188,17 @@
 
 
     @inlineCallbacks
-    def asShared(self):
-        """
-        Retrieve all the versions of this L{AddressBook} as it is shared to
-        everyone.
-
-        @see: L{IAddressBookHome.asShared}
-
-        @return: L{CommonHomeChild} objects that represent this
-            L{CommonHomeChild} as a child of different L{CommonHome}s
-        @rtype: a L{Deferred} which fires with a L{list} of L{ICalendar}s.
-        """
-        result = []
-        if self.owned():
-            # get all accepted shared binds
-            bindRows = yield self._sharedBindForResourceID.on(
-                self._txn, resourceID=self._resourceID, homeID=self._home._resourceID
-            )
-            for bindRow in bindRows:
-                bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = bindRow[:self.bindColumnCount]  #@UnusedVariable
-                home = yield self._txn.homeWithResourceID(self._home._homeType, homeID, create=True)
-                new = yield home.childWithName(self.shareeAddressBookName())
-                result.append(new)
-
-        returnValue(result)
-
-
-    @inlineCallbacks
-    def asInvited(self):
-        """
-        Retrieve all the versions of this L{CommonHomeChild} as it is invited to
-        everyone.
-
-        @see: L{ICalendarHome.asInvited}
-
-        @return: L{CommonHomeChild} objects that represent this
-            L{CommonHomeChild} as a child of different L{CommonHome}s
-        @rtype: a L{Deferred} which fires with a L{list} of L{ICalendar}s.
-        """
-        result = []
-        if self.owned():
-            # get all accepted shared binds
-            bindRows = yield self._unacceptedBindForResourceID.on(
-                self._txn, resourceID=self._resourceID
-            )
-            for bindRow in bindRows:
-                bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = bindRow[:self.bindColumnCount]  #@UnusedVariable
-                home = yield self._txn.homeWithResourceID(self._home._homeType, homeID, create=True)
-                new = yield self.objectWithName(home, self.shareeAddressBookName(), accepted=False)
-                result.append(new)
-
-        returnValue(result)
-
-
-    @inlineCallbacks
     def shareWith(self, shareeHome, mode, status=None, message=None):
         """
             call super and set isShared = True
         """
         bindName = yield super(AddressBook, self).shareWith(shareeHome, mode, status, message)
+
+        queryCacher = self._txn._queryCacher
+        if queryCacher:
+            cacheKey = queryCacher.keyForObjectWithName(shareeHome._resourceID, self.shareeName())
+            queryCacher.invalidateAfterCommit(self._txn, cacheKey)
+
         self.setShared(True)
         returnValue(bindName)
 
@@ -1253,7 +1216,7 @@
 
         @return: a L{Deferred} which will fire with the previous shareUID
         """
-        sharedAddressBook = yield shareeHome.addressbookWithName(self.shareeAddressBookName())
+        sharedAddressBook = yield shareeHome.addressbookWithName(self.shareeName())
         if sharedAddressBook:
 
             acceptedBindCount = 1 if sharedAddressBook.fullyShared() else 0
@@ -1262,10 +1225,10 @@
             )))
             if acceptedBindCount == 1:
                 yield sharedAddressBook._deletedSyncToken(sharedRemoval=True)
-                shareeHome._children.pop(sharedAddressBook.name(), None)
+                shareeHome._children.pop(self.shareeName(), None)
                 shareeHome._children.pop(sharedAddressBook._resourceID, None)
             elif not sharedAddressBook.fullyShared():
-                #FIXME: remove objects for this group only using self.removeObjectResource
+                # FIXME: remove objects for this group only using self.removeObjectResource
                 self._objectNames = None
 
             # Must send notification to ensure cache invalidation occurs
@@ -1279,7 +1242,7 @@
             deletedBindName = deletedBindNameRows[0][0]
             queryCacher = self._txn._queryCacher
             if queryCacher:
-                cacheKey = queryCacher.keyForObjectWithName(shareeHome._resourceID, self.shareeAddressBookName())
+                cacheKey = queryCacher.keyForObjectWithName(shareeHome._resourceID, self.shareeName())
                 queryCacher.invalidateAfterCommit(self._txn, cacheKey)
         else:
             deletedBindName = None
@@ -1293,15 +1256,16 @@
 
     implements(IAddressBookObject)
 
+    _homeSchema = schema.ADDRESSBOOK_HOME
     _objectSchema = schema.ADDRESSBOOK_OBJECT
     _bindSchema = schema.SHARED_GROUP_BIND
 
     # used by CommonHomeChild._childrenAndMetadataForHomeID() only
-    #_homeChildSchema = schema.ADDRESSBOOK_OBJECT
-    #_homeChildMetaDataSchema = schema.ADDRESSBOOK_OBJECT
+    # _homeChildSchema = schema.ADDRESSBOOK_OBJECT
+    # _homeChildMetaDataSchema = schema.ADDRESSBOOK_OBJECT
 
 
-    def __init__(self, addressbook, name, uid, resourceID=None, options=None):  #@UnusedVariable
+    def __init__(self, addressbook, name, uid, resourceID=None, options=None):
 
         self._kind = None
         self._ownerAddressBookResourceID = None
@@ -1314,6 +1278,7 @@
         self._bindMessage = None
         self._bindName = None
         super(AddressBookObject, self).__init__(addressbook, name, uid, resourceID, options)
+        self._options = {} if options is None else options
 
 
     def __repr__(self):
@@ -1333,6 +1298,10 @@
         return self._kind
 
 
+    def isGroupForSharedAddressBook(self):
+        return self._resourceID == self.addressbook()._resourceID
+
+
     @classmethod
     def _deleteMembersWithMemberIDAndGroupIDsQuery(cls, memberID, groupIDs):
         aboMembers = schema.ABO_MEMBERS
@@ -1346,71 +1315,72 @@
     def remove(self):
 
         if self.owned():
-            # storebridge should already have done this
-            yield self.unshare()
+            yield self.unshare() # storebridge should already have done this
         else:
-            # Can't delete a share here with notification so raise.
-            if self._resourceID == self.addressbook()._resourceID:
-                raise GroupForSharedAddressBookDeleteNotAllowedError
-            elif self.shareUID():
-                raise SharedGroupDeleteNotAllowedError
+            # handled in storebridge as unshare, should not be here.  assert instead?
+            if self.isGroupForSharedAddressBook() or self.shareUID():
+                raise HTTPError(FORBIDDEN)
 
         if not self.owned() and not self.addressbook().fullyShared():
-            # convert delete in sharee shared group address book to remove of memberships
-            # that make this object visible to the sharee
-
+            readWriteObjectIDs = []
             readWriteGroupIDs = yield self.addressbook().readWriteGroupIDs()
             if readWriteGroupIDs:
-                objectsIDs = yield self.addressbook().expandGroupIDs(self._txn, readWriteGroupIDs)
-                yield self._deleteMembersWithMemberIDAndGroupIDsQuery(self._resourceID, objectsIDs).on(
-                    self._txn, groupIDs=objectsIDs
+                readWriteObjectIDs = yield self.addressbook().expandGroupIDs(self._txn, readWriteGroupIDs)
+
+            # can't delete item in shared group, even if user has addressbook unbind
+            if self._resourceID not in readWriteObjectIDs:
+                raise HTTPError(FORBIDDEN)
+
+            # convert delete in sharee shared group address book to remove of memberships
+            # that make this object visible to the sharee
+            if readWriteObjectIDs:
+                yield self._deleteMembersWithMemberIDAndGroupIDsQuery(self._resourceID, readWriteObjectIDs).on(
+                    self._txn, groupIDs=readWriteObjectIDs
                 )
 
-            yield self._changeAddressBookRevision(self.ownerHome().addressbook())
+        aboMembers = schema.ABO_MEMBERS
+        aboForeignMembers = schema.ABO_FOREIGN_MEMBERS
 
-        else:
-            # delete members table rows for this object,...
-            aboMembers = schema.ABO_MEMBERS
-            aboForeignMembers = schema.ABO_FOREIGN_MEMBERS
+        groupIDRows = yield Delete(
+            aboMembers,
+            Where=aboMembers.MEMBER_ID == self._resourceID,
+            Return=aboMembers.GROUP_ID
+        ).on(self._txn)
 
-            groupIDRows = yield Delete(
-                aboMembers,
-                Where=aboMembers.MEMBER_ID == self._resourceID,
-                Return=aboMembers.GROUP_ID
+        # add to foreign member table row by UID (aboForeignMembers on address books)
+        memberAddress = "urn:uuid:" + self._uid
+        for groupID in set([groupIDRow[0] for groupIDRow in groupIDRows]) - set([self._ownerAddressBookResourceID]):
+            yield Insert(
+                {aboForeignMembers.GROUP_ID: groupID,
+                 aboForeignMembers.ADDRESSBOOK_ID: self._ownerAddressBookResourceID,
+                 aboForeignMembers.MEMBER_ADDRESS: memberAddress, }
             ).on(self._txn)
 
-            # add to foreign member table row by UID
-            memberAddress = "urn:uuid:" + self._uid
-            for groupID in [groupIDRow[0] for groupIDRow in groupIDRows]:
-                if groupID != self._ownerAddressBookResourceID:  # no aboForeignMembers on address books
-                    yield Insert(
-                        {aboForeignMembers.GROUP_ID: groupID,
-                         aboForeignMembers.ADDRESSBOOK_ID: self._ownerAddressBookResourceID,
-                         aboForeignMembers.MEMBER_ADDRESS: memberAddress, }
-                    ).on(self._txn)
+        yield super(AddressBookObject, self).remove()
+        self._kind = None
+        self._ownerAddressBookResourceID = None
+        self._component = None
 
-            yield super(AddressBookObject, self).remove()
-            self._kind = None
-            self._ownerAddressBookResourceID = None
-            self._component = None
 
-
     @inlineCallbacks
     def readWriteAccess(self):
         assert not self.owned(), "Don't call items in owned address book"
+        yield None
 
+        #shared address book group is always read-only
+        if self.isGroupForSharedAddressBook():
+            returnValue(False)
+
         # if fully shared and rw, must be RW since sharing group read-only has no affect
         if self.addressbook().fullyShared() and self.addressbook().shareMode() == _BIND_MODE_WRITE:
-            yield None
             returnValue(True)
 
+        #otherwise, must be in a read-write group
         readWriteGroupIDs = yield self.addressbook().readWriteGroupIDs()
-        if self._resourceID in (yield self.addressbook().expandGroupIDs(self._txn, readWriteGroupIDs)):
-            returnValue(True)
+        readWriteIDs = yield self.addressbook().expandGroupIDs(self._txn, readWriteGroupIDs)
+        returnValue(self._resourceID in readWriteIDs)
 
-        returnValue(False)
 
-
     @classmethod
     def _allColumnsWithResourceIDsAnd(cls, resourceIDs, column, paramName):
         """
@@ -1436,7 +1406,7 @@
 
 
     @classproperty
-    def _allColumnsWithResourceID(cls):  #@NoSelf
+    def _allColumnsWithResourceID(cls): #@NoSelf
         obj = cls._objectSchema
         return Select(
             cls._allColumns, From=obj,
@@ -1473,14 +1443,14 @@
 
             if not rows and self.addressbook().fullyShared():  # perhaps add special group
                 if self._name:
-                    if self._name == self.addressbook()._fullySharedAddressBookGroupName():
-                        rows = [self.addressbook()._fullySharedAddressBookGroupRow()]
+                    if self._name == self.addressbook()._groupForSharedAddressBookName():
+                        rows = [self.addressbook()._groupForSharedAddressBookRow()]
                 elif self._uid:
-                    if self._uid == (yield self.addressbook()._fullySharedAddressBookGroupUID()):
-                        rows = [self.addressbook()._fullySharedAddressBookGroupRow()]
+                    if self._uid == (yield self.addressbook()._groupForSharedAddressBookUID()):
+                        rows = [self.addressbook()._groupForSharedAddressBookRow()]
                 elif self._resourceID:
-                    if self._resourceID == self.addressbook()._resourceID:
-                        rows = [self.addressbook()._fullySharedAddressBookGroupRow()]
+                    if self.isGroupForSharedAddressBook():
+                        rows = [self.addressbook()._groupForSharedAddressBookRow()]
         else:
             acceptedGroupIDs = yield self.addressbook().acceptedGroupIDs()
             allowedObjectIDs = yield self.addressbook().expandGroupIDs(self._txn, acceptedGroupIDs)
@@ -1509,11 +1479,6 @@
             self._initFromRow(tuple(rows[0]))
 
             if self._kind == _ABO_KIND_GROUP:
-                # generate "X-ADDRESSBOOKSERVER-MEMBER" properties
-                # calc md5 and set size
-                componentText = str((yield self.component()))
-                self._md5 = hashlib.md5(componentText).hexdigest()
-                self._size = len(componentText)
 
                 groupBindRows = yield AddressBookObject._bindForResourceIDAndHomeID.on(
                     self._txn, resourceID=self._resourceID, homeID=self._home._resourceID
@@ -1521,7 +1486,7 @@
 
                 if groupBindRows:
                     groupBindRow = groupBindRows[0]
-                    bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount]  #@UnusedVariable
+                    bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:AddressBookObject.bindColumnCount] #@UnusedVariable
                     self._bindMode = bindMode
                     self._bindStatus = bindStatus
                     self._bindMessage = bindMessage
@@ -1537,7 +1502,7 @@
 
 
     @classproperty
-    def _allColumns(cls):  #@NoSelf
+    def _allColumns(cls): #@NoSelf
         """
         Full set of columns in the object table that need to be loaded to
         initialize the object resource state.
@@ -1588,7 +1553,7 @@
         if addressbook.owned() or addressbook.fullyShared():
             rows = yield super(AddressBookObject, cls)._allColumnsWithParent(addressbook)
             if addressbook.fullyShared():
-                rows.append(addressbook._fullySharedAddressBookGroupRow())
+                rows.append(addressbook._groupForSharedAddressBookRow())
         else:
             acceptedGroupIDs = yield addressbook.acceptedGroupIDs()
             allowedObjectIDs = yield addressbook.expandGroupIDs(addressbook._txn, acceptedGroupIDs)
@@ -1612,8 +1577,8 @@
 
         if addressbook.owned() or addressbook.fullyShared():
             rows = yield super(AddressBookObject, cls)._allColumnsWithParentAndNames(addressbook, names)
-            if addressbook.fullyShared() and addressbook._fullySharedAddressBookGroupName() in names:
-                rows.append(addressbook._fullySharedAddressBookGroupRow())
+            if addressbook.fullyShared() and addressbook._groupForSharedAddressBookName() in names:
+                rows.append(addressbook._groupForSharedAddressBookRow())
         else:
             acceptedGroupIDs = yield addressbook.acceptedGroupIDs()
             allowedObjectIDs = yield addressbook.expandGroupIDs(addressbook._txn, acceptedGroupIDs)
@@ -1651,7 +1616,7 @@
         self.validAddressDataCheck(component, inserting)
 
 
-    def validAddressDataCheck(self, component, inserting):  #@UnusedVariable
+    def validAddressDataCheck(self, component, inserting): #@UnusedVariable
         """
         Check that the calendar data is valid iCalendar.
         @return:         tuple: (True/False if the calendar data is valid,
@@ -1672,28 +1637,48 @@
             raise InvalidComponentForStoreError(str(e))
 
 
+    def _componentResourceKindToKind(self, component):
+        componentResourceKindToAddressBookObjectKindMap = {
+            "person": _ABO_KIND_PERSON,
+            "group": _ABO_KIND_GROUP,
+            "resource": _ABO_KIND_RESOURCE,
+            "location": _ABO_KIND_LOCATION,
+        }
+        lcResourceKind = component.resourceKind().lower() if component.resourceKind() else component.resourceKind()
+        kind = componentResourceKindToAddressBookObjectKindMap.get(lcResourceKind, _ABO_KIND_PERSON)
+        return kind
+
+
     @inlineCallbacks
     def _lockUID(self, component, inserting):
         """
         Create a lock on the component's UID and verify, after getting the lock, that the incoming UID
         meets the requirements of the store.
         """
-
         new_uid = component.resourceUID()
-        yield NamedLock.acquire(self._txn, "vCardUIDLock:%s/%s/%s" % (self._home.uid(), self.addressbook().name(), hashlib.md5(new_uid).hexdigest(),))
+        yield NamedLock.acquire(self._txn, "vCardUIDLock:%s/%s" % (self.ownerHome().uid(), hashlib.md5(new_uid).hexdigest(),))
 
         # UID conflict check - note we do this after reserving the UID to avoid a race condition where two requests
-        # try to write the same calendar data to two different resource URIs.
+        # try to write the same address data to two different resource URIs.
 
-        # Cannot overwrite a resource with different UID
         if not inserting:
+            # Cannot overwrite a resource with different kind
+            if self._kind != self._componentResourceKindToKind(component):
+                raise KindChangeNotAllowedError
+
+            # Cannot overwrite a resource with different UID
             if self._uid != new_uid:
                 raise InvalidUIDError("Cannot change the UID in an existing resource.")
         else:
-            # New UID must be unique for the owner - no need to do this on an overwrite as we can assume
-            # the store is already consistent in this regard
-            elsewhere = (yield self.addressbook().addressbookObjectWithUID(new_uid))
-            if elsewhere is not None:
+            # for partially shared addressbooks, cannot use name that already exists in owner
+            if not self.owned() and not self.addressbook().fullyShared():
+                nameElsewhere = (yield self.ownerHome().addressbook().addressbookObjectWithName(self.name()))
+                if nameElsewhere is not None:
+                    raise ObjectResourceNameAlreadyExistsError(self.name() + ' in use by owning addressbook.')
+
+            # New UID must be unique for the owner
+            uidElsewhere = (yield self.ownerHome().addressbook().addressbookObjectWithUID(new_uid))
+            if uidElsewhere is not None:
                 raise UIDExistsError("UID already exists in same addressbook.")
 
 
@@ -1706,7 +1691,8 @@
         self.fullValidation(component, inserting)
 
         # UID lock - this will remain active until the end of the current txn
-        yield self._lockUID(component, inserting)
+        if not inserting or self._options.get("coaddedUIDs") is None:
+            yield self._lockUID(component, inserting)
 
         yield self.updateDatabase(component, inserting=inserting)
         yield self._changeAddressBookRevision(self._addressbook, inserting)
@@ -1714,8 +1700,10 @@
         if self.owned():
             # update revision table of the sharee group address book
             if self._kind == _ABO_KIND_GROUP:  # optimization
-                for shareeAddressBook in (yield self.asShared()):
-                    yield self._changeAddressBookRevision(shareeAddressBook, inserting)
+                invites = yield self.sharingInvites()
+                for invite in invites:
+                    shareeHome = (yield self._txn.homeWithResourceID(self.addressbook()._home._homeType, invite.shareeHomeID()))
+                    yield self._changeAddressBookRevision(shareeHome.addressbook(), inserting)
                     # one is enough because all have the same resourceID
                     break
         else:
@@ -1723,7 +1711,6 @@
                 # update revisions table of shared group's containing address book
                 yield self._changeAddressBookRevision(self.ownerHome().addressbook(), inserting)
 
-        self._component = component
         returnValue(self._componentChanged)
 
 
@@ -1757,7 +1744,7 @@
 
 
     @classproperty
-    def _insertABObject(cls):  #@NoSelf
+    def _insertABObject(cls): #@NoSelf
         """
         DAL statement to create an addressbook object with all default values.
         """
@@ -1777,7 +1764,7 @@
 
 
     @inlineCallbacks
-    def updateDatabase(self, component, expand_until=None, reCreate=False,  #@UnusedVariable
+    def updateDatabase(self, component, expand_until=None, reCreate=False, #@UnusedVariable
                        inserting=False):
         """
         Update the database tables for the new data being written.
@@ -1786,28 +1773,26 @@
         @type component: L{Component}
         """
 
-        componentResourceKindToAddressBookObjectKindMap = {
-            "person": _ABO_KIND_PERSON,
-            "group": _ABO_KIND_GROUP,
-            "resource": _ABO_KIND_RESOURCE,
-            "location": _ABO_KIND_LOCATION,
-        }
-        lcResourceKind = component.resourceKind().lower() if component.resourceKind() else component.resourceKind()
-        kind = componentResourceKindToAddressBookObjectKindMap.get(lcResourceKind, _ABO_KIND_PERSON)
-        assert inserting or self._kind == kind  # can't change kind. Should be checked in upper layers
-        self._kind = kind
+        if inserting:
+            self._kind = self._componentResourceKindToKind(component)
 
         # For shared groups:  Non owner may NOT add group members not currently in group!
         # (Or it would be possible to troll for unshared vCard UIDs and make them shared.)
         if not self._ownerAddressBookResourceID:
             self._ownerAddressBookResourceID = self.ownerHome().addressbook()._resourceID
 
+        uid = component.resourceUID()
+        assert inserting or self._uid == uid  # can't change UID. Should be checked in upper layers
+        self._uid = uid
+        originalComponentText = str(component)
+
         if self._kind == _ABO_KIND_GROUP:
+            memberAddresses = set(component.resourceMemberAddresses())
 
             # get member ids
             memberUIDs = []
             foreignMemberAddrs = []
-            for memberAddr in component.resourceMemberAddresses():
+            for memberAddr in memberAddresses:
                 if len(memberAddr) > len("urn:uuid:") and memberAddr.startswith("urn:uuid:"):
                     memberUIDs.append(memberAddr[len("urn:uuid:"):])
                 else:
@@ -1818,50 +1803,45 @@
             ) if memberUIDs else []
             memberIDs = [memberRow[0] for memberRow in memberRows]
             foundUIDs = [memberRow[1] for memberRow in memberRows]
-            foreignMemberAddrs.extend(["urn:uuid:" + missingUID for missingUID in set(memberUIDs) - set(foundUIDs)])
+            foundUIDs.append(self._uid) # circular self reference is OK
+            missingUIDs = set(memberUIDs) - set(foundUIDs)
 
-            if not self.owned():
-                if not self.addressbook().fullyShared():
-                    #in shared ab defined by groups, all members must be inside the shared groups
+            if not self.owned() and not self.addressbook().fullyShared():
+            # in partially shared addressbook, all member UIDs must be inside the shared groups
+                # except during bulk operations, when other UIDs added are OK
+                coaddedUIDs = set() if self._options.get("coaddedUIDs") is None else self._options["coaddedUIDs"]
+                if missingUIDs - coaddedUIDs:
+                    raise GroupWithUnsharedAddressNotAllowedError(missingUIDs)
 
-                    #FIXME: does this apply to whole-shared address books too?
-                    if foreignMemberAddrs:
-                        raise GroupWithUnsharedAddressNotAllowedError
+                # see if user has access to all the members
+                acceptedGroupIDs = yield self.addressbook().acceptedGroupIDs()
+                allowedObjectIDs = yield self.addressbook().expandGroupIDs(self._txn, acceptedGroupIDs)
+                if set(memberIDs) - set(allowedObjectIDs):
+                    raise HTTPError(FORBIDDEN) # could give more info here, and use special exception
 
-                    acceptedGroupIDs = yield self.addressbook().acceptedGroupIDs()
-                    allowedObjectIDs = yield self.addressbook().expandGroupIDs(self._txn, acceptedGroupIDs)
-                    if set(memberIDs) - set(allowedObjectIDs):
-                        raise GroupWithUnsharedAddressNotAllowedError
+            # missing uids and other cuaddrs e.g. user at example.com, are stored in same schema table
+            foreignMemberAddrs.extend(["urn:uuid:" + missingUID for missingUID in missingUIDs])
 
-            # don't store group members in object text
-
-            orginialComponent = str(component)
-            # sort addresses in component text
-            memberAddresses = component.resourceMemberAddresses()
+            # sort unique members
             component.removeProperties("X-ADDRESSBOOKSERVER-MEMBER")
-            for memberAddress in sorted(memberAddresses):
+            for memberAddress in sorted(list(memberAddresses)): # sort unique
                 component.addProperty(Property("X-ADDRESSBOOKSERVER-MEMBER", memberAddress))
-
-            # use sorted test to get size and md5
             componentText = str(component)
-            self._md5 = hashlib.md5(componentText).hexdigest()
-            self._size = len(componentText)
-            self._componentChanged = componentText != orginialComponent
 
-            # remove members from component get new text
-            component.removeProperties("X-ADDRESSBOOKSERVER-MEMBER")
-            componentText = str(component)
-            self._objectText = componentText
-
+            # remove unneeded fields to get stored _objectText
+            thinComponent = deepcopy(component)
+            thinComponent.removeProperties("X-ADDRESSBOOKSERVER-MEMBER")
+            thinComponent.removeProperties("X-ADDRESSBOOKSERVER-KIND")
+            thinComponent.removeProperties("UID")
+            self._objectText = str(thinComponent)
         else:
             componentText = str(component)
-            self._md5 = hashlib.md5(componentText).hexdigest()
-            self._size = len(componentText)
             self._objectText = componentText
 
-        uid = component.resourceUID()
-        assert inserting or self._uid == uid  # can't change UID. Should be checked in upper layers
-        self._uid = uid
+        self._size = len(self._objectText)
+        self._component = component
+        self._md5 = hashlib.md5(componentText).hexdigest()
+        self._componentChanged = originalComponentText != componentText
 
         # Special - if migrating we need to preserve the original md5
         if self._txn._migrating and hasattr(component, "md5"):
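The member-validation set arithmetic in the hunk above can be sketched standalone. The names mirror the diff, but the sample data is invented and a plain ValueError stands in for GroupWithUnsharedAddressNotAllowedError:

```python
# Hypothetical standalone sketch of the group-member validation above.
memberUIDs = ["uid-1", "uid-2", "uid-3"]   # UIDs listed in the vCard group
foundUIDs = ["uid-1", "uid-2"]             # UIDs resolved in the addressbook
coaddedUIDs = {"uid-3"}                    # UIDs co-added in the same bulk op

# members not found locally
missingUIDs = set(memberUIDs) - set(foundUIDs)

# in a partially shared addressbook, anything missing must have been co-added
unshared = missingUIDs - coaddedUIDs
if unshared:
    raise ValueError("unshared member UIDs: %s" % sorted(unshared))

# missing UIDs are recorded as foreign member addresses
foreignMemberAddrs = ["urn:uuid:" + uid for uid in sorted(missingUIDs)]
```

With the sample data, uid-3 is missing but co-added, so no error is raised and it is stored as a foreign member address.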
@@ -1872,7 +1852,6 @@
         aboMembers = schema.ABO_MEMBERS
 
         if inserting:
-
             self._resourceID, self._created, self._modified = (
                 yield self._insertABObject.on(
                     self._txn,
@@ -1892,14 +1871,12 @@
                 Where=aboForeignMembers.MEMBER_ADDRESS == "urn:uuid:" + self._uid,
                 Return=aboForeignMembers.GROUP_ID
             ).on(self._txn)
-            groupIDs = [groupIDRow[0] for groupIDRow in groupIDRows]
+            groupIDs = set([groupIDRow[0] for groupIDRow in groupIDRows])
 
-            # FIXME: Is this correct? Write test case
-            if not self.owned():
-                if not self.addressbook().fullyShared() or self.addressbook().shareMode() != _BIND_MODE_WRITE:
-                    readWriteGroupIDs = yield self.addressbook().readWriteGroupIDs()
-                    assert readWriteGroupIDs, "no access"
-                    groupIDs.extend(readWriteGroupIDs)
+            if not self.owned() and not self.addressbook().fullyShared():
+                readWriteGroupIDs = yield self.addressbook().readWriteGroupIDs()
+                assert readWriteGroupIDs, "no access"
+                groupIDs |= set(readWriteGroupIDs)
 
             # add to member table rows
             for groupID in groupIDs:
@@ -1919,7 +1896,11 @@
 
         if self._kind == _ABO_KIND_GROUP:
 
-            #get current members
+            # allow circular group
+            if inserting and "urn:uuid:" + self._uid in memberAddresses:
+                memberIDs.append(self._resourceID)
+
+            # get current members
             currentMemberRows = yield Select([aboMembers.MEMBER_ID],
                  From=aboMembers,
                  Where=aboMembers.GROUP_ID == self._resourceID,).on(self._txn)
@@ -1943,7 +1924,7 @@
             # don't bother with aboForeignMembers on address books
             if self._resourceID != self._ownerAddressBookResourceID:
 
-                #get current foreign members
+                # get current foreign members
                 currentForeignMemberRows = yield Select(
                     [aboForeignMembers.MEMBER_ADDRESS],
                      From=aboForeignMembers,
@@ -1977,8 +1958,8 @@
 
         if self._component is None:
 
-            if not self.owned() and  self._resourceID == self.addressbook()._resourceID:
-                component = yield self.addressbook()._fullySharedAddressBookGroupComponent()
+            if self.isGroupForSharedAddressBook():
+                component = yield self.addressbook()._groupForSharedAddressBookComponent()
             else:
                 text = yield self._text()
 
@@ -2038,6 +2019,8 @@
                     # now add the properties to the component
                     for memberAddress in sorted(memberAddresses + foreignMembers):
                         component.addProperty(Property("X-ADDRESSBOOKSERVER-MEMBER", memberAddress))
+                    component.addProperty(Property("X-ADDRESSBOOKSERVER-KIND", "group"))
+                    component.addProperty(Property("UID", self._uid))
 
             self._component = component
 
@@ -2099,7 +2082,7 @@
 
     # same as CommonHomeChild._childrenAndMetadataForHomeID() w/o metadata join
     @classproperty
-    def _childrenAndMetadataForHomeID(cls):  #@NoSelf
+    def _childrenAndMetadataForHomeID(cls): #@NoSelf
         bind = cls._bindSchema
         child = cls._objectSchema
         columns = cls.bindColumns() + cls.additionalBindColumns() + cls.metadataColumns()
@@ -2115,68 +2098,8 @@
         return self.addressbook().notifyChanged()
 
 
-    @inlineCallbacks
-    def asShared(self):
-        """
-        Retrieve all the versions of this L{AddressBookObject} as it is shared to
-        everyone.
-
-        @see: L{IAddressBookHome.asShared}
-
-        @return: L{AddressBookObject} objects that represent this
-            L{AddressBookObject} as a child of different L{AddressBooks}s
-            in different L{CommonHome}s
-        @rtype: a L{Deferred} which fires with a L{list} of L{AddressBookObject}s.
-        """
-        result = []
-        if self.owned():
-            # get all accepted shared binds
-            groupBindRows = yield self._sharedBindForResourceID.on(
-                self._txn, resourceID=self._resourceID, homeID=self._home._resourceID
-            )
-            for groupBindRow in groupBindRows:
-                bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:self.bindColumnCount]  #@UnusedVariable
-                home = yield self._txn.homeWithResourceID(self._home._homeType, homeID, create=True)
-                addressbook = yield home.childWithName(self._home.shareeAddressBookName())
-                new = yield addressbook.objectResourceWithID(resourceID)
-                result.append(new)
-
-        returnValue(result)
-
-
-    @inlineCallbacks
-    def asInvited(self):
-        """
-        Retrieve all the versions of this L{AddressBookObject} as it is shared to
-        everyone.
-
-        @see: L{ICalendarHome.asShared}
-
-        @return: L{AddressBookObject} objects that represent this
-            L{AddressBookObject} as a child of different L{AddressBooks}s
-            in different L{CommonHome}s
-        @rtype: a L{Deferred} which fires with a L{list} of L{AddressBookObject}s.
-        """
-        result = []
-        if self.owned():
-            # get all accepted shared binds
-            groupBindRows = yield self._unacceptedBindForResourceID.on(
-                self._txn, resourceID=self._resourceID
-            )
-            for groupBindRow in groupBindRows:
-                bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:self.bindColumnCount]  #@UnusedVariable
-                home = yield self._txn.homeWithResourceID(self._home._homeType, homeID, create=True)
-                addressbook = yield home.childWithName(self._home.shareeAddressBookName())
-                if not addressbook:
-                    addressbook = yield AddressBook.objectWithName(home, self._home.shareeAddressBookName(), accepted=False)
-                new = yield AddressBookObject.objectWithID(addressbook, resourceID)  # avoids object cache
-                result.append(new)
-
-        returnValue(result)
-
-
     @classproperty
-    def _addressbookIDForResourceID(cls):  #@NoSelf
+    def _addressbookIDForResourceID(cls): #@NoSelf
         obj = cls._objectSchema
         return Select([obj.PARENT_RESOURCE_ID],
                       From=obj,
@@ -2192,6 +2115,40 @@
 
 
     @inlineCallbacks
+    def sharingInvites(self):
+        """
+        Retrieve the list of all L{SharingInvitation}s for this L{CommonHomeChild}, irrespective of mode or status.
+
+        @return: L{SharingInvitation} objects
+        @rtype: a L{Deferred} which fires with a L{list} of L{SharingInvitation}s.
+        """
+        if not self.owned():
+            returnValue([])
+
+        # get all binds, accepted or not
+        acceptedRows = yield self._sharedInvitationBindForResourceID.on(
+            self._txn, resourceID=self._resourceID, homeID=self.addressbook()._home._resourceID
+        )
+
+        result = []
+        for homeUID, homeRID, resourceID, resourceName, bindMode, bindStatus, bindMessage in acceptedRows: #@UnusedVariable
+            invite = SharingInvitation(
+                resourceName,
+                self.addressbook()._home.name(),
+                self.addressbook()._home._resourceID,
+                homeUID,
+                homeRID,
+                resourceID,
+                self.addressbook().shareeName(),
+                bindMode,
+                bindStatus,
+                bindMessage,
+            )
+            result.append(invite)
+        returnValue(result)
+
+
+    @inlineCallbacks
     def unshare(self):
         """
         Unshares a group, regardless of which "direction" it was shared.
@@ -2199,8 +2156,10 @@
         if self._kind == _ABO_KIND_GROUP:
             if self.owned():
                 # This collection may be shared to others
-                for sharedToHome in [x.viewerHome() for x in (yield self.asShared())]:
-                    yield self.unshareWith(sharedToHome)
+                invites = yield self.sharingInvites()
+                for invite in invites:
+                    shareeHome = (yield self._txn.homeWithResourceID(self.addressbook()._home._homeType, invite.shareeHomeID()))
+                    (yield self.unshareWith(shareeHome))
             else:
                 # This collection is shared to me
                 ownerAddressBook = self.addressbook().ownerHome().addressbook()
@@ -2222,7 +2181,7 @@
 
         @return: a L{Deferred} which will fire with the previously-used name.
         """
-        sharedAddressBook = yield shareeHome.addressbookWithName(self.addressbook().shareeAddressBookName())
+        sharedAddressBook = yield shareeHome.addressbookWithName(self.addressbook().shareeName())
 
         if sharedAddressBook:
 
@@ -2235,7 +2194,7 @@
 
             if acceptedBindCount == 1:
                 yield sharedAddressBook._deletedSyncToken(sharedRemoval=True)
-                shareeHome._children.pop(self.addressbook().shareeAddressBookName(), None)
+                shareeHome._children.pop(self.addressbook().shareeName(), None)
                 shareeHome._children.pop(self.addressbook()._resourceID, None)
 
             # Must send notification to ensure cache invalidation occurs
@@ -2250,7 +2209,7 @@
             deletedBindName = deletedBindNameRows[0][0]
             queryCacher = self._txn._queryCacher
             if queryCacher:
-                cacheKey = queryCacher.keyForObjectWithName(shareeHome._resourceID, self.addressbook().shareeAddressBookName())
+                cacheKey = queryCacher.keyForObjectWithName(shareeHome._resourceID, self.addressbook().shareeName())
                 queryCacher.invalidateAfterCommit(self._txn, cacheKey)
         else:
             deletedBindName = None
@@ -2303,7 +2262,7 @@
                 self._txn, resourceID=self._resourceID, homeID=shareeHome._resourceID
             )
             groupBindRow = groupBindRows[0]
-            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:self.bindColumnCount]  #@UnusedVariable
+            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = groupBindRow[:self.bindColumnCount] #@UnusedVariable
             if bindStatus == _BIND_STATUS_ACCEPTED:
                 group = yield shareeHome.objectWithShareUID(bindName)
             else:
@@ -2315,9 +2274,14 @@
         else:
             if status == _BIND_STATUS_ACCEPTED:
                 shareeView = yield shareeHome.objectWithShareUID(bindName)
-                yield shareeView._initSyncToken()
+                yield shareeView.addressbook()._initSyncToken()
                 yield shareeView._initBindRevision()
 
+        queryCacher = self._txn._queryCacher
+        if queryCacher:
+            cacheKey = queryCacher.keyForObjectWithName(shareeHome._resourceID, self.addressbook().shareeName())
+            queryCacher.invalidateAfterCommit(self._txn, cacheKey)
+
         # Must send notification to ensure cache invalidation occurs
         yield self.notifyChanged()
         self.setShared(True)
@@ -2325,16 +2289,9 @@
 
 
     @inlineCallbacks
-    def _initSyncToken(self):
-        yield self.addressbook()._initSyncToken()
-
-
-    @inlineCallbacks
     def _initBindRevision(self):
         yield self.addressbook()._initBindRevision()
 
-        # almost works
-        # yield super(AddressBookObject, self)._initBindRevision()
         bind = self._bindSchema
         yield self._updateBindColumnsQuery(
             {bind.BIND_REVISION : Parameter("revision"), }).on(
@@ -2347,8 +2304,7 @@
 
 
     @inlineCallbacks
-    #TODO:  This is almost the same as AddressBook.updateShare(): combine
-    def updateShare(self, shareeView, mode=None, status=None, message=None, name=None):
+    def updateShare(self, shareeView, mode=None, status=None, message=None):
         """
         Update share mode, status, and message for a home child shared with
         this (owned) L{CommonHomeChild}.
@@ -2369,40 +2325,36 @@
             will be used as the default display name, or None to not update
         @type message: L{str}
 
-        @param name: The bind resource name or None to not update
-        @type message: L{str}
-
         @return: the name of the shared item in the sharee's home.
         @rtype: a L{Deferred} which fires with a L{str}
         """
         # TODO: raise a nice exception if shareeView is not, in fact, a shared
         # version of this same L{CommonHomeChild}
 
-        #remove None parameters, and substitute None for empty string
+        # remove None parameters, and substitute None for empty string
         bind = self._bindSchema
-        columnMap = dict([(k, v if v else None)
+        columnMap = dict([(k, v if v != "" else None)
                           for k, v in {bind.BIND_MODE:mode,
                             bind.BIND_STATUS:status,
-                            bind.MESSAGE:message,
-                            bind.RESOURCE_NAME:name}.iteritems() if v is not None])
+                            bind.MESSAGE:message}.iteritems() if v is not None])
 
         if len(columnMap):
 
             # count accepted
             if status is not None:
-                previouslyAcceptedBindCount = 1 if shareeView.addressbook().fullyShared() else 0
+                previouslyAcceptedBindCount = 1 if self.addressbook().fullyShared() else 0
                 previouslyAcceptedBindCount += len((
                     yield AddressBookObject._acceptedBindForHomeIDAndAddressBookID.on(
-                        self._txn, homeID=shareeView._home._resourceID, addressbookID=self.addressbook()._resourceID
+                        self._txn, homeID=shareeView.viewerHome()._resourceID, addressbookID=self.addressbook()._resourceID
                     )
                 ))
 
             bindNameRows = yield self._updateBindColumnsQuery(columnMap).on(
                 self._txn,
-                resourceID=self._resourceID, homeID=shareeView._home._resourceID
+                resourceID=self._resourceID, homeID=shareeView.viewerHome()._resourceID
             )
 
-            #update affected attributes
+            # update affected attributes
             if mode is not None:
                 shareeView._bindMode = columnMap[bind.BIND_MODE]
 
@@ -2410,15 +2362,15 @@
                 shareeView._bindStatus = columnMap[bind.BIND_STATUS]
                 if shareeView._bindStatus == _BIND_STATUS_ACCEPTED:
                     if 0 == previouslyAcceptedBindCount:
-                        yield shareeView._initSyncToken()
+                        yield shareeView.addressbook()._initSyncToken()
                         yield shareeView._initBindRevision()
-                        shareeView._home._children[shareeView.addressbook()._name] = shareeView._addressbook
-                        shareeView._home._children[shareeView.addressbook()._resourceID] = shareeView._addressbook
+                        shareeView.viewerHome()._children[self.addressbook().shareeName()] = shareeView.addressbook()
+                        shareeView.viewerHome()._children[shareeView._resourceID] = shareeView.addressbook()
                 elif shareeView._bindStatus != _BIND_STATUS_INVITED:
                     if 1 == previouslyAcceptedBindCount:
                         yield shareeView.addressbook()._deletedSyncToken(sharedRemoval=True)
-                        shareeView._home._children.pop(shareeView.addressbook()._name, None)
-                        shareeView._home._children.pop(shareeView.addressbook()._resourceID, None)
+                        shareeView.viewerHome()._children.pop(self.addressbook().shareeName(), None)
+                        shareeView.viewerHome()._children.pop(shareeView._resourceID, None)
 
             if message is not None:
                 shareeView._bindMessage = columnMap[bind.MESSAGE]
@@ -2426,7 +2378,7 @@
             # safer to just invalidate in all cases rather than calculate when to invalidate
             queryCacher = self._txn._queryCacher
             if queryCacher:
-                cacheKey = queryCacher.keyForObjectWithName(shareeView._home._resourceID, shareeView.addressbook()._name)
+                cacheKey = queryCacher.keyForObjectWithName(shareeView.viewerHome()._resourceID, self.addressbook().shareeName())
                 queryCacher.invalidateAfterCommit(self._txn, cacheKey)
 
             shareeView._name = bindNameRows[0][0]
@@ -2438,7 +2390,7 @@
 
 
     @classproperty
-    def _acceptedBindForHomeIDAndAddressBookID(cls):  #@NoSelf
+    def _acceptedBindForHomeIDAndAddressBookID(cls): #@NoSelf
         bind = cls._bindSchema
         abo = cls._objectSchema
         return Select(
@@ -2452,7 +2404,7 @@
 
 
     @classproperty
-    def _unacceptedBindForHomeIDAndAddressBookID(cls):  #@NoSelf
+    def _unacceptedBindForHomeIDAndAddressBookID(cls): #@NoSelf
         bind = cls._bindSchema
         abo = cls._objectSchema
         return Select(
@@ -2466,7 +2418,7 @@
 
 
     @classproperty
-    def _bindForHomeIDAndAddressBookID(cls):  #@NoSelf
+    def _bindForHomeIDAndAddressBookID(cls): #@NoSelf
         bind = cls._bindSchema
         abo = cls._objectSchema
         return Select(

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/carddav/datastore/test/test_file.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/carddav/datastore/test/test_file.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/carddav/datastore/test/test_file.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -466,7 +466,6 @@
         can be retrieved with L{IAddressBookHome.addressbookWithName}.
         """
 
-
     @testUnimplemented
     def test_removeAddressBookWithName_exists(self):
         """

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/carddav/iaddressbookstore.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/carddav/iaddressbookstore.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/carddav/iaddressbookstore.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -26,9 +26,8 @@
 
 __all__ = [
     # Classes
-    "GroupForSharedAddressBookDeleteNotAllowedError",
     "GroupWithUnsharedAddressNotAllowedError",
-    "SharedGroupDeleteNotAllowedError",
+    "KindChangeNotAllowedError",
     "IAddressBookTransaction",
     "IAddressBookHome",
     "IAddressBook",
@@ -36,13 +35,7 @@
 ]
 
 
-class GroupForSharedAddressBookDeleteNotAllowedError(CommonStoreError):
-    """
-    Sharee cannot delete the group for a shared address book.
-    """
 
-
-
 class GroupWithUnsharedAddressNotAllowedError(CommonStoreError):
     """
     Sharee cannot add unshared group members.
@@ -50,9 +43,9 @@
 
 
 
-class SharedGroupDeleteNotAllowedError(CommonStoreError):
+class KindChangeNotAllowedError(CommonStoreError):
     """
-    Sharee cannot delete a shared group.
+    Cannot change group kind.
     """
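A minimal sketch of how the renamed KindChangeNotAllowedError might be raised. CommonStoreError is stubbed here and checkKind is a hypothetical helper, not part of the store API:

```python
class CommonStoreError(RuntimeError):
    """Stub standing in for txdav's CommonStoreError."""

class KindChangeNotAllowedError(CommonStoreError):
    """Cannot change group kind."""

def checkKind(storedKind, newKind):
    # hypothetical guard: an existing vCard may not change its
    # X-ADDRESSBOOKSERVER-KIND (e.g. from "group" to a plain card)
    if storedKind != newKind:
        raise KindChangeNotAllowedError(
            "cannot change kind from %r to %r" % (storedKind, newKind))
```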
 
 

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/file.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/file.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/file.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -1285,7 +1285,7 @@
 
 
     @inlineCallbacks
-    def asInvited(self):
+    def sharingInvites(self):
         """
         Stub for interface-compliance tests.
         """
@@ -1293,16 +1293,7 @@
         returnValue([])
 
 
-    @inlineCallbacks
-    def asShared(self):
-        """
-        Stub for interface-compliance tests.
-        """
-        yield None
-        returnValue([])
 
-
-
 class CommonObjectResource(FileMetaDataMixin, FancyEqMixin):
     """
     @ivar _path: The path of the file on disk

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/sql.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/sql.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -293,7 +293,17 @@
         returnValue(self._dropbox_ok)
 
 
+    def queryCachingEnabled(self):
+        """
+        Indicate whether SQL statement query caching is enabled. Also controls whether propstore caching is done.
 
+        @return: C{True} if enabled, else C{False}
+        @rtype: C{bool}
+        """
+        return self.queryCacher is not None
+
+
+
 class TransactionStatsCollector(object):
     """
     Used to log each SQL query and statistics about that query during the course of a single transaction.
@@ -2620,6 +2630,125 @@
 
 
 
+class SharingInvitation(object):
+    """
+    SharingInvitation covers all the information needed to expose sharing invites to upper layers. It is primarily
+    used to minimize the need to load full properties/data when only this subset of information is needed.
+    """
+    def __init__(self, uid, owner_uid, owner_rid, sharee_uid, sharee_rid, resource_id, resource_name, mode, status, summary):
+        self._uid = uid
+        self._owner_uid = owner_uid
+        self._owner_rid = owner_rid
+        self._sharee_uid = sharee_uid
+        self._sharee_rid = sharee_rid
+        self._resource_id = resource_id
+        self._resource_name = resource_name
+        self._mode = mode
+        self._status = status
+        self._summary = summary
+
+
+    @classmethod
+    def fromCommonHomeChild(cls, homeChild):
+        return cls(
+            homeChild.shareUID(),
+            homeChild.ownerHome().uid(),
+            homeChild.ownerHome()._resourceID,
+            homeChild.viewerHome().uid(),
+            homeChild.viewerHome()._resourceID,
+            homeChild._resourceID,
+            homeChild.shareeName(),
+            homeChild.shareMode(),
+            homeChild.shareStatus(),
+            homeChild.shareMessage(),
+        )
+
+
+    def uid(self):
+        """
+        This maps to the resource name in the bind table, the "bind name". This is used as the "uid"
+        for invites, and is not necessarily the name of the resource as it appears in the collection.
+        """
+        return self._uid
+
+
+    def ownerUID(self):
+        """
+        The ownerUID of the sharer.
+        """
+        return self._owner_uid
+
+
+    def ownerHomeID(self):
+        """
+        The resourceID of the sharer's L{CommonHome}.
+        """
+        return self._owner_rid
+
+
+    def shareeUID(self):
+        """
+        The UID of the sharee.
+        """
+        return self._sharee_uid
+
+
+    def shareeHomeID(self):
+        """
+        The resourceID of the sharee's L{CommonHome}.
+        """
+        return self._sharee_rid
+
+
+    def resourceID(self):
+        """
+        The resourceID of the shared object.
+        """
+        return self._resource_id
+
+
+    def resourceName(self):
+        """
+        This maps to the name of the shared resource in the collection it is bound into. It is not necessarily the
+        same as the "bind name" which is used as the "uid" for invites.
+        """
+        return self._resource_name
+
+
+    def mode(self):
+        """
+        The sharing mode: one of the _BIND_MODE_XXX values.
+        """
+        return self._mode
+
+
+    def setMode(self, mode):
+        self._mode = mode
+
+
+    def status(self):
+        """
+        The sharing status: one of the _BIND_STATUS_XXX values.
+        """
+        return self._status
+
+
+    def setStatus(self, status):
+        self._status = status
+
+
+    def summary(self):
+        """
+        Message associated with the invitation.
+        """
+        return self._summary
+
+
+    def setSummary(self, summary):
+        self._summary = summary
+
+
+
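A trimmed copy of the SharingInvitation value object above (only three accessors kept), showing how upper layers read invite details without loading full resource data; the constructor values are illustrative placeholders:

```python
class SharingInvitation(object):
    """Trimmed sketch of the value object above; not the full class."""
    def __init__(self, uid, owner_uid, owner_rid, sharee_uid, sharee_rid,
                 resource_id, resource_name, mode, status, summary):
        self._uid = uid
        self._sharee_uid = sharee_uid
        self._summary = summary
        # remaining fields elided in this sketch

    def uid(self):
        # the "bind name", used as the invite uid
        return self._uid

    def shareeUID(self):
        return self._sharee_uid

    def summary(self):
        return self._summary

# placeholder values; mode/status would be _BIND_MODE_XXX / _BIND_STATUS_XXX
invite = SharingInvitation("bindname1", "owner-uid", 1, "sharee-uid", 2,
                           3, "addressbook", 0, 0, "please accept")
```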
 class SharingMixIn(object):
     """
     Common class for CommonHomeChild and AddressBookObject
@@ -2682,13 +2811,32 @@
         )
 
 
+    @classmethod
+    def _bindInviteFor(cls, condition): #@NoSelf
+        home = cls._homeSchema
+        bind = cls._bindSchema
+        return Select(
+            [
+                home.OWNER_UID,
+                bind.HOME_RESOURCE_ID,
+                bind.RESOURCE_ID,
+                bind.RESOURCE_NAME,
+                bind.BIND_MODE,
+                bind.BIND_STATUS,
+                bind.MESSAGE,
+            ],
+            From=bind.join(home, on=(bind.HOME_RESOURCE_ID == home.RESOURCE_ID)),
+            Where=condition
+        )
+
+
     @classproperty
-    def _sharedBindForResourceID(cls): #@NoSelf
+    def _sharedInvitationBindForResourceID(cls): #@NoSelf
         bind = cls._bindSchema
-        return cls._bindFor((bind.RESOURCE_ID == Parameter("resourceID"))
-                            .And(bind.BIND_STATUS == _BIND_STATUS_ACCEPTED)
-                            .And(bind.BIND_MODE != _BIND_MODE_OWN)
-                            )
+        return cls._bindInviteFor(
+            (bind.RESOURCE_ID == Parameter("resourceID")).And
+            (bind.BIND_MODE != _BIND_MODE_OWN)
+        )
 
 
     @classproperty
@@ -2699,14 +2847,6 @@
 
 
     @classproperty
-    def _unacceptedBindForResourceID(cls): #@NoSelf
-        bind = cls._bindSchema
-        return cls._bindFor((bind.RESOURCE_ID == Parameter("resourceID"))
-                            .And(bind.BIND_STATUS != _BIND_STATUS_ACCEPTED)
-                            )
-
-
-    @classproperty
     def _bindForResourceIDAndHomeID(cls): #@NoSelf
         """
         DAL query that looks up home bind rows by home child
@@ -2791,8 +2931,26 @@
 
 
     @inlineCallbacks
-    def updateShare(self, shareeView, mode=None, status=None, message=None, name=None):
+    def updateShareFromSharingInvitation(self, invitation, mode=None, status=None, message=None):
         """
+        Like L{updateShare} except that the original invitation is provided. That is used
+        to find the actual sharee L{CommonHomeChild} which is then passed to L{updateShare}.
+        """
+
+        # Look up the shared child - might be accepted or not. If accepted, use the resource name
+        # to look it up, else use the invitation uid (bind name)
+        shareeHome = yield self._txn.homeWithUID(self._home._homeType, invitation.shareeUID())
+        shareeView = yield shareeHome.childWithName(invitation.resourceName())
+        if shareeView is None:
+            shareeView = yield shareeHome.invitedObjectWithShareUID(invitation.uid())
+
+        result = yield self.updateShare(shareeView, mode, status, message)
+        returnValue(result)
+
+
+    @inlineCallbacks
+    def updateShare(self, shareeView, mode=None, status=None, message=None):
+        """
         Update share mode, status, and message for a home child shared with
         this (owned) L{CommonHomeChild}.
 
@@ -2812,9 +2970,6 @@
             will be used as the default display name, or None to not update
         @type message: L{str}
 
-        @param name: The bind resource name or None to not update
-        @type message: L{str}
-
         @return: the name of the shared item in the sharee's home.
         @rtype: a L{Deferred} which fires with a L{str}
         """
@@ -2823,11 +2978,10 @@
 
         #remove None parameters, and substitute None for empty string
         bind = self._bindSchema
-        columnMap = dict([(k, v if v else None)
+        columnMap = dict([(k, v if v != "" else None)
                           for k, v in {bind.BIND_MODE:mode,
                             bind.BIND_STATUS:status,
-                            bind.MESSAGE:message,
-                            bind.RESOURCE_NAME:name}.iteritems() if v is not None])
+                            bind.MESSAGE:message}.iteritems() if v is not None])
 
         if len(columnMap):
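The columnMap comprehension above drops None parameters (meaning "do not update") while mapping an empty string to SQL NULL. A standalone sketch with plain-string keys standing in for the bind-table columns (and .items() in place of Py2's .iteritems()):

```python
def makeColumnMap(mode=None, status=None, message=None):
    # None -> leave the column alone; "" -> store NULL
    return dict([(k, v if v != "" else None)
                 for k, v in {"BIND_MODE": mode,
                              "BIND_STATUS": status,
                              "MESSAGE": message}.items() if v is not None])

columnMap = makeColumnMap(status="accepted", message="")
```

Here mode is omitted from the update entirely, while the empty message is kept but stored as NULL.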
 
@@ -2870,6 +3024,18 @@
 
 
     @inlineCallbacks
+    def unshareWithUID(self, shareeUID):
+        """
+        Like L{unshareWith} except this is passed a sharee UID which is then used to look up the
+        L{CommonHome} for the sharee to pass to L{unshareWith}.
+        """
+
+        shareeHome = yield self._txn.homeWithUID(self._home._homeType, shareeUID)
+        result = yield self.unshareWith(shareeHome)
+        returnValue(result)
+
+
+    @inlineCallbacks
     def unshareWith(self, shareeHome):
         """
         Remove the shared version of this (owned) L{CommonHomeChild} from the
@@ -2918,8 +3084,10 @@
         """
         if self.owned():
             # This collection may be shared to others
-            for sharedToHome in [x.viewerHome() for x in (yield self.asShared())]:
-                yield self.unshareWith(sharedToHome)
+            invites = yield self.sharingInvites()
+            for invite in invites:
+                shareeHome = (yield self._txn.homeWithResourceID(self._home._homeType, invite.shareeHomeID()))
+                (yield self.unshareWith(shareeHome))
         else:
             # This collection is shared to me
             ownerHomeChild = yield self.ownerHome().childWithID(self._resourceID)
@@ -2927,65 +3095,40 @@
 
 
     @inlineCallbacks
-    def asShared(self):
+    def sharingInvites(self):
         """
-        Retrieve all the versions of this L{CommonHomeChild} as it is shared to
-        everyone.
+        Retrieve the list of all L{SharingInvitation}s for this L{CommonHomeChild}, irrespective of mode.
 
-        @see: L{ICalendarHome.asShared}
-
-        @return: L{CommonHomeChild} objects that represent this
-            L{CommonHomeChild} as a child of different L{CommonHome}s
-        @rtype: a L{Deferred} which fires with a L{list} of L{ICalendar}s.
+        @return: L{SharingInvitation} objects
+        @rtype: a L{Deferred} which fires with a L{list} of L{SharingInvitation}s.
         """
         if not self.owned():
             returnValue([])
 
         # get all accepted binds
-        acceptedRows = yield self._sharedBindForResourceID.on(
-            self._txn, resourceID=self._resourceID,
+        acceptedRows = yield self._sharedInvitationBindForResourceID.on(
+            self._txn, resourceID=self._resourceID, homeID=self._home._resourceID
         )
 
         result = []
-        for row in acceptedRows:
-            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = row[:self.bindColumnCount] #@UnusedVariable
-            home = yield self._txn.homeWithResourceID(self._home._homeType, homeID)
-            new = yield home.objectWithShareUID(bindName)
-            result.append(new)
-
+        for homeUID, homeRID, resourceID, resourceName, bindMode, bindStatus, bindMessage in acceptedRows: #@UnusedVariable
+            invite = SharingInvitation(
+                resourceName,
+                self._home.name(),
+                self._home._resourceID,
+                homeUID,
+                homeRID,
+                resourceID,
+                resourceName if self.bindNameIsResourceName() else self.shareeName(),
+                bindMode,
+                bindStatus,
+                bindMessage,
+            )
+            result.append(invite)
         returnValue(result)
 
 
     @inlineCallbacks
-    def asInvited(self):
-        """
-        Retrieve all the versions of this L{CommonHomeChild} as it is invited to
-        everyone.
-
-        @see: L{ICalendarHome.asInvited}
-
-        @return: L{CommonHomeChild} objects that represent this
-            L{CommonHomeChild} as a child of different L{CommonHome}s
-        @rtype: a L{Deferred} which fires with a L{list} of L{ICalendar}s.
-        """
-        if not self.owned():
-            returnValue([])
-
-        rows = yield self._unacceptedBindForResourceID.on(
-            self._txn, resourceID=self._resourceID,
-        )
-
-        result = []
-        for row in rows:
-            bindMode, homeID, resourceID, bindName, bindStatus, bindRevision, bindMessage = row[:self.bindColumnCount] #@UnusedVariable
-            home = yield self._txn.homeWithResourceID(self._home._homeType, homeID)
-            new = yield home.invitedObjectWithShareUID(bindName)
-            result.append(new)
-
-        returnValue(result)
-
-
-    @inlineCallbacks
     def _initBindRevision(self):
         self._bindRevision = self._syncTokenRevision
 
@@ -3050,6 +3193,20 @@
         yield self.notifyPropertyChanged()
 
 
+    def shareeName(self):
+        """
+        The sharee's name for a shared L{CommonHomeChild} is the name of the resource by default.
+        """
+        return self.name()
+
+
+    def bindNameIsResourceName(self):
+        """
+        By default, the shared resource name of an accepted share is the same as the name in the bind table.
+        """
+        return True
+
+
     def shareStatus(self):
         """
         @see: L{ICalendar.shareStatus}

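The replacement of asShared()/asInvited() with sharingInvites() changes the caller pattern: instead of materializing a full L{CommonHomeChild} per sharee, callers now receive lightweight invitation records and resolve sharee homes on demand, as the new updateShares() loop above does. A minimal synchronous sketch of that loop, with illustrative names standing in for the real (Deferred-based) store API:

```python
from collections import namedtuple

# Stand-in for the SharingInvitation records returned by sharingInvites();
# the field names here are illustrative, not the real class's attributes.
Invite = namedtuple("Invite", ["shareeUID", "shareeHomeID", "mode", "status"])

def unshare_all(invites, home_for_id, unshare_with):
    """Mirror of the new loop: resolve each invite's sharee home by
    resource ID, then unshare with that home."""
    for invite in invites:
        unshare_with(home_for_id(invite.shareeHomeID))

# Toy usage with dict lookup and list append in place of store calls:
homes = {1: "homeA", 2: "homeB"}
removed = []
unshare_all(
    [Invite("u1", 1, "read", "accepted"), Invite("u2", 2, "write", "accepted")],
    homes.__getitem__,
    removed.append,
)
print(removed)  # → ['homeA', 'homeB']
```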
Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/test/test_sql_schema_files.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/test/test_sql_schema_files.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/test/test_sql_schema_files.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -118,3 +118,28 @@
         v5Schema = schemaFromPath(sqlSchema.child("old").child("postgres-dialect").child("v5.sql"))
         mismatched = v6Schema.compare(v5Schema)
         self.assertEqual(len(mismatched), 3, msg="\n".join(mismatched))
+
+
+    def test_references_index(self):
+        """
+        Make sure every column with a REFERENCES constraint in current.sql has a corresponding index.
+        """
+
+        schema = schemaFromPath(getModule(__name__).filePath.parent().sibling("sql_schema").child("current.sql"))
+
+        # Get index details
+        indexed_columns = set()
+        for index in schema.pseudoIndexes():
+            indexed_columns.add("%s.%s" % (index.table.name, index.columns[0].name,))
+        #print indexed_columns
+
+        # Look at each table
+        failures = []
+        for table in schema.tables:
+            for column in table.columns:
+                if column.references is not None:
+                    id = "%s.%s" % (table.name, column.name,)
+                    if id not in indexed_columns:
+                        failures.append(id)
+
+        self.assertEqual(len(failures), 0, msg="Missing index for references columns: %s" % (", ".join(sorted(failures))))

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/test/util.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/test/util.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/test/util.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -199,7 +199,7 @@
             directoryService = TestStoreDirectoryService()
         if self.sharedService is None:
             ready = Deferred()
-            def getReady(connectionFactory):
+            def getReady(connectionFactory, storageService):
                 self.makeAndCleanStore(
                     testCase, notifierFactory, directoryService, attachmentRoot
                 ).chainDeferred(ready)
@@ -236,10 +236,12 @@
 
         @return: a L{Deferred} that fires with a L{CommonDataStore}
         """
-        try:
-            attachmentRoot.createDirectory()
-        except OSError:
-            pass
+
+        # Always clean-out old attachments
+        if attachmentRoot.exists():
+            attachmentRoot.remove()
+        attachmentRoot.createDirectory()
+
         currentTestID = testCase.id()
         cp = ConnectionPool(self.sharedService.produceConnection,
                             maxConnections=5)

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/migrate.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/migrate.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/migrate.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -297,8 +297,8 @@
 
                     appropriateStoreClass = AppleDoubleStore
 
-                    return FileStore(path, None, None, True, True,
-                              propertyStoreClass=appropriateStoreClass)
+                return FileStore(path, None, None, True, True,
+                          propertyStoreClass=appropriateStoreClass)
         return None
 
 

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/sql/others/attachment_migration.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/sql/others/attachment_migration.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/sql/others/attachment_migration.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -17,6 +17,8 @@
 from twisted.internet.defer import inlineCallbacks, returnValue
 from txdav.caldav.datastore.sql import CalendarStoreFeatures
 
+import os
+
 """
 Upgrader that checks for any dropbox attachments, and upgrades them all to managed attachments.
 
@@ -54,6 +56,18 @@
             upgrader.log.warn("No dropbox migration needed")
         if managed is None:
             yield txn.setCalendarserverValue(statusKey, "1")
+
+        # Set attachment directory ownership as upgrade runs as root
+        # but child processes running as something else need to manipulate
+        # the attachment files
+        sqlAttachmentsPath = upgrader.sqlStore.attachmentsPath
+        if (sqlAttachmentsPath and sqlAttachmentsPath.exists() and
+            (upgrader.uid or upgrader.gid)):
+            uid = upgrader.uid or -1
+            gid = upgrader.gid or -1
+            for fp in sqlAttachmentsPath.walk():
+                os.chown(fp.path, uid, gid)
+
     except RuntimeError:
         yield txn.abort()
         raise

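The ownership pass above uses Twisted's FilePath.walk(), which yields the root and every descendant path. For reference, an equivalent sketch using only the standard library; the attachments path and ids in the usage line are hypothetical:

```python
import os

def chown_tree(root, uid=-1, gid=-1):
    """Recursively chown a directory tree. Passing -1 for uid or gid
    leaves that id unchanged, matching the 'upgrader.uid or -1'
    defaults in the migration above. Returns the number of entries
    visited."""
    count = 0
    for dirpath, dirnames, filenames in os.walk(root):
        os.chown(dirpath, uid, gid)
        count += 1
        for name in filenames:
            os.chown(os.path.join(dirpath, name), uid, gid)
            count += 1
    return count

# e.g. chown_tree("/var/db/caldavd/attachments", 93, 93)  # hypothetical path/ids
```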
Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/sql/upgrade.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/sql/upgrade.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/sql/upgrade.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -16,7 +16,7 @@
 ##
 
 """
-Utilities, mostly related to upgrading, common to calendar and addresbook
+Utilities, mostly related to upgrading, common to calendar and addressbook
 data stores.
 """
 
@@ -45,6 +45,7 @@
     def __init__(self, sqlStore):
         self.sqlStore = sqlStore
 
+
     @inlineCallbacks
     def stepWithResult(self, result):
         sqlTxn = self.sqlStore.newTransaction()
@@ -52,6 +53,7 @@
         yield sqlTxn.commit()
 
 
+
 class UpgradeReleaseLockStep(object):
     """
     A Step which releases the upgrade lock.
@@ -64,16 +66,19 @@
     def __init__(self, sqlStore):
         self.sqlStore = sqlStore
 
+
     @inlineCallbacks
     def stepWithResult(self, result):
         sqlTxn = self.sqlStore.newTransaction()
         yield sqlTxn.releaseUpgradeLock()
         yield sqlTxn.commit()
 
+
     def stepWithFailure(self, failure):
         return self.stepWithResult(None)
 
 
+
 class UpgradeDatabaseCoreStep(object):
     """
     Base class for either schema or data upgrades on the database.

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/test/test_migrate.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/test/test_migrate.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/common/datastore/upgrade/test/test_migrate.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -368,3 +368,31 @@
         ):
             object = (yield adbk.addressbookObjectWithName(name))
             self.assertEquals(object.md5(), md5)
+
+
+    def test_fileStoreFromPath(self):
+        """
+        Verify that fileStoreFromPath() will return a CommonDataStore if
+        the given path contains either "calendars" or "addressbooks"
+        sub-directories.  Otherwise it returns None.
+        """
+
+        # No child directories
+        docRootPath = CachingFilePath(self.mktemp())
+        docRootPath.createDirectory()
+        step = UpgradeToDatabaseStep.fileStoreFromPath(docRootPath)
+        self.assertEquals(step, None)
+
+        # "calendars" child directory exists
+        childPath = docRootPath.child("calendars")
+        childPath.createDirectory()
+        step = UpgradeToDatabaseStep.fileStoreFromPath(docRootPath)
+        self.assertTrue(isinstance(step, CommonDataStore))
+        childPath.remove()
+
+        # "addressbooks" child directory exists
+        childPath = docRootPath.child("addressbooks")
+        childPath.createDirectory()
+        step = UpgradeToDatabaseStep.fileStoreFromPath(docRootPath)
+        self.assertTrue(isinstance(step, CommonDataStore))
+        childPath.remove()

Modified: CalendarServer/branches/users/gaya/directorybacker/txdav/xml/base.py
===================================================================
--- CalendarServer/branches/users/gaya/directorybacker/txdav/xml/base.py	2013-08-22 21:19:37 UTC (rev 11632)
+++ CalendarServer/branches/users/gaya/directorybacker/txdav/xml/base.py	2013-08-22 21:45:36 UTC (rev 11633)
@@ -693,7 +693,7 @@
             return date.strftime("%a, %d %b %Y %H:%M:%S GMT")
 
         if type(date) is int:
-            date = format(datetime.datetime.fromtimestamp(date))
+            date = format(datetime.datetime.utcfromtimestamp(date))
         elif type(date) is str:
             pass
         elif type(date) is unicode:
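The fromtimestamp() to utcfromtimestamp() fix above matters because format() stamps the string with a literal "GMT": fromtimestamp() converts to the server's local zone, so on any non-UTC host the rendered HTTP date would carry local wall-clock time mislabeled as GMT. A quick illustration:

```python
import datetime

# POSIX timestamp for 2013-08-22 21:45:36 UTC (this changeset's commit time)
ts = 1377207936

# utcfromtimestamp() yields the UTC wall-clock time, which is what a
# "%a, %d %b %Y %H:%M:%S GMT" string must carry.
utc = datetime.datetime.utcfromtimestamp(ts)
print(utc.strftime("%a, %d %b %Y %H:%M:%S GMT"))
# → Thu, 22 Aug 2013 21:45:36 GMT  (identical on every host)

# fromtimestamp() instead uses the local zone; the same format string
# would then produce a different, mislabeled time on non-UTC servers.
```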