[CalendarServer-changes] [9221] CalendarServer/branches/users/gaya/ldapdirectorybacker

source_changes@macosforge.org
Wed May 2 11:54:12 PDT 2012


Revision: 9221
          http://trac.macosforge.org/projects/calendarserver/changeset/9221
Author:   gaya@apple.com
Date:     2012-05-02 11:54:12 -0700 (Wed, 02 May 2012)
Log Message:
-----------
update to trunk

Modified Paths:
--------------
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/applepush.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/test/test_applepush.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/util.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/calverify.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/cmd.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/terminal.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/test/test_cmd.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/test/test_vfs.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/vfs.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/test/test_calverify.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/conf/caldavd-test.plist
    CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/migration/calendarpromotion.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/display-calendar-events.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/loadtest/__init__.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/loadtest/config.plist
    CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/loadtest/ical.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/sim
    CalendarServer/branches/users/gaya/ldapdirectorybacker/sim
    CalendarServer/branches/users/gaya/ldapdirectorybacker/support/build.sh
    CalendarServer/branches/users/gaya/ldapdirectorybacker/support/pydoctor
    CalendarServer/branches/users/gaya/ldapdirectorybacker/support/shell.sh
    CalendarServer/branches/users/gaya/ldapdirectorybacker/test
    CalendarServer/branches/users/gaya/ldapdirectorybacker/testserver
    CalendarServer/branches/users/gaya/ldapdirectorybacker/twext/web2/dav/resource.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/__init__.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/directory/ldapdirectory.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/ical.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/instance.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/notify.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/resource.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/scheduling/implicit.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/scheduling/test/test_implicit.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/simpleresource.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/stdconfig.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/test/test_icalendar.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/test/test_sharing.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/__init__.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/base/datastore/file.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/base/datastore/util.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/sql.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/test/common.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/test/test_file.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/test/test_sql.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/common/datastore/sql.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/common/datastore/test/util.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/idav.py

Added Paths:
-----------
    CalendarServer/branches/users/gaya/ldapdirectorybacker/bin/calendarserver_monitor_amp_notifications
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/amppush.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/test/test_amppush.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/ampnotifications.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/calverify_diff.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/eventkitframework.py
    CalendarServer/branches/users/gaya/ldapdirectorybacker/doc/RFC/rfc6578-WebDAV Sync.txt

Removed Paths:
-------------
    CalendarServer/branches/users/gaya/ldapdirectorybacker/doc/RFC/draft-daboo-webdav-sync.txt

Property Changed:
----------------
    CalendarServer/branches/users/gaya/ldapdirectorybacker/


Property changes on: CalendarServer/branches/users/gaya/ldapdirectorybacker
___________________________________________________________________
Modified: svn:mergeinfo
   - /CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:8831-8867
   + /CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:8831-8867

Copied: CalendarServer/branches/users/gaya/ldapdirectorybacker/bin/calendarserver_monitor_amp_notifications (from rev 9220, CalendarServer/trunk/bin/calendarserver_monitor_amp_notifications)
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/bin/calendarserver_monitor_amp_notifications	                        (rev 0)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/bin/calendarserver_monitor_amp_notifications	2012-05-02 18:54:12 UTC (rev 9221)
@@ -0,0 +1,33 @@
+#!/usr/bin/env python
+
+##
+# Copyright (c) 2006-2012 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+import sys
+
+#PYTHONPATH
+
+if __name__ == "__main__":
+    if "PYTHONPATH" in globals():
+        sys.path.insert(0, PYTHONPATH)
+    else:
+        try:
+            import _calendarserver_preamble
+        except ImportError:
+            sys.exc_clear()
+
+    from calendarserver.tools.ampnotifications import main
+    main()

Copied: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/amppush.py (from rev 9220, CalendarServer/trunk/calendarserver/push/amppush.py)
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/amppush.py	                        (rev 0)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/amppush.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -0,0 +1,256 @@
+##
+# Copyright (c) 2012 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from calendarserver.push.util import PushScheduler
+from twext.python.log import Logger, LoggingMixIn
+from twext.python.log import LoggingMixIn
+from twisted.application.internet import StreamServerEndpointService
+from twisted.internet.defer import inlineCallbacks, returnValue
+from twisted.internet.endpoints import TCP4ClientEndpoint, TCP4ServerEndpoint
+from twisted.internet.protocol import Factory, ServerFactory
+from twisted.protocols import amp
+from twistedcaldav.notify import getPubSubPath
+import uuid
+
+
+log = Logger()
+
+
+# AMP Commands sent to server
+
+class SubscribeToID(amp.Command):
+    arguments = [('token', amp.String()), ('id', amp.String())]
+    response = [('status', amp.String())]
+
+
+class UnsubscribeFromID(amp.Command):
+    arguments = [('token', amp.String()), ('id', amp.String())]
+    response = [('status', amp.String())]
+
+
+# AMP Commands sent to client
+
+class NotificationForID(amp.Command):
+    arguments = [('id', amp.String())]
+    response = [('status', amp.String())]
+
+
+# Server classes
+
+class AMPPushNotifierService(StreamServerEndpointService, LoggingMixIn):
+    """
+    AMPPushNotifierService allows clients to use AMP to subscribe to,
+    and receive, change notifications.
+    """
+
+    @classmethod
+    def makeService(cls, settings, ignored, serverHostName, reactor=None):
+        return cls(settings, serverHostName, reactor=reactor)
+
+    def __init__(self, settings, serverHostName, reactor=None):
+        if reactor is None:
+            from twisted.internet import reactor
+        factory = AMPPushNotifierFactory(self)
+        endpoint = TCP4ServerEndpoint(reactor, settings["Port"])
+        super(AMPPushNotifierService, self).__init__(endpoint, factory)
+        self.subscribers = []
+
+        if settings["EnableStaggering"]:
+            self.scheduler = PushScheduler(reactor, self.sendNotification,
+                staggerSeconds=settings["StaggerSeconds"])
+        else:
+            self.scheduler = None
+
+        self.serverHostName = serverHostName
+
+    def addSubscriber(self, p):
+        self.log_debug("Added subscriber")
+        self.subscribers.append(p)
+
+    def removeSubscriber(self, p):
+        self.log_debug("Removed subscriber")
+        self.subscribers.remove(p)
+
+    def enqueue(self, op, id):
+        """
+        Sends an AMP push notification to any clients subscribing to this id.
+
+        @param op: The operation that took place, either "create" or "update"
+            (ignored in this implementation)
+        @type op: C{str}
+
+        @param id: The identifier of the resource that was updated, including
+            a prefix indicating whether this is CalDAV or CardDAV related.
+            The prefix is separated from the id with "|", e.g.:
+
+            "CalDAV|abc/def"
+
+            The id is an opaque token as far as this code is concerned, and
+            is used in conjunction with the prefix and the server hostname
+            to build the actual key value that devices subscribe to.
+        @type id: C{str}
+        """
+
+        try:
+            protocol, remainder = id.split("|", 1)
+        except ValueError:
+            # id has no protocol, so we can't do anything with it
+            self.log_error("Notification id '%s' is missing protocol" % (id,))
+            return
+
+        id = getPubSubPath(id, {"host": self.serverHostName})
+
+        tokens = []
+        for subscriber in self.subscribers:
+            token = subscriber.subscribedToID(id)
+            if token is not None:
+                tokens.append(token)
+        if tokens:
+            return self.scheduleNotifications(tokens, id)
+
+
+    @inlineCallbacks
+    def sendNotification(self, token, id):
+        for subscriber in self.subscribers:
+            if subscriber.subscribedToID(id):
+                yield subscriber.notify(token, id)
+
+
+    @inlineCallbacks
+    def scheduleNotifications(self, tokens, id):
+        if self.scheduler is not None:
+            self.scheduler.schedule(tokens, id)
+        else:
+            for token in tokens:
+                yield self.sendNotification(token, id)
+
+
+class AMPPushNotifierProtocol(amp.AMP, LoggingMixIn):
+
+    def __init__(self, service):
+        super(AMPPushNotifierProtocol, self).__init__()
+        self.service = service
+        self.subscriptions = {}
+        self.any = None
+
+    def subscribe(self, token, id):
+        if id == "any":
+            self.any = token
+        else:
+            self.subscriptions[id] = token
+        return {"status" : "OK"}
+    SubscribeToID.responder(subscribe)
+
+    def unsubscribe(self, token, id):
+        try:
+            del self.subscriptions[id]
+        except KeyError:
+            pass
+        return {"status" : "OK"}
+    UnsubscribeFromID.responder(unsubscribe)
+
+    def notify(self, token, id):
+        if self.subscribedToID(id) == token:
+            self.log_debug("Sending notification for %s to %s" % (id, token))
+            return self.callRemote(NotificationForID, id=id)
+
+    def subscribedToID(self, id):
+        if self.any is not None:
+            return self.any
+        return self.subscriptions.get(id, None)
+
+    def connectionLost(self, reason=None):
+        self.service.removeSubscriber(self)
+
+
+class AMPPushNotifierFactory(ServerFactory, LoggingMixIn):
+
+    protocol = AMPPushNotifierProtocol
+
+    def __init__(self, service):
+        self.service = service
+
+    def buildProtocol(self, addr):
+        p = self.protocol(self.service)
+        self.service.addSubscriber(p)
+        p.service = self.service
+        return p
+
+
+# Client classes
+
+class AMPPushClientProtocol(amp.AMP):
+    """
+    Implements the client side of the AMP push protocol.  Whenever
+    the NotificationForID Command arrives, the registered callback
+    will be called with the id.
+    """
+
+    def __init__(self, callback):
+        super(AMPPushClientProtocol, self).__init__()
+        self.callback = callback
+
+    @inlineCallbacks
+    def notificationForID(self, id):
+        yield self.callback(id)
+        returnValue( {"status" : "OK"} )
+
+    NotificationForID.responder(notificationForID)
+
+
+class AMPPushClientFactory(Factory, LoggingMixIn):
+
+    protocol = AMPPushClientProtocol
+
+    def __init__(self, callback):
+        self.callback = callback
+
+    def buildProtocol(self, addr):
+        p = self.protocol(self.callback)
+        return p
+
+
+# Client helper methods
+
+@inlineCallbacks
+def subscribeToIDs(host, port, ids, callback, reactor=None):
+    """
+    Clients can call this helper method to register a callback which
+    will get called whenever a push notification is fired for any
+    id in the ids list.
+
+    @param host: AMP host name to connect to
+    @type host: string
+    @param port: AMP port to connect to
+    @type port: integer
+    @param ids: The push IDs to subscribe to
+    @type ids: list of strings
+    @param callback: The method to call whenever a notification is
+        received.
+    @type callback: callable which is passed an id (string)
+    """
+
+    if reactor is None:
+        from twisted.internet import reactor
+
+    token = str(uuid.uuid4())
+    endpoint = TCP4ClientEndpoint(reactor, host, port)
+    factory = AMPPushClientFactory(callback)
+    protocol = yield endpoint.connect(factory)
+    for id in ids:
+        yield protocol.callRemote(SubscribeToID, token=token, id=id)
+
+    returnValue(factory)
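As a side note on the push-id format described in the `enqueue` docstring above: each id carries a protocol prefix ("CalDAV" or "CardDAV") joined to an opaque token by "|", e.g. "CalDAV|abc/def". The following standalone sketch (not part of this commit; `split_push_id` is a hypothetical helper, not an API in the codebase) illustrates the split-and-validate step that `enqueue` performs before building the subscription key:

```python
# Hypothetical sketch of the "protocol|token" push-id scheme used by
# AMPPushNotifierService.enqueue.  Not from the commit itself.

def split_push_id(push_id):
    """Split 'CalDAV|abc/def' into ('CalDAV', 'abc/def')."""
    try:
        protocol, token = push_id.split("|", 1)
    except ValueError:
        # No "|" separator: the id is missing its protocol prefix, so
        # enqueue() would log an error and drop the notification.
        raise ValueError("Notification id %r is missing protocol" % (push_id,))
    return protocol, token

print(split_push_id("CalDAV|abc/def"))  # ('CalDAV', 'abc/def')
```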

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/applepush.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/applepush.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/applepush.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -20,7 +20,6 @@
 from twext.web2 import responsecode
 from txdav.xml import element as davxml
 from twext.web2.dav.noneprops import NonePropertyStore
-from twext.web2.dav.resource import DAVResource
 from twext.web2.http import Response
 from twext.web2.http_headers import MimeType
 from twext.web2.server import parsePOSTData
@@ -54,7 +53,7 @@
     """
 
     @classmethod
-    def makeService(cls, settings, store, testConnectorClass=None,
+    def makeService(cls, settings, store, serverHostName, testConnectorClass=None,
         reactor=None):
         """
         Creates the various "subservices" that work together to implement

Copied: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/test/test_amppush.py (from rev 9220, CalendarServer/trunk/calendarserver/push/test/test_amppush.py)
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/test/test_amppush.py	                        (rev 0)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/test/test_amppush.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -0,0 +1,115 @@
+##
+# Copyright (c) 2011-2012 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from calendarserver.push.amppush import AMPPushNotifierService, AMPPushNotifierProtocol
+from calendarserver.push.amppush import NotificationForID
+from twistedcaldav.test.util import TestCase
+from twisted.internet.defer import inlineCallbacks
+from twisted.internet.task import Clock
+
+class AMPPushNotifierServiceTests(TestCase):
+
+    @inlineCallbacks
+    def test_AMPPushNotifierService(self):
+
+        settings = {
+            "Service" : "calendarserver.push.amppush.AMPPushNotifierService",
+            "Enabled" : True,
+            "Port" : 62311,
+            "EnableStaggering" : True,
+            "StaggerSeconds" : 3,
+        }
+
+        # Set up the service
+        clock = Clock()
+        service = (yield AMPPushNotifierService.makeService(settings,
+            None, "localhost", reactor=clock))
+
+        self.assertEquals(service.subscribers, [])
+
+        client1 = TestProtocol(service)
+        client1.subscribe("token1", "/CalDAV/localhost/user01/")
+        client1.subscribe("token1", "/CalDAV/localhost/user02/")
+
+        client2 = TestProtocol(service)
+        client2.subscribe("token2", "/CalDAV/localhost/user01/")
+
+        client3 = TestProtocol(service)
+        client3.subscribe("token3", "any")
+
+        service.addSubscriber(client1)
+        service.addSubscriber(client2)
+        service.addSubscriber(client3)
+
+        self.assertEquals(len(service.subscribers), 3)
+
+        self.assertTrue(client1.subscribedToID("/CalDAV/localhost/user01/"))
+        self.assertTrue(client1.subscribedToID("/CalDAV/localhost/user02/"))
+        self.assertFalse(client1.subscribedToID("nonexistent"))
+
+        self.assertTrue(client2.subscribedToID("/CalDAV/localhost/user01/"))
+        self.assertFalse(client2.subscribedToID("/CalDAV/localhost/user02/"))
+
+        self.assertTrue(client3.subscribedToID("/CalDAV/localhost/user01/"))
+        self.assertTrue(client3.subscribedToID("/CalDAV/localhost/user02/"))
+        self.assertTrue(client3.subscribedToID("/CalDAV/localhost/user03/"))
+
+        service.enqueue("update", "CalDAV|user01")
+        self.assertEquals(len(client1.history), 0)
+        self.assertEquals(len(client2.history), 0)
+        self.assertEquals(len(client3.history), 0)
+        clock.advance(1)
+        self.assertEquals(client1.history, [(NotificationForID, {'id': '/CalDAV/localhost/user01/'})])
+        self.assertEquals(len(client2.history), 0)
+        self.assertEquals(len(client3.history), 0)
+        clock.advance(3)
+        self.assertEquals(client2.history, [(NotificationForID, {'id': '/CalDAV/localhost/user01/'})])
+        self.assertEquals(len(client3.history), 0)
+        clock.advance(3)
+        self.assertEquals(client3.history, [(NotificationForID, {'id': '/CalDAV/localhost/user01/'})])
+
+        client1.reset()
+        client2.reset()
+        client2.unsubscribe("token2", "/CalDAV/localhost/user01/")
+        service.enqueue("update", "CalDAV|user01")
+        self.assertEquals(len(client1.history), 0)
+        clock.advance(1)
+        self.assertEquals(client1.history, [(NotificationForID, {'id': '/CalDAV/localhost/user01/'})])
+        self.assertEquals(len(client2.history), 0)
+        clock.advance(3)
+        self.assertEquals(len(client2.history), 0)
+
+        # Turn off staggering
+        service.scheduler = None
+        client1.reset()
+        client2.reset()
+        client2.subscribe("token2", "/CalDAV/localhost/user01/")
+        service.enqueue("update", "CalDAV|user01")
+        self.assertEquals(client1.history, [(NotificationForID, {'id': '/CalDAV/localhost/user01/'})])
+        self.assertEquals(client2.history, [(NotificationForID, {'id': '/CalDAV/localhost/user01/'})])
+
+
+class TestProtocol(AMPPushNotifierProtocol):
+
+    def __init__(self, service):
+        super(TestProtocol, self).__init__(service)
+        self.reset()
+
+    def callRemote(self, cls, **kwds):
+        self.history.append((cls, kwds))
+
+    def reset(self):
+        self.history = []
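The staggering behaviour exercised by the `Clock` in the test above (with `StaggerSeconds: 3`, subscribers receive the same notification 3 seconds apart) can be sketched as a simple delay schedule. This is an illustrative helper only, assuming the first delivery fires immediately; `stagger_schedule` is hypothetical and not part of `PushScheduler`'s API:

```python
# Hypothetical sketch of staggered delivery: one (delay, token) pair per
# subscriber token, spaced stagger_seconds apart, mirroring the
# clock.advance() steps in the test above.

def stagger_schedule(tokens, stagger_seconds):
    """Return (delay, token) pairs, one notification every stagger_seconds."""
    return [(i * stagger_seconds, token) for i, token in enumerate(tokens)]

print(stagger_schedule(["token1", "token2", "token3"], 3))
# [(0, 'token1'), (3, 'token2'), (6, 'token3')]
```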

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/test/test_applepush.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/test/test_applepush.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/test/test_applepush.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -112,7 +112,7 @@
         # Set up the service
         clock = Clock()
         service = (yield ApplePushNotifierService.makeService(settings,
-            self.store, testConnectorClass=TestConnector, reactor=clock))
+            self.store, "localhost", testConnectorClass=TestConnector, reactor=clock))
         self.assertEquals(set(service.providers.keys()), set(["CalDAV","CardDAV"]))
         self.assertEquals(set(service.feedbacks.keys()), set(["CalDAV","CardDAV"]))
 

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/util.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/util.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/push/util.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -154,7 +154,7 @@
         """
         self.log_debug("PushScheduler fired for %s %s" % (token, key))
         del self.outstanding[(token, key)]
-        self.callback(token, key)
+        return self.callback(token, key)
 
     def stop(self):
         """

Copied: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/ampnotifications.py (from rev 9220, CalendarServer/trunk/calendarserver/tools/ampnotifications.py)
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/ampnotifications.py	                        (rev 0)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/ampnotifications.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -0,0 +1,138 @@
+#!/usr/bin/env python
+##
+# Copyright (c) 2012 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+from calendarserver.tools.cmdline import utilityMain
+from getopt import getopt, GetoptError
+from twext.python.log import Logger
+from twisted.application.service import Service
+from twisted.internet.defer import inlineCallbacks, succeed
+from twistedcaldav.config import ConfigurationError
+import os
+import sys
+
+
+from calendarserver.push.amppush import subscribeToIDs
+
+log = Logger()
+
+def usage(e=None):
+
+    name = os.path.basename(sys.argv[0])
+    print "usage: %s [options] [pushkey ...]" % (name,)
+    print ""
+    print "  Monitor AMP Push Notifications"
+    print ""
+    print "options:"
+    print "  -h --help: print this help and exit"
+    print "  -f --config <path>: Specify caldavd.plist configuration path"
+    print "  -p --port <port>: AMP port to connect to"
+    print "  -s --server <hostname>: AMP server to connect to"
+    print ""
+
+    if e:
+        sys.stderr.write("%s\n" % (e,))
+        sys.exit(64)
+    else:
+        sys.exit(0)
+
+
+class WorkerService(Service):
+
+    def __init__(self, store):
+        self._store = store
+
+    @inlineCallbacks
+    def startService(self):
+        try:
+            yield self.doWork()
+        except ConfigurationError, ce:
+            sys.stderr.write("Error: %s\n" % (str(ce),))
+        except Exception, e:
+            sys.stderr.write("Error: %s\n" % (e,))
+            raise
+
+
+class MonitorAMPNotifications(WorkerService):
+
+    ids = []
+    hostname = None
+    port = None
+
+    def doWork(self):
+        return monitorAMPNotifications(self.hostname, self.port, self.ids)
+
+
+def main():
+
+    try:
+        (optargs, args) = getopt(
+            sys.argv[1:], "f:hp:s:", [
+                "config=",
+                "help",
+                "port=",
+                "server=",
+            ],
+        )
+    except GetoptError, e:
+        usage(e)
+
+    #
+    # Get configuration
+    #
+    configFileName = None
+    hostname = "localhost"
+    port = 62311
+
+    for opt, arg in optargs:
+        if opt in ("-h", "--help"):
+            usage()
+
+        elif opt in ("-f", "--config"):
+            configFileName = arg
+
+        elif opt in ("-p", "--port"):
+            port = int(arg)
+
+        elif opt in ("-s", "--server"):
+            hostname = arg
+
+        else:
+            raise NotImplementedError(opt)
+
+    if not args:
+        usage("Not enough arguments")
+
+
+    MonitorAMPNotifications.ids = args
+    MonitorAMPNotifications.hostname = hostname
+    MonitorAMPNotifications.port = port
+
+    utilityMain(
+        configFileName,
+        MonitorAMPNotifications,
+    )
+
+def notificationCallback(id):
+    print "Received notification for:", id
+    return succeed(True)
+
+@inlineCallbacks
+def monitorAMPNotifications(hostname, port, ids):
+    print "Subscribing to notifications..."
+    yield subscribeToIDs(hostname, port, ids, notificationCallback)
+    print "Waiting for notifications..."

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/calverify.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/calverify.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/calverify.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -59,11 +59,15 @@
 from twistedcaldav.util import normalizationLookup
 from txdav.common.datastore.sql_tables import schema, _BIND_MODE_OWN
 from txdav.common.icommondatastore import InternalDataStoreError
+import base64
 import collections
 import os
 import sys
 import time
+import traceback
 
+VERSION = "2"
+
 def usage(e=None):
     if e:
         print e
@@ -78,15 +82,21 @@
         sys.exit(0)
 
 
-description = '\n'.join(
+description = ''.join(
     wordWrap(
         """
-        Usage: calendarserver_verify_data [options] [input specifiers]\n
+        Usage: calendarserver_verify_data [options] [input specifiers]
         """,
         int(os.environ.get('COLUMNS', '80'))
     )
 )
+description += "\nVersion: %s" % (VERSION,)
 
+
+def safePercent(x, y, multiplier=100.0):
+    return ((multiplier * x) / y) if y else 0
+
+
 class CalVerifyOptions(Options):
     """
     Command-line options for 'calendarserver_verify_data'
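
The new `safePercent` helper added above guards the progress and summary math against division by zero when a scan finds no rows. An equivalent standalone sketch:

```python
def safe_percent(x, y, multiplier=100.0):
    # Returns 0 instead of raising ZeroDivisionError when y == 0,
    # matching the safePercent helper added above.
    return ((multiplier * x) / y) if y else 0

print(safe_percent(50, 200))   # 25.0
print(safe_percent(5, 0))      # 0
```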
@@ -96,6 +106,8 @@
 
     optFlags = [
         ['ical', 'i', "Calendar data check."],
+        ['badcua', 'b', "Calendar data check for bad CALENDARSERVER-OLD-CUA only."],
+        ['nobase64', 'n', "Do not apply CALENDARSERVER-OLD-CUA base64 transform when fixing."],
         ['mismatch', 's', "Detect organizer/attendee mismatches."],
         ['missing', 'm', "Show 'orphaned' homes."],
         ['fix', 'x', "Fix problems."],
@@ -151,8 +163,13 @@
         self._directory = None
         
         self.cuaCache = {}
+        self.validForCalendaringUUIDs = {}
         
         self.results = {}
+        self.summary = []
+        self.total = 0
+        self.totalErrors = None
+        self.totalExceptions = None
 
 
     def startService(self):
@@ -168,13 +185,17 @@
         """
         Do the export, stopping the reactor when done.
         """
+        self.output.write("\n---- CalVerify version: %s ----\n" % (VERSION,))
+
         try:
             if self.options["missing"]:
                 yield self.doOrphans()
                 
-            if self.options["mismatch"] or self.options["ical"]:
-                yield self.doScan(self.options["ical"], self.options["mismatch"], self.options["fix"])
+            if self.options["mismatch"] or self.options["ical"] or self.options["badcua"]:
+                yield self.doScan(self.options["ical"] or self.options["badcua"], self.options["mismatch"], self.options["fix"])
 
+            self.printSummary()
+
             self.output.close()
         except:
             log.err()
@@ -185,9 +206,10 @@
     @inlineCallbacks
     def doOrphans(self):
         """
-        Report on home collections for which there are no directory records. 
+        Report on home collections for which there is no directory record, whose record is for a
+        user on a different pod, or whose record is not enabled for calendaring.
         """
-        self.output.write("\n---- Finding calendar homes with no directory record ----\n")
+        self.output.write("\n---- Finding calendar homes with missing or disabled directory records ----\n")
         self.txn = self.store.newTransaction()
 
         if self.options["verbose"]:
@@ -197,16 +219,19 @@
             self.output.write("getAllHomeUIDs time: %.1fs\n" % (time.time() - t,))
         missing = []
         wrong_server = []
+        disabled = []
         uids_len = len(uids)
         uids_div = 1 if uids_len < 100 else uids_len / 100
+        self.addToSummary("Total Homes", uids_len)
 
         for ctr, uid in enumerate(uids):
             if self.options["verbose"] and divmod(ctr, uids_div)[1] == 0:
-                self.output.write("%d of %d (%d%%)\n" % (
+                self.output.write(("\r%d of %d (%d%%)" % (
                     ctr+1,
                     uids_len,
                     ((ctr+1) * 100 / uids_len),
-                ))
+                )).ljust(80))
+                self.output.flush()
 
             record = self.directoryService().recordWithGUID(uid)
             if record is None:
@@ -215,6 +240,9 @@
             elif not record.thisServer():
                 contents = yield self.countHomeContents(uid)
                 wrong_server.append((uid, contents,))
+            elif not record.enabledForCalendaring:
+                contents = yield self.countHomeContents(uid)
+                disabled.append((uid, contents,))
             
             # To avoid holding locks on all the rows scanned, commit every 100 resources
             if divmod(ctr, 100)[1] == 0:
@@ -223,7 +251,9 @@
 
         yield self.txn.commit()
         self.txn = None
-        
+        if self.options["verbose"]:
+            self.output.write("\r".ljust(80) + "\n")
+
         # Print table of results
         table = tables.Table()
         table.addHeader(("Owner UID", "Calendar Objects"))
@@ -236,20 +266,38 @@
         self.output.write("\n")
         self.output.write("Homes without a matching directory record (total=%d):\n" % (len(missing),))
         table.printTable(os=self.output)
+        self.addToSummary("Homes without a matching directory record", len(missing), uids_len)
         
         # Print table of results
         table = tables.Table()
         table.addHeader(("Owner UID", "Calendar Objects"))
         for uid, count in sorted(wrong_server, key=lambda x:x[0]):
+            record = self.directoryService().recordWithGUID(uid)
             table.addRow((
-                uid,
+                "%s/%s (%s)" % (record.recordType if record else "-", record.shortNames[0] if record else "-", uid,),
                 count,
             ))
         
         self.output.write("\n")
         self.output.write("Homes not hosted on this server (total=%d):\n" % (len(wrong_server),))
         table.printTable(os=self.output)
+        self.addToSummary("Homes not hosted on this server", len(wrong_server), uids_len)
         
+        # Print table of results
+        table = tables.Table()
+        table.addHeader(("Owner UID", "Calendar Objects"))
+        for uid, count in sorted(disabled, key=lambda x:x[0]):
+            record = self.directoryService().recordWithGUID(uid)
+            table.addRow((
+                "%s/%s (%s)" % (record.recordType if record else "-", record.shortNames[0] if record else "-", uid,),
+                count,
+            ))
+        
+        self.output.write("\n")
+        self.output.write("Homes without an enabled directory record (total=%d):\n" % (len(disabled),))
+        table.printTable(os=self.output)
+        self.addToSummary("Homes without an enabled directory record", len(disabled), uids_len)
+        
 
     @inlineCallbacks
     def getAllHomeUIDs(self):
@@ -296,10 +344,13 @@
         descriptor = None
         if ical:
             if self.options["uuid"]:
-                rows = yield self.getAllResourceInfoWithUUID(self.options["uuid"])
+                rows = yield self.getAllResourceInfoWithUUID(self.options["uuid"], inbox=True)
                 descriptor = "getAllResourceInfoWithUUID"
+            elif self.options["uid"]:
+                rows = yield self.getAllResourceInfoWithUID(self.options["uid"], inbox=True)
+                descriptor = "getAllResourceInfoWithUID"
             else:
-                rows = yield self.getAllResourceInfo()
+                rows = yield self.getAllResourceInfo(inbox=True)
                 descriptor = "getAllResourceInfo"
         else:
             if self.options["uid"]:
@@ -314,8 +365,11 @@
 
         if self.options["verbose"]:
             self.output.write("%s time: %.1fs\n" % (descriptor, time.time() - t,))
+        
+        self.total = len(rows)
         self.output.write("Number of events to process: %s\n" % (len(rows,)))
         self.results["Number of events to process"] = len(rows)
+        self.addToSummary("Number of events to process", self.total)
         
         # Split into organizer events and attendee events
         self.organized = []
@@ -323,7 +377,26 @@
         self.attended = []
         self.attended_byuid = collections.defaultdict(list)
         self.matched_attendee_to_organizer = collections.defaultdict(set)
-        for owner, resid, uid, md5, organizer, created, modified in rows:
+        skipped = 0
+        inboxes = 0
+        for owner, resid, uid, calname, md5, organizer, created, modified in rows:
+            
+            # Skip owners not enabled for calendaring
+            if not self.testForCalendaringUUID(owner):
+                skipped += 1
+                continue
+
+            # Skip inboxes
+            if calname == "inbox":
+                inboxes += 1
+                continue
+
+            # If targeting a specific organizer, skip events belonging to others
+            if self.options["uuid"]:
+                if not organizer.startswith("urn:uuid:") or self.options["uuid"] != organizer[9:]:
+                    continue
+                
+            # Cache organizer/attendee states
             if organizer.startswith("urn:uuid:") and owner == organizer[9:]:
                 self.organized.append((owner, resid, uid, md5, organizer, created, modified,))
                 self.organized_byuid[uid] = (owner, resid, uid, md5, organizer, created, modified,)
@@ -335,10 +408,19 @@
         self.output.write("Number of attendee events to process: %s\n" % (len(self.attended,)))
         self.results["Number of organizer events to process"] = len(self.organized)
         self.results["Number of attendee events to process"] = len(self.attended)
+        self.results["Number of skipped events"] = skipped
+        self.results["Number of inbox events"] = inboxes
+        self.addToSummary("Number of organizer events to process", len(self.organized), self.total)
+        self.addToSummary("Number of attendee events to process", len(self.attended), self.total)
+        self.addToSummary("Number of skipped events", skipped, self.total)
+        if ical:
+            self.addToSummary("Number of inbox events", inboxes, self.total)
+        self.addSummaryBreak()
 
         if ical:
             yield self.calendarDataCheck(rows)
         elif mismatch:
+            self.totalErrors = 0
             yield self.verifyAllAttendeesForOrganizer()
             yield self.verifyAllOrganizersForAttendee()
         
@@ -346,42 +428,56 @@
 
 
     @inlineCallbacks
-    def getAllResourceInfo(self):
+    def getAllResourceInfo(self, inbox=False):
         co = schema.CALENDAR_OBJECT
         cb = schema.CALENDAR_BIND
         ch = schema.CALENDAR_HOME
+        
+        if inbox:
+            cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
+                    cb.BIND_MODE == _BIND_MODE_OWN)
+        else:
+            cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
+                    cb.BIND_MODE == _BIND_MODE_OWN).And(
+                    cb.CALENDAR_RESOURCE_NAME != "inbox")
+
         kwds = {}
         rows = (yield Select(
-            [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
+            [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
             From=ch.join(
                 cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
-                co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
-                    cb.BIND_MODE == _BIND_MODE_OWN).And(
-                    cb.CALENDAR_RESOURCE_NAME != "inbox")),
-            GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
+                co, type="inner", on=cojoin),
+            GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
         ).on(self.txn, **kwds))
         returnValue(tuple(rows))
 
 
     @inlineCallbacks
-    def getAllResourceInfoWithUUID(self, uuid):
+    def getAllResourceInfoWithUUID(self, uuid, inbox=False):
         co = schema.CALENDAR_OBJECT
         cb = schema.CALENDAR_BIND
         ch = schema.CALENDAR_HOME
+
+        if inbox:
+            cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
+                    cb.BIND_MODE == _BIND_MODE_OWN)
+        else:
+            cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
+                    cb.BIND_MODE == _BIND_MODE_OWN).And(
+                    cb.CALENDAR_RESOURCE_NAME != "inbox")
+
         kwds = {"uuid": uuid}
         if len(uuid) != 36:
             where = (ch.OWNER_UID.StartsWith(Parameter("uuid")))
         else:
             where = (ch.OWNER_UID == Parameter("uuid"))
         rows = (yield Select(
-            [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
+            [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
             From=ch.join(
                 cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
-                co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
-                    cb.BIND_MODE == _BIND_MODE_OWN).And(
-                    cb.CALENDAR_RESOURCE_NAME != "inbox")),
+                co, type="inner", on=cojoin),
             Where=where,
-            GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
+            GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
         ).on(self.txn, **kwds))
         returnValue(tuple(rows))
 
@@ -397,7 +493,7 @@
             "Max"   : pyCalendarTodatetime(PyCalendarDateTime(1900, 1, 1, 0, 0, 0))
         }
         rows = (yield Select(
-            [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
+            [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
             From=ch.join(
                 cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
                 co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
@@ -406,28 +502,35 @@
                     co.ORGANIZER != "")).join(
                 tr, type="left", on=(co.RESOURCE_ID == tr.CALENDAR_OBJECT_RESOURCE_ID)),
             Where=(tr.START_DATE >= Parameter("Start")).Or(co.RECURRANCE_MAX == Parameter("Max")),
-            GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
+            GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
         ).on(self.txn, **kwds))
         returnValue(tuple(rows))
 
 
     @inlineCallbacks
-    def getAllResourceInfoWithUID(self, uid):
+    def getAllResourceInfoWithUID(self, uid, inbox=False):
         co = schema.CALENDAR_OBJECT
         cb = schema.CALENDAR_BIND
         ch = schema.CALENDAR_HOME
+
+        if inbox:
+            cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
+                    cb.BIND_MODE == _BIND_MODE_OWN)
+        else:
+            cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
+                    cb.BIND_MODE == _BIND_MODE_OWN).And(
+                    cb.CALENDAR_RESOURCE_NAME != "inbox")
+
         kwds = {
             "UID" : uid,
         }
         rows = (yield Select(
-            [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
+            [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED],
             From=ch.join(
                 cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
-                co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
-                    cb.BIND_MODE == _BIND_MODE_OWN).And(
-                    cb.CALENDAR_RESOURCE_NAME != "inbox")),
+                co, type="inner", on=cojoin),
             Where=(co.ICALENDAR_UID == Parameter("UID")),
-            GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
+            GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,),
         ).on(self.txn, **kwds))
         returnValue(tuple(rows))
 
@@ -443,13 +546,29 @@
             From=ch.join(
                 cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
                 co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
-                    cb.BIND_MODE == _BIND_MODE_OWN).And(
-                    cb.CALENDAR_RESOURCE_NAME != "inbox")),
+                    cb.BIND_MODE == _BIND_MODE_OWN)),
             Where=(co.RESOURCE_ID == Parameter("resid")),
         ).on(self.txn, **kwds))
         returnValue(rows[0])
 
 
+    def testForCalendaringUUID(self, uuid):
+        """
+        Determine if the specified directory UUID is valid for calendaring. Keep a cache of
+        valid and invalid so we can do this quickly.
+
+        @param uuid: the directory UUID to test
+        @type uuid: C{str}
+        
+        @return: C{True} if valid, C{False} if not
+        """
+
+        if uuid not in self.validForCalendaringUUIDs:
+            record = self.directoryService().recordWithGUID(uuid)
+            self.validForCalendaringUUIDs[uuid] = record is not None and record.enabledForCalendaring and record.thisServer()
+        return self.validForCalendaringUUIDs[uuid]
+
+
     @inlineCallbacks
     def calendarDataCheck(self, rows):
         """
@@ -466,17 +585,25 @@
         count = 0
         total = len(rows)
         badlen = 0
-        for owner, resid, uid, _ignore_md5, _ignore_organizer, _ignore_created, _ignore_modified in rows:
-            result, message = yield self.validCalendarData(resid)
+        rjust = 10
+        for owner, resid, uid, calname, _ignore_md5, _ignore_organizer, _ignore_created, _ignore_modified in rows:
+            result, message = yield self.validCalendarData(resid, calname == "inbox")
             if not result:
                 results_bad.append((owner, uid, resid, message))
                 badlen += 1
             count += 1
             if self.options["verbose"]:
                 if count == 1:
-                    self.output.write("Bad/Current/Total\n")
+                    self.output.write("Bad".rjust(rjust) + "Current".rjust(rjust) + "Total".rjust(rjust) + "Complete".rjust(rjust) + "\n")
                 if divmod(count, 100)[1] == 0:
-                    self.output.write("%s/%s/%s\n" % (badlen, count, total,))
+                    self.output.write((
+                        "\r" + 
+                        ("%s" % badlen).rjust(rjust) +
+                        ("%s" % count).rjust(rjust) +
+                        ("%s" % total).rjust(rjust) +
+                        ("%d%%" % safePercent(count, total)).rjust(rjust)
+                    ).ljust(80))
+                    self.output.flush()
             
             # To avoid holding locks on all the rows scanned, commit every 100 resources
             if divmod(count, 100)[1] == 0:
@@ -485,6 +612,14 @@
 
         yield self.txn.commit()
         self.txn = None
+        if self.options["verbose"]:
+                    self.output.write((
+                        "\r" + 
+                        ("%s" % badlen).rjust(rjust) +
+                        ("%s" % count).rjust(rjust) +
+                        ("%s" % total).rjust(rjust) +
+                        ("%d%%" % safePercent(count, total)).rjust(rjust)
+                    ).ljust(80) + "\n")
         
         # Print table of results
         table = tables.Table()
@@ -504,6 +639,7 @@
         table.printTable(os=self.output)
         
         self.results["Bad iCalendar data"] = results_bad
+        self.addToSummary("Bad iCalendar data", len(results_bad), total)
          
         if self.options["verbose"]:
             diff_time = time.time() - t
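
The verbose progress output above redraws a single status line in place: `"\r"` returns the cursor to column 0, and `ljust(80)` pads over any residue left by a previous, possibly longer, update. A minimal sketch of that technique:

```python
import io

def show_progress(out, bad, count, total, rjust=10):
    # "\r" returns to column 0; ljust(80) pads over leftovers from the
    # previous update so the line redraws cleanly without a newline.
    pct = (100.0 * count / total) if total else 0
    out.write(("\r"
               + ("%s" % bad).rjust(rjust)
               + ("%s" % count).rjust(rjust)
               + ("%s" % total).rjust(rjust)
               + ("%d%%" % pct).rjust(rjust)).ljust(80))

buf = io.StringIO()           # capture instead of writing to a terminal
show_progress(buf, 2, 100, 400)
line = buf.getvalue()
```

On a real terminal the caller would pass `sys.stdout` and call `flush()` after each update, as the tool above does.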
@@ -515,22 +651,23 @@
     errorPrefix = "Calendar data had unfixable problems:\n  "
 
     @inlineCallbacks
-    def validCalendarData(self, resid):
+    def validCalendarData(self, resid, isinbox):
         """
         Check the calendar resource for valid iCalendar data.
         """
 
-        caldata = yield self.getCalendar(resid)
+        caldata = yield self.getCalendar(resid, self.fix)
         if caldata is None:
-            returnValue((False, "Failed to parse"))
+            returnValue((False, self.parseError))
 
         component = Component(None, pycalendar=caldata)
         result = True
         message = ""
         try:
-            component.validCalendarData(doFix=False, validateRecurrences=True)
-            component.validCalendarForCalDAV(methodAllowed=False)
-            component.validOrganizerForScheduling(doFix=False)
+            if self.options["ical"]:
+                component.validCalendarData(doFix=False, validateRecurrences=True)
+                component.validCalendarForCalDAV(methodAllowed=isinbox)
+                component.validOrganizerForScheduling(doFix=False)
             self.noPrincipalPathCUAddresses(component, doFix=False)
         except ValueError, e:
             result = False
@@ -540,7 +677,7 @@
             lines = message.splitlines()
             message = lines[0] + (" ++" if len(lines) > 1 else "")
             if self.fix:
-                fixresult, fixmessage = yield self.fixCalendarData(resid)
+                fixresult, fixmessage = yield self.fixCalendarData(resid, isinbox)
                 if fixresult:
                     message = "Fixed: " + message
                 else:
@@ -558,6 +695,12 @@
                 return self.cuaCache[cuaddr]
     
             result = normalizationLookup(cuaddr, principalFunction, config)
+            _ignore_name, guid, _ignore_cuaddrs = result
+            if guid is None:
+                if cuaddr.find("__uids__") != -1:
+                    guid = cuaddr[cuaddr.find("__uids__/")+9:][:36]
+                    result = "", guid, set()
+
     
             # Cache the result
             self.cuaCache[cuaddr] = result
@@ -567,20 +710,36 @@
             if subcomponent.name() in ignoredComponents:
                 continue
             organizer = subcomponent.getProperty("ORGANIZER")
-            if organizer and organizer.value().startswith("http"):
-                if doFix:
-                    component.normalizeCalendarUserAddresses(lookupFunction, self.directoryService().principalForCalendarUserAddress)
-                else:
-                    raise InvalidICalendarDataError("iCalendar ORGANIZER starts with 'http(s)'")
+            if organizer:
+                if organizer.value().startswith("http"):
+                    if doFix:
+                        component.normalizeCalendarUserAddresses(lookupFunction, self.directoryService().principalForCalendarUserAddress)
+                    else:
+                        raise InvalidICalendarDataError("iCalendar ORGANIZER starts with 'http(s)'")
+                elif organizer.hasParameter("CALENDARSERVER-OLD-CUA"):
+                    oldcua = organizer.parameterValue("CALENDARSERVER-OLD-CUA")
+                    if not oldcua.startswith("base64-") and not self.options["nobase64"]:
+                        if doFix:
+                            organizer.setParameter("CALENDARSERVER-OLD-CUA", "base64-%s" % (base64.b64encode(oldcua)))
+                        else:
+                            raise InvalidICalendarDataError("iCalendar ORGANIZER CALENDARSERVER-OLD-CUA not base64")
+
             for attendee in subcomponent.properties("ATTENDEE"):
                 if attendee.value().startswith("http"):
                     if doFix:
                         component.normalizeCalendarUserAddresses(lookupFunction, self.directoryService().principalForCalendarUserAddress)
                     else:
                         raise InvalidICalendarDataError("iCalendar ATTENDEE starts with 'http(s)'")
+                elif attendee.hasParameter("CALENDARSERVER-OLD-CUA"):
+                    oldcua = attendee.parameterValue("CALENDARSERVER-OLD-CUA")
+                    if not oldcua.startswith("base64-") and not self.options["nobase64"]:
+                        if doFix:
+                            attendee.setParameter("CALENDARSERVER-OLD-CUA", "base64-%s" % (base64.b64encode(oldcua)))
+                        else:
+                            raise InvalidICalendarDataError("iCalendar ATTENDEE CALENDARSERVER-OLD-CUA not base64")
 
     @inlineCallbacks
-    def fixCalendarData(self, resid):
+    def fixCalendarData(self, resid, isinbox):
         """
         Fix problems in calendar data using store APIs.
         """
@@ -598,9 +757,10 @@
         result = True
         message = ""
         try:
-            component.validCalendarData(doFix=True, validateRecurrences=True)
-            component.validCalendarForCalDAV(methodAllowed=False)
-            component.validOrganizerForScheduling(doFix=True)
+            if self.options["ical"]:
+                component.validCalendarData(doFix=True, validateRecurrences=True)
+                component.validCalendarForCalDAV(methodAllowed=isinbox)
+                component.validOrganizerForScheduling(doFix=True)
             self.noPrincipalPathCUAddresses(component, doFix=True)
         except ValueError:
             result = False
@@ -608,14 +768,83 @@
         
         if result:
             # Write out fix, commit and get a new transaction
-            component = yield calendarObj.setComponent(component)
-            #yield self.txn.commit()
-            #self.txn = self.store.newTransaction()
+            try:
+                # Use _migrating to ignore possible overridden instance errors - we are either correcting or ignoring those
+                self.txn._migrating = True
+                component = yield calendarObj.setComponent(component)
+            except Exception, e:
+                print e, component
+                traceback.print_exc()
+                result = False
+                message = "Exception fix: "
+            yield self.txn.commit()
+            self.txn = self.store.newTransaction()
 
         returnValue((result, message,))
 
 
     @inlineCallbacks
+    def fixBadOldCua(self, resid, caltxt):
+        """
+        Fix bad CALENDARSERVER-OLD-CUA lines and write fixed data to store. Assumes iCalendar data lines unfolded.
+        """
+
+        # Get store objects
+        homeID, calendarID = yield self.getAllResourceInfoForResourceID(resid)
+        home = yield self.txn.calendarHomeWithResourceID(homeID)
+        calendar = yield home.childWithID(calendarID)
+        calendarObj = yield calendar.objectResourceWithID(resid)
+        
+        # Do raw data fix one line at a time
+        caltxt = self.fixBadOldCuaLines(caltxt)
+        
+        # Re-parse
+        try:
+            component = Component.fromString(caltxt)
+        except InvalidICalendarDataError:
+            returnValue(None)
+
+        # Write out fix, commit and get a new transaction
+        # Use _migrating to ignore possible overridden instance errors - we are either correcting or ignoring those
+        self.txn._migrating = True
+        component = yield calendarObj.setComponent(component)
+        yield self.txn.commit()
+        self.txn = self.store.newTransaction()
+
+        returnValue(caltxt)
+
+
+    def fixBadOldCuaLines(self, caltxt):
+        """
+        Fix bad CALENDARSERVER-OLD-CUA lines. Assumes iCalendar data lines unfolded.
+        """
+
+        # Do raw data fix one line at a time
+        lines = caltxt.splitlines()
+        for ctr, line in enumerate(lines):
+            startpos = line.find(";CALENDARSERVER-OLD-CUA=\"//")
+            if startpos != -1:
+                endpos = line.find("urn:uuid:")
+                if endpos != -1:
+                    endpos += len("urn:uuid:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX\"")
+                    badparam = line[startpos+len(";CALENDARSERVER-OLD-CUA=\""):endpos]
+                    endbadparam = badparam.find(";")
+                    if endbadparam != -1:
+                        badparam = badparam[:endbadparam].replace("\\", "")
+                        if badparam.find("8443") != -1:
+                            badparam = "https:" + badparam
+                        else:
+                            badparam = "http:" + badparam
+                        if self.options["nobase64"]:
+                            badparam = "\"" + badparam + "\""
+                        else:
+                            badparam = "base64-%s" % (base64.b64encode(badparam),)
+                        badparam = ";CALENDARSERVER-OLD-CUA=" + badparam
+                        lines[ctr] = line[:startpos] + badparam + line[endpos:]
+        caltxt = "\r\n".join(lines) + "\r\n"
+        return caltxt
+
+    @inlineCallbacks
     def verifyAllAttendeesForOrganizer(self):
         """
         Make sure that for each organizer, each referenced attendee has a consistent view of the organizer's event.
@@ -633,21 +862,24 @@
         organizer_div = 1 if organized_len < 100 else organized_len / 100
 
         # Test organized events
+        t = time.time()
         for ctr, organizerEvent in enumerate(self.organized):
             
             if self.options["verbose"] and divmod(ctr, organizer_div)[1] == 0:
-                self.output.write("%d of %d (%d%%) Missing: %d  Mismatched: %s\n" % (
+                self.output.write(("\r%d of %d (%d%%) Missing: %d  Mismatched: %s" % (
                     ctr+1,
                     organized_len,
                     ((ctr+1) * 100 / organized_len),
                     len(results_missing),
                     len(results_mismatch),
-                ))
+                )).ljust(80))
+                self.output.flush()
 
-            # To avoid holding locks on all the rows scanned, commit every 100 resources
-            if divmod(ctr, 100)[1] == 0:
+            # To avoid holding locks on all the rows scanned, commit every 10 seconds
+            if time.time() - t > 10:
                 yield self.txn.commit()
                 self.txn = self.store.newTransaction()
+                t = time.time()
 
             # Get the organizer's view of attendee states            
             organizer, resid, uid, _ignore_md5, _ignore_organizer, org_created, org_modified = organizerEvent
@@ -681,8 +913,8 @@
 
                 self.matched_attendee_to_organizer[uid].add(organizerAttendee)
                 
-                attendeeRecord = self.directoryService().recordWithGUID(organizerAttendee)
-                if attendeeRecord is None or not attendeeRecord.thisServer():
+                # Skip attendees not enabled for calendaring
+                if not self.testForCalendaringUUID(organizerAttendee):
                     continue
 
                 # If an entry for the attendee exists, then check whether attendee status matches
@@ -731,7 +963,9 @@
 
         yield self.txn.commit()
         self.txn = None
-                
+        if self.options["verbose"]:
+            self.output.write("\r".ljust(80) + "\n")
+
         # Print table of results
         table = tables.Table()
         table.addHeader(("Organizer", "Attendee", "Event UID", "Organizer RID", "Created", "Modified",))
@@ -746,13 +980,15 @@
                 uid,
                 resid,
                 created,
-                modified,
+                "" if modified == created else modified,
             ))
         
         self.output.write("\n")
         self.output.write("Events missing from Attendee's calendars (total=%d):\n" % (len(results_missing),))
         table.printTable(os=self.output)
-            
+        self.addToSummary("Events missing from Attendee's calendars", len(results_missing), self.total)
+        self.totalErrors += len(results_missing)
+
         # Print table of results
         table = tables.Table()
         table.addHeader(("Organizer", "Attendee", "Event UID", "Organizer RID", "Created", "Modified", "Attendee RID", "Created", "Modified",))
@@ -767,15 +1003,17 @@
                 uid,
                 org_resid,
                 org_created,
-                org_modified,
+                "" if org_modified == org_created else org_modified,
                 attendeeResIDs[(attendee, uid)],
                 att_created,
-                att_modified,
+                "" if att_modified == att_created else att_modified,
             ))
         
         self.output.write("\n")
         self.output.write("Events mismatched between Organizer's and Attendee's calendars (total=%d):\n" % (len(results_mismatch),))
         table.printTable(os=self.output)
+        self.addToSummary("Events mismatched between Organizer's and Attendee's calendars", len(results_mismatch), self.total)
+        self.totalErrors += len(results_mismatch)
 
 
     @inlineCallbacks
@@ -793,21 +1031,24 @@
         attended_len = len(self.attended)
         attended_div = 1 if attended_len < 100 else attended_len / 100
 
+        t = time.time()
         for ctr, attendeeEvent in enumerate(self.attended):
             
             if self.options["verbose"] and divmod(ctr, attended_div)[1] == 0:
-                self.output.write("%d of %d (%d%%) Missing: %d  Mismatched: %s\n" % (
+                self.output.write(("\r%d of %d (%d%%) Missing: %d  Mismatched: %s" % (
                     ctr+1,
                     attended_len,
                     ((ctr+1) * 100 / attended_len),
                     len(missing),
                     len(mismatched),
-                ))
+                )).ljust(80))
+                self.output.flush()
 
-            # To avoid holding locks on all the rows scanned, commit every 100 resources
-            if divmod(ctr, 100)[1] == 0:
+            # To avoid holding locks on all the rows scanned, commit every 10 seconds
+            if time.time() - t > 10:
                 yield self.txn.commit()
                 self.txn = self.store.newTransaction()
+                t = time.time()
 
             attendee, resid, uid, _ignore_md5, organizer, att_created, att_modified = attendeeEvent
             calendar = yield self.getCalendar(resid)
@@ -822,8 +1063,8 @@
                 continue
             organizer = organizer[9:]
 
-            organizerRecord = self.directoryService().recordWithGUID(organizer)
-            if organizerRecord is None or not organizerRecord.thisServer():
+            # Skip organizers not enabled for calendaring
+            if not self.testForCalendaringUUID(organizer):
                 continue
 
             if uid not in self.organized_byuid:
@@ -853,6 +1094,8 @@
 
         yield self.txn.commit()
         self.txn = None
+        if self.options["verbose"]:
+            self.output.write("\r".ljust(80) + "\n")
 
         # Print table of results
         table = tables.Table()
@@ -872,12 +1115,14 @@
                 uid,
                 resid,
                 created,
-                modified,
+                "" if modified == created else modified,
             ))
         
         self.output.write("\n")
         self.output.write("Attendee events missing in Organizer's calendar (total=%d, unique=%d):\n" % (len(missing), len(unique_set),))
         table.printTable(os=self.output)
+        self.addToSummary("Attendee events missing in Organizer's calendar", len(missing), self.total)
+        self.totalErrors += len(missing)
 
         # Print table of results
         table = tables.Table()
@@ -898,16 +1143,53 @@
                 self.organized_byuid[uid][6],
                 resid,
                 att_created,
-                att_modified,
+                "" if att_modified == att_created else att_modified,
             ))
         
         self.output.write("\n")
         self.output.write("Attendee events mismatched in Organizer's calendar (total=%d):\n" % (len(mismatched),))
         table.printTable(os=self.output)
+        self.addToSummary("Attendee events mismatched in Organizer's calendar", len(mismatched), self.total)
+        self.totalErrors += len(mismatched)
 
 
+    def addToSummary(self, title, count, total=None):
+        if total is not None:
+            percent = safePercent(count, total)
+        else:
+            percent = ""
+        self.summary.append((title, count, percent))
+
+
+    def addSummaryBreak(self):
+        self.summary.append(None)
+
+
+    def printSummary(self):
+        # Print summary of results
+        table = tables.Table()
+        table.addHeader(("Item", "Count", "%"))
+        table.setDefaultColumnFormats(
+            (
+                tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY), 
+                tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
+                tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
+            )
+        )
+        for item in self.summary:
+            table.addRow(item)
+
+        if self.totalErrors is not None:
+            table.addRow(None)
+            table.addRow(("Total Errors", self.totalErrors, safePercent(self.totalErrors, self.total),))
+        
+        self.output.write("\n")
+        self.output.write("Overall Summary:\n")
+        table.printTable(os=self.output)
+
+
     @inlineCallbacks
-    def getCalendar(self, resid):
+    def getCalendar(self, resid, doFix=False):
         co = schema.CALENDAR_OBJECT
         kwds = { "ResourceID" : resid }
         rows = (yield Select(
@@ -920,7 +1202,24 @@
         try:
             caldata = PyCalendar.parseText(rows[0][0]) if rows else None
         except PyCalendarError:
-            caldata = None
+            caltxt = rows[0][0] if rows else None
+            if caltxt:
+                caltxt = caltxt.replace("\r\n ", "")
+                if caltxt.find("CALENDARSERVER-OLD-CUA=\"//") != -1:
+                    if doFix:
+                        caltxt = (yield self.fixBadOldCua(resid, caltxt))
+                        try:
+                            caldata = PyCalendar.parseText(caltxt) if rows else None
+                        except PyCalendarError:
+                            self.parseError = "No fix bad CALENDARSERVER-OLD-CUA"
+                            returnValue(None)
+                    else:
+                        self.parseError = "Bad CALENDARSERVER-OLD-CUA"
+                        returnValue(None)
+            
+            self.parseError = "Failed to parse"
+            returnValue(None)
+
         returnValue(caldata)
 
 

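The new `fixBadOldCuaLines` hunk above rewrites malformed `CALENDARSERVER-OLD-CUA` parameters by cutting the value at its first ";", re-attaching an http/https scheme (https when the value carries port 8443), and optionally base64-encoding the result. A minimal standalone sketch of that repair in modern Python (the example property line is hypothetical):

```python
import base64

def fix_old_cua_lines(caltxt, use_base64=True):
    """Sketch of the CALENDARSERVER-OLD-CUA repair above.

    Assumes unfolded iCalendar lines. A bad parameter looks like
    ;CALENDARSERVER-OLD-CUA="//host:port/...;...urn:uuid:<36 chars>"
    and is rewritten to an http(s) URL, base64-encoded by default.
    """
    marker = ';CALENDARSERVER-OLD-CUA="//'
    lines = caltxt.splitlines()
    for i, line in enumerate(lines):
        start = line.find(marker)
        if start == -1:
            continue
        end = line.find("urn:uuid:")
        if end == -1:
            continue
        # Skip past "urn:uuid:", a 36-character UUID, and the closing quote
        end += len("urn:uuid:") + 36 + 1
        bad = line[start + len(';CALENDARSERVER-OLD-CUA="'):end]
        semi = bad.find(";")
        if semi == -1:
            continue
        bad = bad[:semi].replace("\\", "")
        scheme = "https:" if "8443" in bad else "http:"
        fixed = scheme + bad
        if use_base64:
            fixed = "base64-%s" % base64.b64encode(fixed.encode()).decode()
        else:
            fixed = '"%s"' % fixed
        lines[i] = line[:start] + ";CALENDARSERVER-OLD-CUA=" + fixed + line[end:]
    return "\r\n".join(lines) + "\r\n"
```

The cut at the first ";" discards the trailing junk (including the embedded urn:uuid remnant) that made the original parameter unparseable.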
Copied: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/calverify_diff.py (from rev 9220, CalendarServer/trunk/calendarserver/tools/calverify_diff.py)
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/calverify_diff.py	                        (rev 0)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/calverify_diff.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -0,0 +1,152 @@
+#!/usr/bin/env python
+# -*- test-case-name: calendarserver.tools.test.test_calverify -*-
+##
+# Copyright (c) 2012 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+import getopt
+import sys
+import os
+
+
+def analyze(fname):
+    
+    lines = open(os.path.expanduser(fname)).read().splitlines()
+    total = len(lines)
+    ctr = 0
+    results = {
+        "table1": [],
+        "table2": [],
+        "table3": [],
+        "table4": [],
+    }
+
+    def _tableParser(ctr, tableName, parseFn):
+        ctr += 4
+        while ctr < total:
+            line = lines[ctr]
+            if line.startswith("+------"):
+                break
+            else:
+                results[tableName].append(parseFn(line))
+            ctr += 1
+        return ctr
+
+    while ctr < total:
+        line = lines[ctr]
+        if line.startswith("Events missing from Attendee's calendars"):
+            ctr = _tableParser(ctr, "table1", parseTableMissing)
+        elif line.startswith("Events mismatched between Organizer's and Attendee's calendars"):
+            ctr = _tableParser(ctr, "table2", parseTableMismatch)
+        elif line.startswith("Attendee events missing in Organizer's calendar"):
+            ctr = _tableParser(ctr, "table3", parseTableMissing)
+        elif line.startswith("Attendee events mismatched in Organizer's calendar"):
+            ctr = _tableParser(ctr, "table4", parseTableMismatch)
+        ctr += 1
+    
+    return results
+
+def parseTableMissing(line):
+    splits = line.split("|")
+    organizer = splits[1].strip()
+    attendee = splits[2].strip()
+    uid = splits[3].strip()
+    resid = splits[4].strip()
+    return (organizer, attendee, uid, resid,)
+
+def parseTableMismatch(line):
+    splits = line.split("|")
+    organizer = splits[1].strip()
+    attendee = splits[2].strip()
+    uid = splits[3].strip()
+    organizer_resid = splits[4].strip()
+    attendee_resid = splits[7].strip()
+    return (organizer, attendee, uid, organizer_resid, attendee_resid,)
+
+def diff(results1, results2):
+    
+    print "\n\nEvents missing from Attendee's calendars"
+    diffSets(results1["table1"], results2["table1"])
+    
+    print "\n\nEvents mismatched between Organizer's and Attendee's calendars"
+    diffSets(results1["table2"], results2["table2"])
+    
+    print "\n\nAttendee events missing in Organizer's calendar"
+    diffSets(results1["table3"], results2["table3"])
+    
+    print "\n\nAttendee events mismatched in Organizer's calendar"
+    diffSets(results1["table4"], results2["table4"])
+
+def diffSets(results1, results2):
+    
+    s1 = set(results1)
+    s2 = set(results2)
+    
+    d = s1 - s2
+    print "\nIn first, not in second: (%d)" % (len(d),)
+    for i in sorted(d):
+        print i
+    
+    d = s2 - s1
+    print "\nIn second, not in first: (%d)" % (len(d),)
+    for i in sorted(d):
+        print i
+
+def usage(error_msg=None):
+    if error_msg:
+        print error_msg
+
+    print """Usage: calverify_diff [options] FILE1 FILE2
+Options:
+    -h          Print this help and exit
+
+Arguments:
+    FILE1     File containing calverify output to analyze
+    FILE2     File containing calverify output to analyze
+
+Description:
+    This utility will analyze the output of two calverify runs
+    and show what is different between the two.
+"""
+
+    if error_msg:
+        raise ValueError(error_msg)
+    else:
+        sys.exit(0)
+
+
+if __name__ == '__main__':
+
+    options, args = getopt.getopt(sys.argv[1:], "h", [])
+
+    for option, value in options:
+        if option == "-h":
+            usage()
+        else:
+            usage("Unrecognized option: %s" % (option,))
+
+    if len(args) != 2:
+        usage("Must have two arguments")
+    else:
+        fname1 = args[0]
+        fname2 = args[1]
+
+    print "*** CalVerify diff from %s to %s" % (
+        os.path.basename(fname1),
+        os.path.basename(fname2),
+    )
+    results1 = analyze(fname1)
+    results2 = analyze(fname2)
+    diff(results1, results2)

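The `parseTableMissing`/`parseTableMismatch` helpers in the new calverify_diff.py both rely on the calverify tables being pipe-delimited, with an empty first split because each row starts with "|". A small sketch of that row parsing (the sample row values are made up):

```python
def parse_table_missing(line):
    # Mirrors parseTableMissing above: split on "|" and strip each cell;
    # cols[0] is the empty string before the leading "|".
    cols = [c.strip() for c in line.split("|")]
    return (cols[1], cols[2], cols[3], cols[4])

def diff_sets(rows1, rows2):
    # Mirrors diffSets above, returning both one-sided differences
    # instead of printing them.
    s1, s2 = set(rows1), set(rows2)
    return sorted(s1 - s2), sorted(s2 - s1)
```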
Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/cmd.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/cmd.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/cmd.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -25,6 +25,8 @@
     "Commands",
 ]
 
+from getopt import getopt
+
 #from twisted.python import log
 from twisted.internet.defer import succeed
 from twisted.internet.defer import inlineCallbacks, returnValue
@@ -33,7 +35,10 @@
 
 from txdav.common.icommondatastore import NotFoundError
 
+from calendarserver.version import version
+from calendarserver.tap.util import getRootResource
 from calendarserver.tools.tables import Table
+from calendarserver.tools.purge import purgeUID
 from calendarserver.tools.shell.vfs import Folder, RootFolder
 from calendarserver.tools.shell.directory import findRecords, summarizeRecords, recordInfo
 
@@ -49,10 +54,18 @@
     Unknown arguments.
     """
     def __init__(self, arguments):
-        Exception.__init__(self, "Unknown arguments: %s" % (arguments,))
+        UsageError.__init__(self, "Unknown arguments: %s" % (arguments,))
         self.arguments = arguments
 
 
+class InsufficientArguments(UsageError):
+    """
+    Insufficient arguments.
+    """
+    def __init__(self):
+        UsageError.__init__(self, "Insufficient arguments.")
+
+
 class CommandsBase(object):
     def __init__(self, protocol):
         self.protocol = protocol
@@ -67,17 +80,25 @@
     # Utilities
     #
 
-    def getTarget(self, tokens):
+    def getTarget(self, tokens, wdFallback=False):
+        """
+        Pops the first token from tokens and locates the File
+        indicated by that token.
+        @return: a C{File}.
+        """
         if tokens:
             return self.wd.locate(tokens.pop(0).split("/"))
         else:
-            return succeed(self.wd)
+            if wdFallback:
+                return succeed(self.wd)
+            else:
+                return succeed(None)
 
     @inlineCallbacks
-    def getTargets(self, tokens):
+    def getTargets(self, tokens, wdFallback=False):
         """
         For each given C{token}, locate a File to operate on.
-        @return: iterable of File objects.
+        @return: iterable of C{File} objects.
         """
         if tokens:
             result = []
@@ -85,7 +106,10 @@
                 result.append((yield self.wd.locate(token.split("/"))))
             returnValue(result)
         else:
-            returnValue((self.wd,))
+            if wdFallback:
+                returnValue((self.wd,))
+            else:
+                returnValue(())
 
     def commands(self, showHidden=False):
         """
@@ -143,18 +167,32 @@
         if filter is None:
             filter = lambda item: True
 
+        if tokens:
+            token = tokens[-1]
+
+            i = token.rfind("/")
+            if i == -1:
+                # No "/" in token
+                base = self.wd
+                word = token
+            else:
+                base = (yield self.wd.locate(token[:i].split("/")))
+                word = token[i+1:]
+
+        else:
+            base = self.wd
+            word = ""
+
         files = (
             entry.toString()
-            for entry in (yield self.wd.list())
+            for entry in (yield base.list())
             if filter(entry)
         )
 
         if len(tokens) == 0:
             returnValue(files)
-        elif len(tokens) == 1:
-            returnValue(self.complete(tokens[0], files))
         else:
-            returnValue(())
+            returnValue(self.complete(word, files))
 
 
 class Commands(CommandsBase):
@@ -325,6 +363,18 @@
     cmd_log.hidden = "debug tool"
 
 
+    def cmd_version(self, tokens):
+        """
+        Print version.
+
+        usage: version
+        """
+        if tokens:
+            raise UnknownArguments(tokens)
+
+        self.terminal.write("%s\n" % (version,))
+
+
     #
     # Filesystem tools
     #
@@ -380,11 +430,11 @@
 
         usage: ls [folder]
         """
-        targets = (yield self.getTargets(tokens))
+        targets = (yield self.getTargets(tokens, wdFallback=True))
         multiple = len(targets) > 0
 
         for target in targets:
-            entries = (yield target.list())
+            entries = sorted((yield target.list()), key=lambda e: e.fileName)
             #
             # FIXME: this can be ugly if, for example, there are zillions
             # of entries to output. Paging would be good.
@@ -409,7 +459,7 @@
 
         usage: info [folder]
         """
-        target = (yield self.getTarget(tokens))
+        target = (yield self.getTarget(tokens, wdFallback=True))
 
         if tokens:
             raise UnknownArguments(tokens)
@@ -428,7 +478,12 @@
 
         usage: cat target [target ...]
         """
-        for target in (yield self.getTargets(tokens)):
+        targets = (yield self.getTargets(tokens))
+
+        if not targets:
+            raise InsufficientArguments()
+
+        for target in targets:
             if hasattr(target, "text"):
                 text = (yield target.text())
                 self.terminal.write(text)
@@ -436,6 +491,40 @@
     complete_cat = CommandsBase.complete_files
 
 
+    @inlineCallbacks
+    def cmd_rm(self, tokens):
+        """
+        Remove target.
+
+        usage: rm target [target ...]
+        """
+        options, tokens = getopt(tokens, "", ["no-implicit"])
+
+        implicit = True
+
+        for option, value in options:
+            if option == "--no-implicit":
+                # Not in docstring; this is really dangerous.
+                implicit = False
+            else:
+                raise AssertionError("We shouldn't be here.")
+
+        targets = (yield self.getTargets(tokens))
+
+        if not targets:
+            raise InsufficientArguments()
+
+        for target in targets:
+            if hasattr(target, "delete"):
+                target.delete(implicit=implicit)
+            else:
+                self.terminal.write("Cannot delete read-only target: %s\n" % (target,))
+
+    cmd_rm.hidden = "Incomplete"
+
+    complete_rm = CommandsBase.complete_files
+
+
     #
     # Principal tools
     #
@@ -465,7 +554,7 @@
     @inlineCallbacks
     def cmd_print_principal(self, tokens):
         """
-        Print information about a principal
+        Print information about a principal.
 
         usage: print_principal uid
         """
@@ -490,6 +579,69 @@
 
 
     #
+    # Data purge tools
+    #
+
+    @inlineCallbacks
+    def cmd_purge_principals(self, tokens):
+        """
+        Purge data associated with the given principals.
+
+        usage: purge_principals uid [uid ...]
+        """
+        dryRun     = True
+        completely = False
+        doimplicit = True
+
+        directory = self.protocol.service.directory
+
+        uids = tuple(tokens)
+
+        error = False
+        for uid in uids:
+            record = directory.recordWithUID(uid)
+            if not record:
+                self.terminal.write("Unknown UID: %s\n" % (uid,))
+                error = True
+
+        if error:
+            self.terminal.write("Aborting.\n")
+            return
+
+        rootResource = getRootResource(
+            self.protocol.service.config,
+            self.protocol.service.store,
+        )
+
+        if dryRun:
+            toPurge = "to purge"
+        else:
+            toPurge = "purged"
+
+        total = 0
+        for uid in uids:
+            count, assignments = (yield purgeUID(
+                uid, directory, rootResource,
+                verbose    = False,
+                dryrun     = dryRun,
+                completely = completely,
+                doimplicit = doimplicit,
+            ))
+            total += count
+
+            self.terminal.write(
+                "%d events %s for UID %s.\n"
+                % (count, toPurge, uid)
+            )
+
+        self.terminal.write(
+            "%d total events %s.\n"
+            % (total, toPurge)
+        )
+
+    cmd_purge_principals.hidden = "incomplete"
+
+    #
     # Python prompt, for the win
     #
 
@@ -571,7 +723,7 @@
         if tokens:
             raise UnknownArguments(tokens)
 
-        raise NotImplementedError("")
+        raise NotImplementedError("Command not implemented")
 
     cmd_sql.hidden = "not implemented"
 
@@ -581,6 +733,30 @@
     #
 
     def cmd_raise(self, tokens):
+        """
+        Raises an exception.
+
+        usage: raise [message ...]
+        """
         raise RuntimeError(" ".join(tokens))
 
     cmd_raise.hidden = "test tool"
+
+    def cmd_reload(self, tokens):
+        """
+        Reloads code.
+
+        usage: reload
+        """
+        if tokens:
+            raise UnknownArguments(tokens)
+
+        import calendarserver.tools.shell.vfs
+        reload(calendarserver.tools.shell.vfs)
+
+        import calendarserver.tools.shell.directory
+        reload(calendarserver.tools.shell.directory)
+
+        self.protocol.reloadCommands()
+
+    cmd_reload.hidden = "test tool"

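The `wdFallback` flag added to `getTarget`/`getTargets` above changes the no-arguments behavior: commands like `ls` and `info` still fall back to the working directory, while commands that require explicit targets (`cat`, `rm`) now get an empty result and can raise `InsufficientArguments`. A toy sketch of that pattern, with a plain dict standing in for the VFS lookup:

```python
def get_targets(tokens, wd, wd_fallback=False):
    """Sketch of the getTargets() change above: with no tokens, fall
    back to the working directory only when the caller opts in, so
    argument-requiring commands can detect an empty target list."""
    if tokens:
        # Hypothetical lookup standing in for self.wd.locate(...)
        return [wd[t] for t in tokens]
    return [wd] if wd_fallback else []
```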
Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/terminal.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/terminal.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/terminal.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -35,6 +35,7 @@
 from shlex import shlex
 
 from twisted.python import log
+from twisted.python.failure import Failure
 from twisted.python.text import wordWrap
 from twisted.python.usage import Options, UsageError
 from twisted.internet.defer import Deferred
@@ -143,6 +144,15 @@
         self.activeCommand = None
         self.emulate = "emacs"
 
+    def reloadCommands(self):
+        # FIXME: doesn't work for alternative Commands classes passed
+        # to __init__.
+        self.terminal.write("Reloading commands class...\n")
+
+        import calendarserver.tools.shell.cmd
+        reload(calendarserver.tools.shell.cmd)
+        self.commands = calendarserver.tools.shell.cmd.Commands(self)
+
     #
     # Input handling
     #
@@ -178,42 +188,58 @@
         log.startLoggingWithObserver(observer)
 
     def handle_INT(self):
-        """
-        Handle ^C as an interrupt keystroke by resetting the current input
-        variables to their initial state.
-        """
-        self.pn = 0
-        self.lineBuffer = []
-        self.lineBufferIndex = 0
+        return self.resetInputLine()
 
-        self.terminal.nextLine()
-        self.terminal.write("KeyboardInterrupt")
-        self.terminal.nextLine()
-        self.exit()
-
     def handle_EOF(self):
         if self.lineBuffer:
             if self.emulate == "emacs":
                 self.handle_DELETE()
             else:
-                self.terminal.write('\a')
+                self.terminal.write("\a")
         else:
             self.handle_QUIT()
 
     def handle_FF(self):
         """
-        Handle a 'form feed' byte - generally used to request a screen
+        Handle a "form feed" byte - generally used to request a screen
         refresh/redraw.
         """
+        # FIXME: Clear screen != redraw screen.
+        return self.clearScreen()
+
+    def handle_QUIT(self):
+        return self.exit()
+
+    def handle_TAB(self):
+        return self.completeLine()
+
+    #
+    # Utilities
+    #
+
+    def clearScreen(self):
+        """
+        Clear the display.
+        """
         self.terminal.eraseDisplay()
         self.terminal.cursorHome()
         self.drawInputLine()
 
-    def handle_QUIT(self):
-        self.exit()
+    def resetInputLine(self):
+        """
+        Reset the current input variables to their initial state.
+        """
+        self.pn = 0
+        self.lineBuffer = []
+        self.lineBufferIndex = 0
+        self.terminal.nextLine()
+        self.drawInputLine()
 
     @inlineCallbacks
-    def handle_TAB(self):
+    def completeLine(self):
+        """
+        Perform auto-completion on the input line.
+        """
         # Tokenize the text before the cursor
         tokens = self.tokenize("".join(self.lineBuffer[:self.lineBufferIndex]))
 
@@ -232,19 +258,22 @@
             m = getattr(self.commands, "complete_%s" % (cmd,), None)
             if not m:
                 return
-            completions = tuple((yield m(tokens)))
-
+            try:
+                completions = tuple((yield m(tokens)))
+            except Exception, e:
+                self.handleFailure(Failure(e))
+                return
             log.msg("COMPLETIONS: %r" % (completions,))
         else:
             # Completing command name
             completions = tuple(self.commands.complete_commands(cmd))
 
         if len(completions) == 1:
-            for completion in completions:
-                break
-            for c in completion:
+            for c in completions.__iter__().next():
                 self.characterReceived(c, True)
-            self.characterReceived(" ", False)
+
+            # FIXME: Add a space only if we know we've fully completed the term.
+            #self.characterReceived(" ", False)
         else:
             self.terminal.nextLine()
             for completion in completions:
@@ -252,14 +281,24 @@
                 self.terminal.write("%s%s\n" % (word, completion))
             self.drawInputLine()
 
-    #
-    # Utilities
-    #
-
     def exit(self):
+        """
+        Exit.
+        """
         self.terminal.loseConnection()
         self.service.reactor.stop()
 
+    def handleFailure(self, f):
+        """
+        Handle a failure raised in the interpreter by printing a
+        traceback and resetting the input line.
+        """
+        if self.lineBuffer:
+            self.terminal.nextLine()
+        self.terminal.write("Error: %s !!!" % (f.value,))
+        if not f.check(NotImplementedError, NotFoundError):
+            log.msg(f.getTraceback())
+        self.resetInputLine()
 
     #
     # Command dispatch
@@ -282,16 +321,8 @@
                     f.trap(CommandUsageError)
                     self.terminal.write("%s\n" % (f.value,))
 
-                def handleException(f):
-                    self.terminal.write("Error: %s\n" % (f.value,))
-                    if not f.check(NotImplementedError, NotFoundError):
-                        log.msg("-"*80 + "\n")
-                        log.msg(f.getTraceback())
-                        log.msg("-"*80 + "\n")
-
                 def next(_):
                     self.activeCommand = None
-                    self.drawInputLine()
                     if self.inputLines:
                         line = self.inputLines.pop(0)
                         self.lineReceived(line)
@@ -304,7 +335,8 @@
                     # Add time to test callbacks
                     self.service.reactor.callLater(4, d.callback, None)
                 d.addErrback(handleUsageError)
-                d.addErrback(handleException)
+                d.addCallback(lambda _: self.drawInputLine())
+                d.addErrback(self.handleFailure)
                 d.addCallback(next)
             else:
                 self.terminal.write("Unknown command: %s\n" % (cmd,))
@@ -314,6 +346,10 @@
 
     @staticmethod
     def tokenize(line):
+        """
+        Tokenize input line.
+        @return: an iterable of tokens
+        """
         lexer = shlex(line)
         lexer.whitespace_split = True
 

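[Editorial note: the terminal.py hunk above ends with the newly documented tokenize() helper, which wraps the stdlib shlex lexer in whitespace-split mode. A minimal standalone sketch of that behavior, outside the shell class; the sample input line is hypothetical:]

```python
from shlex import shlex

def tokenize(line):
    # Mirror the shell's tokenizer: split only on whitespace, so
    # path-like and dash-prefixed arguments stay intact as single tokens.
    lexer = shlex(line)
    lexer.whitespace_split = True
    return list(lexer)

print(tokenize("ls /users/wsanchez --verbose"))
# → ['ls', '/users/wsanchez', '--verbose']
```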
Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/test/test_cmd.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/test/test_cmd.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/test/test_cmd.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -31,9 +31,12 @@
 
     @inlineCallbacks
     def test_getTargetNone(self):
-        target = (yield self.commands.getTarget([]))
+        target = (yield self.commands.getTarget([], wdFallback=True))
         self.assertEquals(target, self.commands.wd)
 
+        target = (yield self.commands.getTarget([]))
+        self.assertEquals(target, None)
+
     def test_getTargetMissing(self):
         self.assertFailure(self.commands.getTarget(["/foo"]), NotFoundError)
 
@@ -114,24 +117,70 @@
         self.assertEquals(c("h"), ["idden"])
         self.assertEquals(c("f"), [])
 
-    def test_completeFiles(self):
+    @inlineCallbacks
+    def _test_completeFiles(self, tests):
         protocol = ShellProtocol(None, commandsClass=SomeCommands)
         commands = protocol.commands
 
         def c(word):
-            return sorted(commands.complete_files(word))
+            # One token
+            d = commands.complete_files((word,))
+            d.addCallback(lambda c: sorted(c))
+            return d
 
-        raise NotImplementedError()
+        def d(word):
+            # Multiple tokens
+            d = commands.complete_files(("XYZZY", word))
+            d.addCallback(lambda c: sorted(c))
+            return d
 
-    test_completeFiles.todo = "Not implemented."
+        def e(word):
+            # No tokens
+            d = commands.complete_files(())
+            d.addCallback(lambda c: sorted(c))
+            return d
 
-    def test_listEntryToString(self):
-        raise NotImplementedError()
-        self.assertEquals(CommandsBase.listEntryToString(file, "stuff"), "")
+        for word, completions in tests:
+            if word is None:
+                self.assertEquals((yield e(word)), completions, "Completing %r" % (word,))
+            else:
+                self.assertEquals((yield c(word)), completions, "Completing %r" % (word,))
+                self.assertEquals((yield d(word)), completions, "Completing %r" % (word,))
 
-    test_listEntryToString.todo = "Not implemented"
+    def test_completeFilesLevelOne(self):
+        return self._test_completeFiles((
+            (None    , ["groups/", "locations/", "resources/", "uids/", "users/"]),
+            (""      , ["groups/", "locations/", "resources/", "uids/", "users/"]),
+            ("u"     , ["ids/", "sers/"]),
+            ("g"     , ["roups/"]),
+            ("gr"    , ["oups/"]),
+            ("groups", ["/"]),
+        ))
 
+    def test_completeFilesLevelOneSlash(self):
+        return self._test_completeFiles((
+            ("/"      , ["groups/", "locations/", "resources/", "uids/", "users/"]),
+            ("/u"     , ["ids/", "sers/"]),
+            ("/g"     , ["roups/"]),
+            ("/gr"    , ["oups/"]),
+            ("/groups", ["/"]),
+        ))
 
+    def test_completeFilesDirectory(self):
+        return self._test_completeFiles((
+            ("users/" , ["wsanchez", "admin"]), # FIXME: Look up users
+        ))
+
+    test_completeFilesDirectory.todo = "Doesn't work yet"
+
+    def test_completeFilesLevelTwo(self):
+        return self._test_completeFiles((
+            ("users/w" , ["sanchez"]), # FIXME: Look up users?
+        ))
+
+    test_completeFilesLevelTwo.todo = "Doesn't work yet"
+
+
 class SomeCommands(CommandsBase):
     def cmd_a(self, tokens):
         pass

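[Editorial note: the expected completions in the tests above follow a simple rule: each completion is the suffix left over after stripping the partially typed word from a matching name. A hypothetical helper capturing just that rule, with names taken from the fixtures above:]

```python
def complete_names(word, names):
    # Return the trailing portion of every candidate name that starts
    # with the partially typed word, sorted as the tests expect.
    return sorted(name[len(word):] for name in names if name.startswith(word))

folders = ["groups/", "locations/", "resources/", "uids/", "users/"]
print(complete_names("u", folders))       # → ['ids/', 'sers/']
print(complete_names("groups", folders))  # → ['/']
print(complete_names("", folders))        # every name matches
```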
Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/test/test_vfs.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/test/test_vfs.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/test/test_vfs.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -24,20 +24,20 @@
 
 class TestListEntry(twisted.trial.unittest.TestCase):
     def test_toString(self):
-        self.assertEquals(ListEntry(File  , "thingo"           ).toString(), "thingo" )
-        self.assertEquals(ListEntry(File  , "thingo", Foo="foo").toString(), "thingo" )
-        self.assertEquals(ListEntry(Folder, "thingo"           ).toString(), "thingo/")
-        self.assertEquals(ListEntry(Folder, "thingo", Foo="foo").toString(), "thingo/")
+        self.assertEquals(ListEntry(None, File  , "thingo"           ).toString(), "thingo" )
+        self.assertEquals(ListEntry(None, File  , "thingo", Foo="foo").toString(), "thingo" )
+        self.assertEquals(ListEntry(None, Folder, "thingo"           ).toString(), "thingo/")
+        self.assertEquals(ListEntry(None, Folder, "thingo", Foo="foo").toString(), "thingo/")
 
     def test_fieldNamesImplicit(self):
         # This test assumes File doesn't set list.fieldNames.
         assert not hasattr(File.list, "fieldNames")
 
-        self.assertEquals(set(ListEntry(File, "thingo").fieldNames), set(("Name",)))
+        self.assertEquals(set(ListEntry(File(None, ()), File, "thingo").fieldNames), set(("Name",)))
 
     def test_fieldNamesExplicit(self):
         def fieldNames(fileClass):
-            return ListEntry(fileClass, "thingo", Flavor="Coconut", Style="Hard")
+            return ListEntry(fileClass(None, ()), fileClass, "thingo", Flavor="Coconut", Style="Hard")
 
         # Full list
         class MyFile(File):
@@ -69,13 +69,13 @@
 
         # Name first, rest sorted by field name
         self.assertEquals(
-            tuple(ListEntry(File, "thingo", Flavor="Coconut", Style="Hard").toFields()),
+            tuple(ListEntry(File(None, ()), File, "thingo", Flavor="Coconut", Style="Hard").toFields()),
             ("thingo", "Coconut", "Hard")
         )
 
     def test_toFieldsExplicit(self):
         def fields(fileClass):
-            return tuple(ListEntry(fileClass, "thingo", Flavor="Coconut", Style="Hard").toFields())
+            return tuple(ListEntry(fileClass(None, ()), fileClass, "thingo", Flavor="Coconut", Style="Hard").toFields())
 
         # Full list
         class MyFile(File):

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/vfs.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/vfs.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/shell/vfs.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -19,6 +19,7 @@
 """
 
 __all__ = [
+    "ListEntry",
     "File",
     "Folder",
     "RootFolder",
@@ -55,7 +56,8 @@
     """
     Information about a C{File} as returned by C{File.list()}.
     """
-    def __init__(self, Class, Name, **fields):
+    def __init__(self, parent, Class, Name, **fields):
+        self.parent    = parent # The file object whose list() produced this entry
         self.fileClass = Class
         self.fileName  = Name
         self.fields    = fields
@@ -65,6 +67,22 @@
     def __str__(self):
         return self.toString()
 
+    def __repr__(self):
+        fields = self.fields.copy()
+        fields.pop("Name", None)
+
+        if fields:
+            fields = " %s" % (fields,)
+        else:
+            fields = ""
+
+        return "<%s(%s): %r%s>" % (
+            self.__class__.__name__,
+            self.fileClass.__name__,
+            self.fileName,
+            fields,
+        )
+
     def isFolder(self):
         return issubclass(self.fileClass, Folder)
 
@@ -77,18 +95,24 @@
     @property
     def fieldNames(self):
         if not hasattr(self, "_fieldNames"):
-            if hasattr(self.fileClass.list, "fieldNames"):
-                if "Name" in self.fileClass.list.fieldNames:
-                    self._fieldNames = tuple(self.fileClass.list.fieldNames)
+            if hasattr(self.parent.list, "fieldNames"):
+                if "Name" in self.parent.list.fieldNames:
+                    self._fieldNames = tuple(self.parent.list.fieldNames)
                 else:
-                    self._fieldNames = ("Name",) + tuple(self.fileClass.list.fieldNames)
+                    self._fieldNames = ("Name",) + tuple(self.parent.list.fieldNames)
             else:
                 self._fieldNames = ["Name"] + sorted(n for n in self.fields if n != "Name")
 
         return self._fieldNames
 
     def toFields(self):
-        return tuple(self.fields[fieldName] for fieldName in self.fieldNames)
+        try:
+            return tuple(self.fields[fieldName] for fieldName in self.fieldNames)
+        except KeyError, e:
+            raise AssertionError(
+                "Field %s is not in %r, defined by %s"
+                % (e, self.fields.keys(), self.parent.__class__.__name__)
+            )
 
 
 class File(object):
@@ -118,7 +142,7 @@
 
     def list(self):
         return succeed((
-            ListEntry(self.__class__, self.path[-1]),
+            ListEntry(self, self.__class__, self.path[-1]),
         ))
 
 
@@ -180,12 +204,13 @@
         raise NotFoundError("Folder %r has no child %r" % (str(self), name))
 
     def list(self):
-        result = set()
+        result = {}
         for name in self._children:
-            result.add(ListEntry(self._children[name].__class__, name))
+            result[name] = ListEntry(self, self._children[name].__class__, name)
         for name in self._childClasses:
-            result.add(ListEntry(self._childClasses[name], name))
-        return succeed(result)
+            if name not in result:
+                result[name] = ListEntry(self, self._childClasses[name], name)
+        return succeed(result.itervalues())
 
 
 class RootFolder(Folder):
@@ -238,7 +263,7 @@
         # FIXME: Add directory info (eg. name) to listing
 
         for txn, home in (yield self.service.store.eachCalendarHome()):
-            result.add(ListEntry(PrincipalHomeFolder, home.uid()))
+            result.add(ListEntry(self, PrincipalHomeFolder, home.uid()))
 
         returnValue(result)
 
@@ -263,14 +288,25 @@
             record=record
         )
 
-    @inlineCallbacks
     def list(self):
-        result = set()
+        names = set()
 
-        # FIXME ...?
-        yield 1
+        for record in self.service.directory.listRecords(self.recordType):
+            for shortName in record.shortNames:
+                if shortName in names:
+                    continue
+                names.add(shortName)
+                yield ListEntry(
+                    self,
+                    PrincipalHomeFolder,
+                    shortName,
+                    **{
+                        "UID": record.uid,
+                        "Full Name": record.fullName,
+                    }
+                )
 
-        returnValue(result)
+    list.fieldNames = ("UID", "Full Name")
 
 
 class UsersFolder(RecordFolder):
@@ -426,7 +462,7 @@
     @inlineCallbacks
     def list(self):
         calendars = (yield self.home.calendars())
-        returnValue((ListEntry(CalendarFolder, c.name()) for c in calendars))
+        returnValue((ListEntry(self, CalendarFolder, c.name()) for c in calendars))
 
     @inlineCallbacks
     def describe(self):
@@ -530,7 +566,17 @@
 
         returnValue("\n".join(description))
 
+    def delete(self, implicit=True):
+        calendar = self.calendarObject.calendar()
 
+        if implicit:
+            # We need data store-level scheduling support to implement
+            # this.
+            raise NotImplementedError("Delete not implemented.")
+        else:
+            calendar.removeCalendarObjectWithUID(self.uid)
+
+
 class CalendarObject(File):
     """
     Calendar object.
@@ -567,7 +613,7 @@
     @inlineCallbacks
     def list(self):
         (yield self.lookup())
-        returnValue((ListEntry(CalendarObject, self.uid, {
+        returnValue((ListEntry(self, CalendarObject, self.uid, **{
             "Component Type": self.componentType,
             "Summary": self.summary.replace("\n", " "),
         }),))

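[Editorial note: the Folder.list() change above switches from a set to a dict so that instantiated children shadow uninstantiated child classes of the same name. The shadowing logic in isolation, using stand-in string entries instead of real ListEntry objects:]

```python
def merged_listing(children, child_classes):
    # Concrete children win; a class-level entry is added only when no
    # instantiated child of the same name already exists.
    result = {}
    for name in children:
        result[name] = "child:" + name
    for name in child_classes:
        if name not in result:
            result[name] = "class:" + name
    return sorted(result.values())

print(merged_listing({"users": object()}, {"users": object, "groups": object}))
# → ['child:users', 'class:groups']
```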
Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/test/test_calverify.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/test/test_calverify.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/calendarserver/tools/test/test_calverify.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -22,7 +22,7 @@
 from calendarserver.tap.util import getRootResource
 from calendarserver.tools.calverify import CalVerifyService
 from twisted.internet import reactor
-from twisted.internet.defer import inlineCallbacks
+from twisted.internet.defer import inlineCallbacks, returnValue
 from twisted.trial import unittest
 from twistedcaldav.config import config
 from txdav.caldav.datastore import util
@@ -223,6 +223,116 @@
 """.replace("\n", "\r\n")
 
 
+# Non-base64 Organizer and Attendee parameter
+BAD7_ICS = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//Apple Inc.//iCal 4.0.1//EN
+CALSCALE:GREGORIAN
+BEGIN:VEVENT
+CREATED:20100303T181216Z
+UID:BAD7
+DTEND:20000307T151500Z
+TRANSP:OPAQUE
+SUMMARY:Ancient event
+DTSTART:20000307T111500Z
+DTSTAMP:20100303T181220Z
+ORGANIZER;CALENDARSERVER-OLD-CUA="http://demo.com:8008/principals/__uids__/
+ D46F3D71-04B7-43C2-A7B6-6F92F92E61D0":urn:uuid:D46F3D71-04B7-43C2-A7B6-6F9
+ 2F92E61D0
+ATTENDEE;CALENDARSERVER-OLD-CUA="http://demo.com:8008/principals/__uids__/D
+ 46F3D71-04B7-43C2-A7B6-6F92F92E61D0":urn:uuid:D46F3D71-04B7-43C2-A7B6-6F92
+ F92E61D0
+ATTENDEE:mailto:example2@example.com
+SEQUENCE:2
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n")
+
+
+# Base64 Organizer and Attendee parameter
+OK8_ICS = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//Apple Inc.//iCal 4.0.1//EN
+CALSCALE:GREGORIAN
+BEGIN:VEVENT
+CREATED:20100303T181216Z
+UID:OK8
+DTEND:20000307T151500Z
+TRANSP:OPAQUE
+SUMMARY:Ancient event
+DTSTART:20000307T111500Z
+DTSTAMP:20100303T181220Z
+ORGANIZER;CALENDARSERVER-OLD-CUA="base64-aHR0cDovL2RlbW8uY29tOjgwMDgvcHJpbm
+ NpcGFscy9fX3VpZHNfXy9ENDZGM0Q3MS0wNEI3LTQzQzItQTdCNi02RjkyRjkyRTYxRDA=":
+ urn:uuid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
+ATTENDEE;CALENDARSERVER-OLD-CUA="base64-aHR0cDovL2RlbW8uY29tOjgwMDgvcHJpbmN
+ pcGFscy9fX3VpZHNfXy9ENDZGM0Q3MS0wNEI3LTQzQzItQTdCNi02RjkyRjkyRTYxRDA=":u
+ rn:uuid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
+ATTENDEE:mailto:example2@example.com
+SEQUENCE:2
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n")
+
+BAD9_ICS =                 """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VTIMEZONE
+TZID:US/Pacific
+BEGIN:STANDARD
+DTSTART:19621028T020000
+RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYDAY=-1SU;BYMONTH=10
+TZNAME:PST
+TZOFFSETFROM:-0700
+TZOFFSETTO:-0800
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19870405T020000
+RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYDAY=1SU;BYMONTH=4
+TZNAME:PDT
+TZOFFSETFROM:-0800
+TZOFFSETTO:-0700
+END:DAYLIGHT
+BEGIN:DAYLIGHT
+DTSTART:20070311T020000
+RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
+TZNAME:PDT
+TZOFFSETFROM:-0800
+TZOFFSETTO:-0700
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:20071104T020000
+RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
+TZNAME:PST
+TZOFFSETFROM:-0700
+TZOFFSETTO:-0800
+END:STANDARD
+END:VTIMEZONE
+BEGIN:VEVENT
+UID:BAD9
+DTSTART;TZID=US/Pacific:20111103T150000
+DTEND;TZID=US/Pacific:20111103T160000
+ATTENDEE;CALENDARSERVER-OLD-CUA="//example.com\\:8443/principals/users/cyrus
+ /;CN=\\"Cyrus Daboo\\";CUTYPE=INDIVIDUAL;EMAIL=\\"cyrus@example.com\\";PARTSTAT=ACC
+ EPTED:urn:uuid:7B2636C7-07F6-4475-924B-2854107F7A22";CN=Cyrus Daboo;EMAIL=c
+ yrus@example.com;RSVP=TRUE:urn:uuid:7B2636C7-07F6-4475-924B-2854107F7A22
+ATTENDEE;CN=John Smith;CUTYPE=INDIVIDUAL;EMAIL=smith@example.com;PARTSTAT=AC
+ CEPTED;ROLE=REQ-PARTICIPANT:urn:uuid:E975EB3D-C412-411B-A655-C3BE4949788C
+CREATED:20090730T214912Z
+DTSTAMP:20120421T182823Z
+ORGANIZER;CALENDARSERVER-OLD-CUA="//example.com\\:8443/principals/users/cyru
+ s/;CN=\\"Cyrus Daboo\\";EMAIL=\\"cyrus@example.com\\":urn:uuid:7B2636C7-07F6-4475-9
+ 24B-2854107F7A22";CN=Cyrus Daboo;EMAIL=cyrus@example.com:urn:uuid:7B2636C7-
+ 07F6-4475-924B-2854107F7A22
+RRULE:FREQ=WEEKLY;COUNT=400
+SEQUENCE:18
+SUMMARY:1-on-1
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n")
+
+
 class CalVerifyTests(CommonCommonTests, unittest.TestCase):
     """
     Tests for deleting events older than a given date
@@ -239,13 +349,16 @@
     requirements = {
         "home1" : {
             "calendar1" : {
-                "ok.ics" : (OK_ICS, metadata,),
+                "ok.ics"   : (OK_ICS, metadata,),
                 "bad1.ics" : (BAD1_ICS, metadata,),
                 "bad2.ics" : (BAD2_ICS, metadata,),
                 "bad3.ics" : (BAD3_ICS, metadata,),
                 "bad4.ics" : (BAD4_ICS, metadata,),
                 "bad5.ics" : (BAD5_ICS, metadata,),
                 "bad6.ics" : (BAD6_ICS, metadata,),
+                "bad7.ics" : (BAD7_ICS, metadata,),
+                "ok8.ics"  : (OK8_ICS, metadata,),
+                "bad9.ics" : (BAD9_ICS, metadata,),
             }
         },
     }
@@ -287,6 +400,26 @@
         return self._sqlCalendarStore
 
 
+    @inlineCallbacks
+    def homeUnderTest(self, txn=None):
+        """
+        Get the calendar home specified by C{requirements['home1']}.
+        """
+        if txn is None:
+            txn = self.transactionUnderTest()
+        returnValue((yield txn.calendarHomeWithUID("home1")))
+
+
+    @inlineCallbacks
+    def calendarUnderTest(self, txn=None):
+        """
+        Get the calendar specified by C{requirements['home1']['calendar1']}.
+        """
+        returnValue((yield
+            (yield self.homeUnderTest(txn)).calendarWithName("calendar1"))
+        )
+
+
     def verifyResultsByUID(self, results, expected):
         reported = set([(home, uid) for home, uid, _ignore_resid, _ignore_reason in results])
         self.assertEqual(reported, expected)
@@ -296,18 +429,24 @@
     def test_scanBadData(self):
         """
         CalVerifyService.doScan without fix. Make sure it detects common errors.
+        Make sure sync-token is not changed.
         """
 
+        sync_token_old = (yield (yield self.calendarUnderTest()).syncToken())
+        self.commit()
+
         options = {
-            "ical":None,
+            "ical":True,
+            "nobase64":False,
             "verbose":False,
+            "uid":"",
             "uuid":"",
         }
         output = StringIO()
         calverify = CalVerifyService(self._sqlCalendarStore, options, output, reactor, config)
         yield calverify.doScan(True, False, False)
 
-        self.assertEqual(calverify.results["Number of events to process"], 7)
+        self.assertEqual(calverify.results["Number of events to process"], 10)
         self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
             ("home1", "BAD1",),
             ("home1", "BAD2",),
@@ -315,18 +454,29 @@
             ("home1", "BAD4",),
             ("home1", "BAD5",),
             ("home1", "BAD6",),
+            ("home1", "BAD7",),
+            ("home1", "BAD9",),
         )))
 
+        sync_token_new = (yield (yield self.calendarUnderTest()).syncToken())
+        self.assertEqual(sync_token_old, sync_token_new)
 
+
     @inlineCallbacks
     def test_fixBadData(self):
         """
-        CalVerifyService.doScan without fix. Make sure it detects and fixes as much as it can.
+        CalVerifyService.doScan with fix. Make sure it detects and fixes as much as it can.
+        Make sure sync-token is changed.
         """
 
+        sync_token_old = (yield (yield self.calendarUnderTest()).syncToken())
+        self.commit()
+
         options = {
-            "ical":None,
+            "ical":True,
+            "nobase64":False,
             "verbose":False,
+            "uid":"",
             "uuid":"",
         }
         output = StringIO()
@@ -337,7 +487,7 @@
         calverify = CalVerifyService(self._sqlCalendarStore, options, output, reactor, config)
         yield calverify.doScan(True, False, True)
 
-        self.assertEqual(calverify.results["Number of events to process"], 7)
+        self.assertEqual(calverify.results["Number of events to process"], 10)
         self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
             ("home1", "BAD1",),
             ("home1", "BAD2",),
@@ -345,13 +495,303 @@
             ("home1", "BAD4",),
             ("home1", "BAD5",),
             ("home1", "BAD6",),
+            ("home1", "BAD7",),
+            ("home1", "BAD9",),
         )))
 
         # Do scan
         calverify = CalVerifyService(self._sqlCalendarStore, options, output, reactor, config)
         yield calverify.doScan(True, False, False)
 
-        self.assertEqual(calverify.results["Number of events to process"], 7)
+        self.assertEqual(calverify.results["Number of events to process"], 10)
         self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
             ("home1", "BAD1",),
         )))
+
+        sync_token_new = (yield (yield self.calendarUnderTest()).syncToken())
+        self.assertNotEqual(sync_token_old, sync_token_new)
+
+    @inlineCallbacks
+    def test_scanBadCuaOnly(self):
+        """
+        CalVerifyService.doScan without fix for CALENDARSERVER-OLD-CUA only. Make sure it detects
+        and fixes as much as it can. Make sure sync-token is not changed.
+        """
+
+        sync_token_old = (yield (yield self.calendarUnderTest()).syncToken())
+        self.commit()
+
+        options = {
+            "ical":False,
+            "badcua":True,
+            "nobase64":False,
+            "verbose":False,
+            "uid":"",
+            "uuid":"",
+        }
+        output = StringIO()
+        calverify = CalVerifyService(self._sqlCalendarStore, options, output, reactor, config)
+        yield calverify.doScan(True, False, False)
+
+        self.assertEqual(calverify.results["Number of events to process"], 10)
+        self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
+            ("home1", "BAD4",),
+            ("home1", "BAD5",),
+            ("home1", "BAD6",),
+            ("home1", "BAD7",),
+            ("home1", "BAD9",),
+        )))
+
+        sync_token_new = (yield (yield self.calendarUnderTest()).syncToken())
+        self.assertEqual(sync_token_old, sync_token_new)
+
+    @inlineCallbacks
+    def test_fixBadCuaOnly(self):
+        """
+        CalVerifyService.doScan with fix for CALENDARSERVER-OLD-CUA only. Make sure it detects
+        and fixes as much as it can. Make sure sync-token is changed.
+        """
+
+        sync_token_old = (yield (yield self.calendarUnderTest()).syncToken())
+        self.commit()
+
+        options = {
+            "ical":False,
+            "badcua":True,
+            "nobase64":False,
+            "verbose":False,
+            "uid":"",
+            "uuid":"",
+        }
+        output = StringIO()
+        
+        # Do fix
+        self.patch(config.Scheduling.Options, "PrincipalHostAliases", "demo.com")
+        self.patch(config, "HTTPPort", 8008)
+        calverify = CalVerifyService(self._sqlCalendarStore, options, output, reactor, config)
+        yield calverify.doScan(True, False, True)
+
+        self.assertEqual(calverify.results["Number of events to process"], 10)
+        self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
+            ("home1", "BAD4",),
+            ("home1", "BAD5",),
+            ("home1", "BAD6",),
+            ("home1", "BAD7",),
+            ("home1", "BAD9",),
+        )))
+
+        # Do scan
+        calverify = CalVerifyService(self._sqlCalendarStore, options, output, reactor, config)
+        yield calverify.doScan(True, False, False)
+
+        self.assertEqual(calverify.results["Number of events to process"], 10)
+        self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set((
+        )))
+
+        sync_token_new = (yield (yield self.calendarUnderTest()).syncToken())
+        self.assertNotEqual(sync_token_old, sync_token_new)
+
+    def test_fixBadCuaLines(self):
+        """
+        CalVerifyService.fixBadOldCuaLines. Make sure it applies correct fix.
+        """
+
+        data = (
+            (
+                """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+METHOD:REQUEST
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VTIMEZONE
+TZID:US/Pacific
+BEGIN:STANDARD
+DTSTART:19621028T020000
+RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYDAY=-1SU;BYMONTH=10
+TZNAME:PST
+TZOFFSETFROM:-0700
+TZOFFSETTO:-0800
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19870405T020000
+RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYDAY=1SU;BYMONTH=4
+TZNAME:PDT
+TZOFFSETFROM:-0800
+TZOFFSETTO:-0700
+END:DAYLIGHT
+BEGIN:DAYLIGHT
+DTSTART:20070311T020000
+RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
+TZNAME:PDT
+TZOFFSETFROM:-0800
+TZOFFSETTO:-0700
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:20071104T020000
+RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
+TZNAME:PST
+TZOFFSETFROM:-0700
+TZOFFSETTO:-0800
+END:STANDARD
+END:VTIMEZONE
+BEGIN:VEVENT
+UID:32956D5C-579F-46FD-BAE3-4A6C354B8CA3
+DTSTART;TZID=US/Pacific:20111103T150000
+DTEND;TZID=US/Pacific:20111103T160000
+ATTENDEE;CALENDARSERVER-OLD-CUA="//example.com\\:8443/principals/users/cyrus
+ /;CN="Cyrus Daboo";CUTYPE=INDIVIDUAL;EMAIL="cyrus@example.com";PARTSTAT=ACC
+ EPTED:urn:uuid:7B2636C7-07F6-4475-924B-2854107F7A22";CN=Cyrus Daboo;EMAIL=c
+ yrus@example.com;RSVP=TRUE:urn:uuid:7B2636C7-07F6-4475-924B-2854107F7A22
+ATTENDEE;CN=John Smith;CUTYPE=INDIVIDUAL;EMAIL=smith@example.com;PARTSTAT=AC
+ CEPTED;ROLE=REQ-PARTICIPANT:urn:uuid:E975EB3D-C412-411B-A655-C3BE4949788C
+CREATED:20090730T214912Z
+DTSTAMP:20120421T182823Z
+ORGANIZER;CALENDARSERVER-OLD-CUA="//example.com\\:8443/principals/users/cyru
+ s/;CN="Cyrus Daboo";EMAIL="cyrus@example.com":urn:uuid:7B2636C7-07F6-4475-9
+ 24B-2854107F7A22";CN=Cyrus Daboo;EMAIL=cyrus@example.com:urn:uuid:7B2636C7-
+ 07F6-4475-924B-2854107F7A22
+RRULE:FREQ=WEEKLY;COUNT=400
+SEQUENCE:18
+SUMMARY:1-on-1
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n"),
+                """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+METHOD:REQUEST
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VTIMEZONE
+TZID:US/Pacific
+BEGIN:STANDARD
+DTSTART:19621028T020000
+RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYDAY=-1SU;BYMONTH=10
+TZNAME:PST
+TZOFFSETFROM:-0700
+TZOFFSETTO:-0800
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19870405T020000
+RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYDAY=1SU;BYMONTH=4
+TZNAME:PDT
+TZOFFSETFROM:-0800
+TZOFFSETTO:-0700
+END:DAYLIGHT
+BEGIN:DAYLIGHT
+DTSTART:20070311T020000
+RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
+TZNAME:PDT
+TZOFFSETFROM:-0800
+TZOFFSETTO:-0700
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:20071104T020000
+RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
+TZNAME:PST
+TZOFFSETFROM:-0700
+TZOFFSETTO:-0800
+END:STANDARD
+END:VTIMEZONE
+BEGIN:VEVENT
+UID:32956D5C-579F-46FD-BAE3-4A6C354B8CA3
+DTSTART;TZID=US/Pacific:20111103T150000
+DTEND;TZID=US/Pacific:20111103T160000
+ATTENDEE;CALENDARSERVER-OLD-CUA="https://example.com:8443/principals/users/c
+ yrus/";CN=Cyrus Daboo;EMAIL=cyrus@example.com;RSVP=TRUE:urn:uuid:7B2636C7-0
+ 7F6-4475-924B-2854107F7A22
+ATTENDEE;CN=John Smith;CUTYPE=INDIVIDUAL;EMAIL=smith@example.com;PARTSTAT=AC
+ CEPTED;ROLE=REQ-PARTICIPANT:urn:uuid:E975EB3D-C412-411B-A655-C3BE4949788C
+CREATED:20090730T214912Z
+DTSTAMP:20120421T182823Z
+ORGANIZER;CALENDARSERVER-OLD-CUA="https://example.com:8443/principals/users/
+ cyrus/";CN=Cyrus Daboo;EMAIL=cyrus@example.com:urn:uuid:7B2636C7-07F6-4475-
+ 924B-2854107F7A22
+RRULE:FREQ=WEEKLY;COUNT=400
+SEQUENCE:18
+SUMMARY:1-on-1
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n"),
+                """BEGIN:VCALENDAR
+VERSION:2.0
+CALSCALE:GREGORIAN
+METHOD:REQUEST
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VTIMEZONE
+TZID:US/Pacific
+BEGIN:STANDARD
+DTSTART:19621028T020000
+RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYDAY=-1SU;BYMONTH=10
+TZNAME:PST
+TZOFFSETFROM:-0700
+TZOFFSETTO:-0800
+END:STANDARD
+BEGIN:DAYLIGHT
+DTSTART:19870405T020000
+RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYDAY=1SU;BYMONTH=4
+TZNAME:PDT
+TZOFFSETFROM:-0800
+TZOFFSETTO:-0700
+END:DAYLIGHT
+BEGIN:DAYLIGHT
+DTSTART:20070311T020000
+RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
+TZNAME:PDT
+TZOFFSETFROM:-0800
+TZOFFSETTO:-0700
+END:DAYLIGHT
+BEGIN:STANDARD
+DTSTART:20071104T020000
+RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
+TZNAME:PST
+TZOFFSETFROM:-0700
+TZOFFSETTO:-0800
+END:STANDARD
+END:VTIMEZONE
+BEGIN:VEVENT
+UID:32956D5C-579F-46FD-BAE3-4A6C354B8CA3
+DTSTART;TZID=US/Pacific:20111103T150000
+DTEND;TZID=US/Pacific:20111103T160000
+ATTENDEE;CALENDARSERVER-OLD-CUA=base64-aHR0cHM6Ly9leGFtcGxlLmNvbTo4NDQzL3Bya
+ W5jaXBhbHMvdXNlcnMvY3lydXMv;CN=Cyrus Daboo;EMAIL=cyrus@example.com;RSVP=TRU
+ E:urn:uuid:7B2636C7-07F6-4475-924B-2854107F7A22
+ATTENDEE;CN=John Smith;CUTYPE=INDIVIDUAL;EMAIL=smith@example.com;PARTSTAT=AC
+ CEPTED;ROLE=REQ-PARTICIPANT:urn:uuid:E975EB3D-C412-411B-A655-C3BE4949788C
+CREATED:20090730T214912Z
+DTSTAMP:20120421T182823Z
+ORGANIZER;CALENDARSERVER-OLD-CUA=base64-aHR0cHM6Ly9leGFtcGxlLmNvbTo4NDQzL3By
+ aW5jaXBhbHMvdXNlcnMvY3lydXMv;CN=Cyrus Daboo;EMAIL=cyrus@example.com:urn:uui
+ d:7B2636C7-07F6-4475-924B-2854107F7A22
+RRULE:FREQ=WEEKLY;COUNT=400
+SEQUENCE:18
+SUMMARY:1-on-1
+END:VEVENT
+END:VCALENDAR
+""".replace("\n", "\r\n"),
+            ),
+        )
+        
+        optionsNo64 = {
+            "ical":True,
+            "nobase64":True,
+            "verbose":False,
+            "uid":"",
+            "uuid":"",
+        }
+        calverifyNo64 = CalVerifyService(self._sqlCalendarStore, optionsNo64, StringIO(), reactor, config)
+
+        options64 = {
+            "ical":True,
+            "nobase64":False,
+            "verbose":False,
+            "uid":"",
+            "uuid":"",
+        }
+        calverify64 = CalVerifyService(self._sqlCalendarStore, options64, StringIO(), reactor, config)
+
+        for bad, oknobase64, okbase64 in data:
+            bad = bad.replace("\r\n ", "")
+            oknobase64 = oknobase64.replace("\r\n ", "")
+            okbase64 = okbase64.replace("\r\n ", "")
+            self.assertEqual(calverifyNo64.fixBadOldCuaLines(bad), oknobase64)
+            self.assertEqual(calverify64.fixBadOldCuaLines(bad), okbase64)

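[Editorial note: the base64-prefixed CALENDARSERVER-OLD-CUA values in the fixtures above are the plain calendar user address URLs, base64-encoded and tagged with a "base64-" prefix. A sketch of that encoding; the helper name is hypothetical, and the URL and expected value are taken from the fixture data above:]

```python
import base64

def encode_old_cua(url):
    # Encode a calendar user address the way the nobase64=False
    # fixtures expect: base64 of the URL, with a "base64-" prefix.
    return "base64-" + base64.b64encode(url.encode("ascii")).decode("ascii")

print(encode_old_cua("https://example.com:8443/principals/users/cyrus/"))
# → base64-aHR0cHM6Ly9leGFtcGxlLmNvbTo4NDQzL3ByaW5jaXBhbHMvdXNlcnMvY3lydXMv
```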
Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/conf/caldavd-test.plist
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/conf/caldavd-test.plist	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/conf/caldavd-test.plist	2012-05-02 18:54:12 UTC (rev 9221)
@@ -32,7 +32,7 @@
 
     <!-- Network host name [empty = system host name] -->
     <key>ServerHostName</key>
-    <string></string> <!-- The hostname clients use when connecting -->
+    <string>localhost</string> <!-- The hostname clients use when connecting -->
 
     <!-- Enable Calendars -->
     <key>EnableCalDAV</key>
@@ -250,7 +250,7 @@
           <key>base</key>
           <string>dc=example,dc=com</string>
           <key>guidAttr</key>
-          <string>apple-generateduid</string>
+          <string>entryUUID</string>
           <key>users</key>
           <dict>
             <key>rdn</key>
@@ -634,6 +634,21 @@
 
       <key>Services</key>
       <dict>
+
+        <key>AMPNotifier</key>
+        <dict>
+          <key>Service</key>
+          <string>calendarserver.push.amppush.AMPPushNotifierService</string>
+          <key>Enabled</key>
+          <true/>
+          <key>Port</key>
+          <integer>62311</integer>
+          <key>EnableStaggering</key>
+          <false/>
+          <key>StaggerSeconds</key>
+          <integer>3</integer>
+        </dict>
+
         <key>SimpleLineNotifier</key>
         <dict>
           <!-- Simple line notification service (for testing) -->

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/migration/calendarpromotion.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/migration/calendarpromotion.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/migration/calendarpromotion.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -29,14 +29,19 @@
     try:
         # Create calendar ServerRoot
         os.mkdir(CALENDAR_SERVER_ROOT)
+    except OSError:
+        # Already exists
+        pass
 
-        # Copy configuration
-        shutil.copytree(SRC_CONFIG_DIR, DEST_CONFIG_DIR)
+    try:
+        # Create calendar ConfigRoot
+        os.mkdir(DEST_CONFIG_DIR)
     except OSError:
         # Already exists
         pass
 
     plistPath = os.path.join(DEST_CONFIG_DIR, CALDAVD_PLIST)
+
     if os.path.exists(plistPath):
         try:
             plistData = readPlist(plistPath)
@@ -56,6 +61,11 @@
         except Exception, e:
             print "Unable to disable services in %s: %s" % (plistPath, e)
 
+    else:
+        # Copy configuration
+        srcPlistPath = os.path.join(SRC_CONFIG_DIR, CALDAVD_PLIST)
+        shutil.copy(srcPlistPath, DEST_CONFIG_DIR)
+
     # Create log directory
     try:
         os.mkdir(LOG_DIR, 0755)
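The promotion change above replaces a wholesale `shutil.copytree` with idempotent `os.mkdir` calls plus copying only the plist, and only when one is not already present. A rough standalone sketch of that create-if-missing / copy-if-absent pattern (paths and names are placeholders, not the script's real constants):

```python
import os
import shutil
import tempfile

def ensure_dir(path):
    # os.mkdir is not idempotent, so swallow "already exists",
    # as the promotion script does.
    try:
        os.mkdir(path)
    except OSError:
        pass

def promote_config(src_dir, dest_dir, name="caldavd.plist"):
    ensure_dir(dest_dir)
    dest = os.path.join(dest_dir, name)
    if not os.path.exists(dest):
        # Only seed the default config; never clobber an edited one.
        shutil.copy(os.path.join(src_dir, name), dest_dir)
    return dest

# Demo under a throwaway temp root.
root = tempfile.mkdtemp()
src = os.path.join(root, "src")
os.mkdir(src)
open(os.path.join(src, "caldavd.plist"), "w").write("<plist/>")
dest = promote_config(src, os.path.join(root, "dest"))
assert os.path.exists(dest)
promote_config(src, os.path.join(root, "dest"))  # second run is a no-op
```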

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/display-calendar-events.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/display-calendar-events.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/display-calendar-events.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -14,15 +14,15 @@
 # limitations under the License.
 ##
 
+import eventkitframework as EventKit
 from Cocoa import NSDate
-from CalendarStore import CalCalendarStore
 
-store = CalCalendarStore.defaultCalendarStore()
-calendars = store.calendars()
+store = EventKit.EKEventStore.alloc().init()
+calendars = store.calendarsForEntityType_(0)
 print calendars
 raise SystemExit
 
-predicate = CalCalendarStore.eventPredicateWithStartDate_endDate_calendars_(
-    NSDate.date(), NSDate.distantFuture(),
-    [calendars[2]])
-print store.eventsWithPredicate_(predicate)
+predicate = store.predicateForEventsWithStartDate_endDate_calendars_(
+     NSDate.date(), NSDate.distantFuture(),
+     [calendars[2]])
+print store.eventsMatchingPredicate_(predicate)

Copied: CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/eventkitframework.py (from rev 9220, CalendarServer/trunk/contrib/performance/eventkitframework.py)
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/eventkitframework.py	                        (rev 0)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/eventkitframework.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -0,0 +1,8 @@
+import objc as _objc
+
+__bundle__ = _objc.initFrameworkWrapper("EventKit",
+    frameworkIdentifier="com.apple.EventKit",
+    frameworkPath=_objc.pathForFramework(
+    "/System/Library/Frameworks/EventKit.framework"),
+    globals=globals())
+

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/loadtest/__init__.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/loadtest/__init__.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/loadtest/__init__.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -0,0 +1,19 @@
+##
+# Copyright (c) 2012 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+"""
+Load-testing tool.
+"""

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/loadtest/config.plist
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/loadtest/config.plist	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/loadtest/config.plist	2012-05-02 18:54:12 UTC (rev 9221)
@@ -122,6 +122,13 @@
 						advertised. -->
 					<key>supportPush</key>
 					<false />
+
+					<key>supportAmpPush</key>
+					<true/>
+					<key>ampPushHost</key>
+					<string>localhost</string>
+					<key>ampPushPort</key>
+					<integer>62311</integer>
 				</dict>
 
 				<!-- The profiles define certain types of user behavior on top of the 

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/loadtest/ical.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/loadtest/ical.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/loadtest/ical.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -44,6 +44,7 @@
 from caldavclientlibrary.protocol.caldav.definitions import csxml
 
 from calendarserver.tools.notifications import PubSubClientFactory
+from calendarserver.push.amppush import subscribeToIDs
 
 from contrib.performance.httpclient import StringProducer, readBody
 from contrib.performance.httpauth import AuthHandlerAgent
@@ -263,7 +264,8 @@
 
     email = None
 
-    def __init__(self, reactor, root, record, auth, calendarHomePollInterval=None, supportPush=True):
+    def __init__(self, reactor, root, record, auth, calendarHomePollInterval=None, supportPush=True,
+        supportAmpPush=True, ampPushHost="localhost", ampPushPort=62311):
         
         self._client_id = str(uuid4())
 
@@ -277,7 +279,11 @@
         self.calendarHomePollInterval = calendarHomePollInterval
 
         self.supportPush = supportPush
-        
+
+        self.supportAmpPush = supportAmpPush
+        self.ampPushHost = ampPushHost
+        self.ampPushPort = ampPushPort
+
         self.supportSync = self._SYNC_REPORT
 
         # Keep track of the calendars on this account, keys are
@@ -298,6 +304,8 @@
         # values.
         self.xmpp = {}
 
+        self.ampPushKeys = {}
+
         # Keep track of push factories so we can unsubscribe at shutdown
         self._pushFactories = []
 
@@ -540,7 +548,16 @@
 
             if href == calendarHome:
                 text = results[href].getTextProperties()
+
                 try:
+                    pushkey = text[csxml.pushkey]
+                except KeyError:
+                    pass
+                else:
+                    if pushkey:
+                        self.ampPushKeys[href] = pushkey
+
+                try:
                     server = text[csxml.xmpp_server]
                     uri = text[csxml.xmpp_uri]
                     pushkey = text[csxml.pushkey]
@@ -918,7 +935,24 @@
         self._pushFactories.append(factory)
         self.reactor.connectTCP(host, port, factory)
 
+    def _receivedPush(self, inboundID):
+        for href, id in self.ampPushKeys.iteritems():
+            if inboundID == id:
+                self._checkCalendarsForEvents(href, push=True)
+                break
+        else:
+            # somehow we are not subscribed to this id
+            pass
 
+
+    def _monitorAmpPush(self, home, pushKeys):
+        """
+        Start monitoring for AMP-based push notifications
+        """
+        subscribeToIDs(self.ampPushHost, self.ampPushPort, pushKeys,
+            self._receivedPush, self.reactor)
+
+
     @inlineCallbacks
     def _unsubscribePubSub(self):
         for factory in self._pushFactories:
@@ -950,6 +984,11 @@
             self._monitorPubSub(calendarHome, self.xmpp[calendarHome])
             # Run indefinitely.
             yield Deferred()
+        elif self.supportAmpPush and calendarHome in self.ampPushKeys:
+            pushKeys = self.ampPushKeys.values()
+            self._monitorAmpPush(calendarHome, pushKeys)
+            # Run indefinitely.
+            yield Deferred()
         else:
             # This completes when the calendar home poll loop completes, which
             # currently it never will except due to an unexpected error.
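The `_receivedPush` hunk above relies on Python's `for`/`else`: the `else` arm runs only when the loop completes without hitting `break`, i.e. when no subscribed push key matched the inbound ID. A toy sketch of that dispatch shape (function and callback names are illustrative, not the client API):

```python
def dispatch_push(inbound_id, push_keys, on_match, on_unknown):
    # push_keys maps calendar-home href -> subscribed push key,
    # mirroring ampPushKeys in the client above.
    for href, key in push_keys.items():
        if inbound_id == key:
            on_match(href)
            break
    else:
        # Loop finished without break: we never subscribed to this key.
        on_unknown(inbound_id)

hits = []
dispatch_push("key-2",
              {"/home/a/": "key-1", "/home/b/": "key-2"},
              on_match=hits.append,
              on_unknown=lambda _id: hits.append("unknown"))
assert hits == ["/home/b/"]
```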

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/sim
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/sim	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/contrib/performance/sim	2012-05-02 18:54:12 UTC (rev 9221)
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python
 ##
 # Copyright (c) 2011 Apple Inc. All rights reserved.
 #

Deleted: CalendarServer/branches/users/gaya/ldapdirectorybacker/doc/RFC/draft-daboo-webdav-sync.txt
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/doc/RFC/draft-daboo-webdav-sync.txt	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/doc/RFC/draft-daboo-webdav-sync.txt	2012-05-02 18:54:12 UTC (rev 9221)
@@ -1,1624 +0,0 @@
-
-
-
-Network Working Group                                           C. Daboo
-Internet-Draft                                               Apple, Inc.
-Intended status: Standards Track                             A. Quillaud
-Expires: January 12, 2012                                         Oracle
-                                                           July 11, 2011
-
-
-                 Collection Synchronization for WebDAV
-                       draft-daboo-webdav-sync-06
-
-Abstract
-
-   This specification defines an extension to WebDAV that allows
-   efficient synchronization of the contents of a WebDAV collection.
-
-Editorial Note (To be removed by RFC Editor before publication)
-
-   Please send comments to the Distributed Authoring and Versioning
-   (WebDAV) working group at <mailto:w3c-dist-auth at w3.org>, which may be
-   joined by sending a message with subject "subscribe" to
-   <mailto:w3c-dist-auth-request at w3.org>.  Discussions of the WEBDAV
-   working group are archived at
-   <http://lists.w3.org/Archives/Public/w3c-dist-auth/>.
-
-Status of This Memo
-
-   This Internet-Draft is submitted in full conformance with the
-   provisions of BCP 78 and BCP 79.
-
-   Internet-Drafts are working documents of the Internet Engineering
-   Task Force (IETF).  Note that other groups may also distribute
-   working documents as Internet-Drafts.  The list of current Internet-
-   Drafts is at http://datatracker.ietf.org/drafts/current/.
-
-   Internet-Drafts are draft documents valid for a maximum of six months
-   and may be updated, replaced, or obsoleted by other documents at any
-   time.  It is inappropriate to use Internet-Drafts as reference
-   material or to cite them other than as "work in progress."
-
-   This Internet-Draft will expire on January 12, 2012.
-
-Copyright Notice
-
-   Copyright (c) 2011 IETF Trust and the persons identified as the
-   document authors.  All rights reserved.
-
-   This document is subject to BCP 78 and the IETF Trust's Legal
-   Provisions Relating to IETF Documents
-
-
-
-Daboo & Quillaud        Expires January 12, 2012                [Page 1]
-
-Internet-Draft                 WebDAV Sync                     July 2011
-
-
-   (http://trustee.ietf.org/license-info) in effect on the date of
-   publication of this document.  Please review these documents
-   carefully, as they describe your rights and restrictions with respect
-   to this document.  Code Components extracted from this document must
-   include Simplified BSD License text as described in Section 4.e of
-   the Trust Legal Provisions and are provided without warranty as
-   described in the Simplified BSD License.
-
-   This document may contain material from IETF Documents or IETF
-   Contributions published or made publicly available before November
-   10, 2008.  The person(s) controlling the copyright in some of this
-   material may not have granted the IETF Trust the right to allow
-   modifications of such material outside the IETF Standards Process.
-   Without obtaining an adequate license from the person(s) controlling
-   the copyright in such materials, this document may not be modified
-   outside the IETF Standards Process, and derivative works of it may
-   not be created outside the IETF Standards Process, except to format
-   it for publication as an RFC or to translate it into languages other
-   than English.
-
-
-
-
-Table of Contents
-
-   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  4
-   2.  Conventions Used in This Document  . . . . . . . . . . . . . .  4
-   3.  WebDAV Synchronization . . . . . . . . . . . . . . . . . . . .  5
-     3.1.  Overview . . . . . . . . . . . . . . . . . . . . . . . . .  5
-     3.2.  DAV:sync-collection Report . . . . . . . . . . . . . . . .  6
-     3.3.  Depth behavior . . . . . . . . . . . . . . . . . . . . . .  8
-     3.4.  Types of Changes Reported on Initial Synchronization . . .  9
-     3.5.  Types of Changes Reported on Subsequent
-           Synchronizations . . . . . . . . . . . . . . . . . . . . .  9
-       3.5.1.  Changed Member . . . . . . . . . . . . . . . . . . . .  9
-       3.5.2.  Removed Member . . . . . . . . . . . . . . . . . . . . 10
-     3.6.  Truncation of Results  . . . . . . . . . . . . . . . . . . 10
-     3.7.  Limiting Results . . . . . . . . . . . . . . . . . . . . . 11
-     3.8.  Example: Initial DAV:sync-collection Report  . . . . . . . 11
-     3.9.  Example: DAV:sync-collection Report with Token . . . . . . 13
-     3.10. Example: Initial DAV:sync-collection Report with
-           Truncation . . . . . . . . . . . . . . . . . . . . . . . . 16
-     3.11. Example: Initial DAV:sync-collection Report with Limit . . 17
-     3.12. Example: DAV:sync-collection Report with Unsupported
-           Limit  . . . . . . . . . . . . . . . . . . . . . . . . . . 19
-     3.13. Example: Depth:infinity initial DAV:sync-collection
-           Report . . . . . . . . . . . . . . . . . . . . . . . . . . 19
-   4.  DAV:sync-token Property  . . . . . . . . . . . . . . . . . . . 22
-   5.  DAV:sync-token Use with If Header  . . . . . . . . . . . . . . 22
-     5.1.  Example: If Pre-Condition with PUT . . . . . . . . . . . . 23
-     5.2.  Example: If Pre-Condition with MKCOL . . . . . . . . . . . 23
-   6.  XML Element Definitions  . . . . . . . . . . . . . . . . . . . 24
-     6.1.  DAV:sync-collection XML Element  . . . . . . . . . . . . . 24
-     6.2.  DAV:sync-token XML Element . . . . . . . . . . . . . . . . 24
-     6.3.  DAV:multistatus XML Element  . . . . . . . . . . . . . . . 25
-   7.  Security Considerations  . . . . . . . . . . . . . . . . . . . 25
-   8.  IANA Considerations  . . . . . . . . . . . . . . . . . . . . . 26
-   9.  Acknowledgments  . . . . . . . . . . . . . . . . . . . . . . . 26
-   10. References . . . . . . . . . . . . . . . . . . . . . . . . . . 26
-     10.1. Normative References . . . . . . . . . . . . . . . . . . . 26
-     10.2. Informative References . . . . . . . . . . . . . . . . . . 27
-   Appendix A.  Change History (to be removed prior to
-                publication as an RFC)  . . . . . . . . . . . . . . . 27
-
-
-
-1.  Introduction
-
-   WebDAV [RFC4918] defines the concept of 'collections' which are
-   hierarchical groupings of WebDAV resources on an HTTP [RFC2616]
-   server.  Collections can be of arbitrary size and depth (i.e.,
-   collections within collections).  WebDAV clients that cache resource
-   content need a way to synchronize that data with the server (i.e.,
-   detect what has changed and update their cache).  This can currently
-   be done using a WebDAV PROPFIND request on a collection to list all
-   members of a collection along with their DAV:getetag property values,
-   which allows the client to determine which were changed, added or
-   deleted.  However, this does not scale well to large collections as
-   the XML response to the PROPFIND request will grow with the
-   collection size.
-
-   This specification defines a new WebDAV report that results in the
-   server returning to the client only information about those member
-   URIs that were added or deleted, or whose mapped resources were
-   changed, since a previous execution of the report on the collection.
-
-   Additionally, a new property is added to collection resources that is
-   used to convey a "synchronization token" that is guaranteed to change
-   when the collection's member URIs or their mapped resources have
-   changed.
-
-2.  Conventions Used in This Document
-
-   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
-   document are to be interpreted as described in [RFC2119].
-
-   This document uses XML DTD fragments ([W3C.REC-xml-20081126], Section
-   3.2) as a purely notational convention.  WebDAV request and response
-   bodies cannot be validated by a DTD due to the specific extensibility
-   rules defined in Section 17 of [RFC4918] and due to the fact that all
-   XML elements defined by this specification use the XML namespace name
-   "DAV:".  In particular:
-
-   1.  element names use the "DAV:" namespace,
-
-   2.  element ordering is irrelevant unless explicitly stated,
-
-   3.  extension elements (elements not already defined as valid child
-       elements) may be added anywhere, except when explicitly stated
-       otherwise,
-
-   4.  extension attributes (attributes not already defined as valid for
-       this element) may be added anywhere, except when explicitly
-       stated otherwise.
-
-   When an XML element type in the "DAV:" namespace is referenced in
-   this document outside of the context of an XML fragment, the string
-   "DAV:" will be prefixed to the element type.
-
-   This document inherits, and sometimes extends, DTD productions from
-   Section 14 of [RFC4918].
-
-3.  WebDAV Synchronization
-
-3.1.  Overview
-
-   One way to synchronize data between two entities is to use some form
-   of synchronization token.  The token defines the state of the data
-   being synchronized at a particular point in time.  It can then be
-   used to determine what has changed between one point in time and
-   another.
-
-   This specification defines a new WebDAV report that is used to enable
-   client-server collection synchronization based on such a token.
-
-   In order to synchronize the contents of a collection between a server
-   and client, the server provides the client with a synchronization
-   token each time the synchronization report is executed.  That token
-   represents the state of the data being synchronized at that point in
-   time.  The client can then present that same token back to the server
-   at some later time and the server will return only those items that
-   are new, have changed or were deleted since that token was generated.
-   The server also returns a new token representing the new state at the
-   time the report was run.
-
-   Typically, the first time a client connects to the server it will
-   need to be informed of the entire state of the collection (i.e., a
-   full list of all member URIs that are currently in the collection).
-   That is done by the client sending an empty token value to the
-   server.  This indicates to the server that a full listing is
-   required.
-
-   As an alternative, the client might choose to do its first
-   synchronization using some other mechanism on the collection (e.g.
-   some other form of batch resource information retrieval such as
-   PROPFIND, SEARCH [RFC5323], or specialized REPORTs such as those
-   defined in CalDAV [RFC4791] and CardDAV [I-D.ietf-vcarddav-carddav])
-   and ask for the DAV:sync-token property to be returned.  This
-   property (defined in Section 4) contains the same token that can be
-   used later on to issue a DAV:sync-collection report.
-
-
-
-
-   In some cases a server might only wish to maintain a limited amount
-   of history about changes to a collection.  In that situation it will
-   return an error to the client when the client presents a token that
-   is "out of date".  At that point the client has to fall back to
-   synchronizing the entire collection by re-running the report request
-   using an empty token value.
-
-3.2.  DAV:sync-collection Report
-
-   If the DAV:sync-collection report is implemented by a WebDAV server,
-   then the server MUST list the report in the "DAV:supported-report-
-   set" property on any collection supporting synchronization.
-
-   To implement the behavior for this report a server needs to keep
-   track of changes to any member URIs and their mapped resources in a
-   collection (as defined in Section 3 of [RFC4918]).  This includes
-   noting the addition of new member URIs, changes to the mapped
-   resources of existing member URIs, and removal of member URIs.  The
-   server will track each change and provide a synchronization "token"
-   to the client that describes the state of the server at a specific
-   point in time.  This "token" is returned as part of the response to
-   the "sync-collection" report.  Clients include the last token they
-   got from the server in the next "sync-collection" report that they
-   execute and the server provides the changes from the previous state,
-   represented by the token, to the current state, represented by the
-   new token returned.
-
-   The synchronization token itself is an "opaque" string - i.e., the
-   actual string data has no specific meaning or syntax.  However, the
-   token MUST be a valid URI to allow its use in an If pre-condition
-   request header (see Section 5).  For example, a simple implementation
-   of such a token could be a numeric counter that counts each change as
-   it occurs and relates that change to the specific object that
-   changed.  The numeric value could be appended to a "base" URI to form
-   the valid sync-token.
-
-   Marshalling:
-
-      The request URI MUST identify a collection.  The request body MUST
-      be a DAV:sync-collection XML element (see Section 6.1), which MUST
-      contain one DAV:sync-token XML element, and one DAV:prop XML
-      element, and MAY contain a DAV:limit XML element.
-
-      The request MUST include a Depth header with a value of "1" or
-      "infinity".
-
-      The response body for a successful request MUST be a DAV:
-      multistatus XML element, which MUST contain one DAV:sync-token
-      element in addition to one DAV:response element for each member
-      URI that was added, has had its mapped resource changed, or was
-      deleted since the last synchronization operation as specified by
-      the DAV:sync-token provided in the request.  A given member URI
-      MUST appear only once in the response.  In the case where multiple
-      member URIs of the request-URI are mapped to the same resource, if
-      the resource is changed, each member URI MUST be returned in the
-      response.
-
-      The content of each DAV:response element differs depending on how
-      the member was altered:
-
-         For members that have changed (i.e., are new or have had their
-         mapped resource modified) the DAV:response MUST contain at
-         least one DAV:propstat element and MUST NOT contain any DAV:
-         status element.
-
-         For members that have been removed, the DAV:response MUST
-         contain one DAV:status with a value set to '404 Not Found' and
-         MUST NOT contain any DAV:propstat element.
-
-         For members that are collections and are unable to support the
-         DAV:sync-collection report, the DAV:response MUST contain one
-         DAV:status with a value set to '403 Forbidden', a DAV:error
-         containing DAV:supported-report or DAV:sync-traversal-supported
-         (see Section 3.3 for which is appropriate), and MUST NOT
-         contain any DAV:propstat element.
-
-      The conditions under which each type of change can occur is
-      further described in Section 3.5.
-
-   Preconditions:
-
-      (DAV:valid-sync-token): The DAV:sync-token element value MUST be a
-      valid token previously returned by the server.  A token can become
-      invalid as the result of being "out of date" (out of the range of
-      change history maintained by the server), or for other reasons
-      (e.g. collection deleted, then recreated, access control changes,
-      etc...).
-
-   Postconditions:
-
-      (DAV:number-of-matches-within-limits): The number of changes
-      reported in the response must fall within the client specified
-      limit.  This condition might be triggered if a client requests a
-      limit on the number of responses (as per Section 3.7) but the
-      server is unable to truncate the result set at or below that
-      limit.
-
-
-
-3.3.  Depth behavior
-
-   Servers MUST support both Depth:1 and Depth:infinity behavior with
-   the DAV:sync-collection report.  Clients MUST include either a
-   Depth:1 or Depth:infinity request header with the DAV:sync-collection
-   report.
-
-   o  When the client specifies a Depth:1 request header, only
-      appropriate internal member URIs (immediate children) of the
-      collection specified as the request URI are reported.
-
-   o  When the client specifies a Depth:infinity request header, all
-      appropriate member URIs of the collection specified as the request
-      URI are reported, provided child collections themselves also
-      support the DAV:sync-collection report.
-
-   o  DAV:sync-token values returned by the server are not specific to
-      the value of the Depth header used in the request.  As such
-      clients MAY use a DAV:sync-token value from a request with one
-      Depth value for a similar request with a different Depth value,
-      however the utility of this is limited.
-
-   Note that when a server supports Depth:infinity reports, it might not
-   be possible to synchronize some child collections within the
-   collection targeted by the report.  When this occurs, the server MUST
-   include a DAV:response element for the child collection with status
-   '403 Forbidden'.  The 403 response MUST be sent once, when the
-   collection is first reported to the client.  In addition, the server
-   MUST include a DAV:error element in the DAV:response element,
-   indicating one of two possible causes for this:
-
-      The DAV:sync-collection report is not supported at all on the
-      child collection.  The DAV:error element MUST contain the DAV:
-      supported-report element.
-
-      The server is unwilling to report results for the child collection
-      when a Depth:infinity DAV:sync-collection report is executed on a
-      parent resource.  This might happen when, for example, the
-      synchronization state of the collection resource is controlled by
-      another sub-system.  In such cases clients can perform the DAV:
-      sync-collection report directly on the child collection instead.
-      The DAV:error element MUST contain the DAV:sync-traversal-
-      supported element.
-
-
-
-
-3.4.  Types of Changes Reported on Initial Synchronization
-
-   When the DAV:sync-collection request contains an empty DAV:sync-token
-   element, the server MUST return all member URIs of the collection
-   (taking account of Depth header requirements as per Section 3.3, and
-   optional truncation of results set as per Section 3.6) and it MUST
-   NOT return any removed member URIs.  All types of member (collection
-   or non-collection) MUST be reported.
-
-3.5.  Types of Changes Reported on Subsequent Synchronizations
-
-   When the DAV:sync-collection request contains a valid value for the
-   DAV:sync-token element, two types of member URI state changes can be
-   returned (changed or removed).  This section defines what triggers
-   each of these to be returned.  It also clarifies the case where a
-   member URI might have undergone multiple changes between two
-   synchronization report requests.  In all cases, the Depth header
-   requirements as per Section 3.3, and optional truncation of results
-   set as per Section 3.6, are taken into account by the server.
-
-3.5.1.  Changed Member
-
-   A member URI MUST be reported as changed if it has been mapped as a
-   member of the target collection since the request sync-token was
-   generated.  This includes member URIs that have been mapped as the
-   result of a COPY, MOVE, BIND [RFC5842], or REBIND [RFC5842] request.
-   All types of member URI (collection or non-collection) MUST be
-   reported.
-
-   In the case where a mapping between a member URI and the target
-   collection was removed, then a new mapping with the same URI created,
-   the member URI MUST be reported as changed and MUST NOT be reported
-   as removed.
-
-   A member URI MUST be reported as changed if its mapped resource's
-   entity tag value (defined in Section 3.11 of [RFC2616]) has changed
-   since the request sync-token was generated.
-
-   A member URI MAY be reported as changed if the user issuing the
-   request was granted access to this member URI, due to access control
-   changes.
-
-   Collection member URIs MUST be returned as changed if they are mapped
-   to an underlying resource (i.e., entity body) and if the entity tag
-   associated with that resource changes.  There is no guarantee that
-   changes to members of a collection will result in a change in any
-   entity tag of that collection, so clients cannot rely on a series of
-   Depth:1 reports at multiple levels to track all changes within a
-
-
-
-Daboo & Quillaud        Expires January 12, 2012                [Page 9]
-
-Internet-Draft                 WebDAV Sync                     July 2011
-
-
-   collection.  Instead, Depth:infinity has to be used.
-
-3.5.2.  Removed Member
-
-   A member MUST be reported as removed if its mapping under the target
-   collection has been removed since the request sync-token was
-   generated, and it has not been re-mapped since it was removed.  This
-   includes members that have been unmapped as the result of a MOVE,
-   UNBIND [RFC5842], or REBIND [RFC5842] operation.  This also includes
-   collection members that have been removed, including ones that
-   themselves do not support the DAV:sync-collection report.
-
-   If a member was added (and its mapped resource possibly modified),
-   then removed between two synchronization report requests, it MUST be
-   reported as removed.  This ensures that a client that adds a member
-   is informed of the removal of the member, if the removal occurs
-   before the client has had a chance to execute a synchronization
-   report.
-
-   A member MAY be reported as removed if the user issuing the request
-   no longer has access to this member, due to access control changes.
-
-   For a Depth:infinity report where a collection is removed, the server
-   MUST NOT report the removal of any members of the removed collection.
-   Clients MUST assume that if a collection is reported as being
-   removed, then all members of that collection have also been removed.
-
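The classification rules in Sections 3.5.1 and 3.5.2 above can be sketched as a small server-side routine.  This is a non-normative illustration; the mapping- and entity-tag-history representation is hypothetical, not something the specification defines:

```python
def classify_member(mapping_history, etag_history, token_time):
    """Classify one member URI for a sync report, per Sections
    3.5.1/3.5.2: return 'changed', 'removed', or None (not reported).

    mapping_history: [(timestamp, 'mapped' | 'unmapped'), ...] in order
    etag_history:    [(timestamp, etag), ...] for the mapped resource
    token_time:      the point in time the request sync-token represents
    """
    recent = [kind for ts, kind in mapping_history if ts > token_time]
    currently_mapped = mapping_history[-1][1] == 'mapped'
    if currently_mapped:
        # Newly mapped (COPY/MOVE/BIND/REBIND), or unmapped and then
        # re-mapped since the token: report as changed, never removed.
        if 'mapped' in recent:
            return 'changed'
        # Entity tag changed since the sync-token was generated.
        if any(ts > token_time for ts, _ in etag_history):
            return 'changed'
        return None
    # Mapping removed since the token and not re-created since.
    return 'removed' if 'unmapped' in recent else None
```

Note how a member that was unmapped and then re-mapped after the token falls into the "changed" branch, matching the rule in Section 3.5.1 that such a member MUST NOT be reported as removed.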
-3.6.  Truncation of Results
-
-   A server MAY limit the number of member URIs in a response, for
-   example, to limit the amount of work expended in processing a
-   request, or as the result of an explicit limit set by the client.  If
-   the result set is truncated, the response MUST use status code 207,
-   return a DAV:multistatus response body, and indicate a status of 507
-   (Insufficient Storage) for the request URI.  That DAV:response
-   element SHOULD include a DAV:error element with the DAV:number-of-
-   matches-within-limits precondition, as defined in [RFC3744] (Section
-   9.2).  DAV:response elements for all the changes being reported are
-   also included.
-
-   When truncation occurs, the DAV:sync-token value returned in the
-   response MUST represent the correct state for the partial set of
-   changes returned.  That allows the client to use the returned DAV:
-   sync-token to fetch the next set of changes.  In this way the client
-   can effectively "page" through the entire set of changes in a
-   consistent manner.
-
-   Clients MUST handle the 507 status on the request-URI in the response
-   to the report.
-
-   For example, consider a server that records changes using a
-   monotonically increasing integer to represent a "revision number" and
-   uses that quantity as the DAV:sync-token value (appropriately encoded
-   as a URI).  Assume the last DAV:sync-token used by the client was
-   "http://example.com/sync/10", and since then 15 additional changes
-   have occurred.  If the client executes a DAV:sync-collection request
-   with a DAV:sync-token of "http://example.com/sync/10", without a
-   limit the server would return 15 DAV:response elements and a DAV:
-   sync-token with value "http://example.com/sync/25".  But if the
-   server chooses to limit responses to at most 10 changes, then it would
-   return only 10 DAV:response elements and a DAV:sync-token with value
-   "http://example.com/sync/20", together with an additional DAV:
-   response element for the request-URI with a status code of 507.
-   Subsequently, the client can re-issue the request with the DAV:sync-
-   token value returned from the server and fetch the remaining 5
-   changes.
-
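From the client side, the paging behaviour described above amounts to re-issuing the report with each returned token until the response no longer carries the 507 truncation marker.  A minimal sketch, where issue_sync_report is a hypothetical stand-in for one REPORT round trip:

```python
def collect_all_changes(issue_sync_report, token):
    """Page through a truncated DAV:sync-collection result set.

    issue_sync_report(token) -> (responses, new_token, truncated)
    models one REPORT request; 'truncated' is True when the
    multistatus carried a 507 status for the request-URI.
    """
    changes = []
    truncated = True
    while truncated:
        responses, token, truncated = issue_sync_report(token)
        changes.extend(responses)
    return changes, token
```

With the revision-number example above, a server limiting each response to 10 changes would yield one page whose token ends in "/sync/20" and a second, final page whose token ends in "/sync/25".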
-3.7.  Limiting Results
-
-   A client can limit the number of results returned by the server
-   through use of the DAV:limit element ([RFC5323], Section 5.17) in the
-   request body.  This is useful when clients have limited space or
-   bandwidth for the results.  If a server is unable to truncate the
-   result at or below the requested number, then it MUST fail the
-   request with a DAV:number-of-matches-within-limits post-condition
-   error.  When the results can be correctly limited by the server, the
-   server MUST follow the rules above for indicating a result set
-   truncation to the client.
-
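A request body combining DAV:sync-token and DAV:limit can be generated with any XML library; here is a sketch using Python's standard xml.etree.ElementTree (the helper name and defaults are ours, not part of the specification):

```python
import xml.etree.ElementTree as ET

# Serialize the DAV: namespace with the "D" prefix used in the examples.
ET.register_namespace('D', 'DAV:')

def sync_collection_body(sync_token='', limit=None, props=('getetag',)):
    """Build a DAV:sync-collection REPORT body, optionally limited."""
    D = '{DAV:}'
    root = ET.Element(D + 'sync-collection')
    # An empty sync-token element requests initial synchronization.
    ET.SubElement(root, D + 'sync-token').text = sync_token or None
    if limit is not None:
        lim = ET.SubElement(root, D + 'limit')
        ET.SubElement(lim, D + 'nresults').text = str(limit)
    prop = ET.SubElement(root, D + 'prop')
    for name in props:
        ET.SubElement(prop, D + name)
    return ET.tostring(root, encoding='unicode')
```

Calling sync_collection_body(limit=1) produces a body equivalent to the request in the example of Section 3.11.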
-3.8.  Example: Initial DAV:sync-collection Report
-
-   In this example, the client is making its first synchronization
-   request to the server, so the DAV:sync-token element in the request
-   is empty.  It also asks for the DAV:getetag property and for a
-   proprietary property.  The server responds with the items currently
-   in the targeted collection.  The current synchronization token is
-   also returned.
-
-   >> Request <<
-
-
-   REPORT /home/cyrusdaboo/ HTTP/1.1
-   Host: webdav.example.com
-   Depth: 1
-   Content-Type: text/xml; charset="utf-8"
-   Content-Length: xxxx
-
-   <?xml version="1.0" encoding="utf-8" ?>
-   <D:sync-collection xmlns:D="DAV:">
-     <D:sync-token/>
-     <D:prop xmlns:R="urn:ns.example.com:boxschema">
-       <D:getetag/>
-       <R:bigbox/>
-     </D:prop>
-   </D:sync-collection>
-
-
-   >> Response <<
-
-
-   HTTP/1.1 207 Multi-Status
-   Content-Type: text/xml; charset="utf-8"
-   Content-Length: xxxx
-
-   <?xml version="1.0" encoding="utf-8" ?>
-   <D:multistatus xmlns:D="DAV:">
-     <D:response>
-       <D:href
-   >http://webdav.example.com/home/cyrusdaboo/test.doc</D:href>
-       <D:propstat>
-         <D:prop>
-           <D:getetag>"00001-abcd1"</D:getetag>
-           <R:bigbox xmlns:R="urn:ns.example.com:boxschema">
-             <R:BoxType>Box type A</R:BoxType>
-           </R:bigbox>
-         </D:prop>
-         <D:status>HTTP/1.1 200 OK</D:status>
-       </D:propstat>
-     </D:response>
-     <D:response>
-       <D:href
-   >http://webdav.example.com/home/cyrusdaboo/vcard.vcf</D:href>
-       <D:propstat>
-         <D:prop>
-           <D:getetag>"00002-abcd1"</D:getetag>
-         </D:prop>
-         <D:status>HTTP/1.1 200 OK</D:status>
-       </D:propstat>
-       <D:propstat>
-         <D:prop>
-           <R:bigbox xmlns:R="urn:ns.example.com:boxschema"/>
-         </D:prop>
-         <D:status>HTTP/1.1 404 Not Found</D:status>
-       </D:propstat>
-     </D:response>
-     <D:response>
-       <D:href
-   >http://webdav.example.com/home/cyrusdaboo/calendar.ics</D:href>
-       <D:propstat>
-         <D:prop>
-           <D:getetag>"00003-abcd1"</D:getetag>
-         </D:prop>
-         <D:status>HTTP/1.1 200 OK</D:status>
-       </D:propstat>
-       <D:propstat>
-         <D:prop>
-           <R:bigbox xmlns:R="urn:ns.example.com:boxschema"/>
-         </D:prop>
-         <D:status>HTTP/1.1 404 Not Found</D:status>
-       </D:propstat>
-     </D:response>
-     <D:sync-token>http://example.com/ns/sync/1234</D:sync-token>
-   </D:multistatus>
-
-
-3.9.  Example: DAV:sync-collection Report with Token
-
-   In this example, the client is making a synchronization request to
-   the server and is using the DAV:sync-token element returned from the
-   last report it ran on this collection.  The server responds, listing
-   the items that have been added, changed or removed.  The (new)
-   current synchronization token is also returned.
-
-   >> Request <<
-
-
-   REPORT /home/cyrusdaboo/ HTTP/1.1
-   Host: webdav.example.com
-   Depth: 1
-   Content-Type: text/xml; charset="utf-8"
-   Content-Length: xxxx
-
-   <?xml version="1.0" encoding="utf-8" ?>
-   <D:sync-collection xmlns:D="DAV:">
-     <D:sync-token>http://example.com/ns/sync/1234</D:sync-token>
-     <D:prop xmlns:R="urn:ns.example.com:boxschema">
-       <D:getetag/>
-       <R:bigbox/>
-     </D:prop>
-   </D:sync-collection>
-
-   >> Response <<
-
-
-   HTTP/1.1 207 Multi-Status
-   Content-Type: text/xml; charset="utf-8"
-   Content-Length: xxxx
-
-   <?xml version="1.0" encoding="utf-8" ?>
-   <D:multistatus xmlns:D="DAV:">
-     <D:response>
-       <D:href
-   >http://webdav.example.com/home/cyrusdaboo/file.xml</D:href>
-       <D:propstat>
-         <D:prop>
-           <D:getetag>"00004-abcd1"</D:getetag>
-         </D:prop>
-         <D:status>HTTP/1.1 200 OK</D:status>
-       </D:propstat>
-       <D:propstat>
-         <D:prop>
-           <R:bigbox xmlns:R="urn:ns.example.com:boxschema"/>
-         </D:prop>
-         <D:status>HTTP/1.1 404 Not Found</D:status>
-       </D:propstat>
-     </D:response>
-     <D:response>
-       <D:href
-   >http://webdav.example.com/home/cyrusdaboo/vcard.vcf</D:href>
-       <D:propstat>
-         <D:prop>
-           <D:getetag>"00002-abcd2"</D:getetag>
-         </D:prop>
-         <D:status>HTTP/1.1 200 OK</D:status>
-       </D:propstat>
-       <D:propstat>
-         <D:prop>
-           <R:bigbox xmlns:R="urn:ns.example.com:boxschema"/>
-         </D:prop>
-         <D:status>HTTP/1.1 404 Not Found</D:status>
-       </D:propstat>
-     </D:response>
-     <D:response>
-       <D:href
-   >http://webdav.example.com/home/cyrusdaboo/test.doc</D:href>
-       <D:status>HTTP/1.1 404 Not Found</D:status>
-     </D:response>
-     <D:sync-token>http://example.com/ns/sync/1238</D:sync-token>
-   </D:multistatus>
-
-3.10.  Example: Initial DAV:sync-collection Report with Truncation
-
-   In this example, the client is making its first synchronization
-   request to the server, so the DAV:sync-token element in the request
-   is empty.  It also asks for the DAV:getetag property.  The server
-   responds with the items currently in the targeted collection, but
-   truncated at two items.  The synchronization token for the truncated
-   result set is returned.
-
-   >> Request <<
-
-
-   REPORT /home/cyrusdaboo/ HTTP/1.1
-   Host: webdav.example.com
-   Depth: 1
-   Content-Type: text/xml; charset="utf-8"
-   Content-Length: xxxx
-
-   <?xml version="1.0" encoding="utf-8" ?>
-   <D:sync-collection xmlns:D="DAV:">
-     <D:sync-token/>
-     <D:prop>
-       <D:getetag/>
-     </D:prop>
-   </D:sync-collection>
-
-   >> Response <<
-
-
-   HTTP/1.1 207 Multi-Status
-   Content-Type: text/xml; charset="utf-8"
-   Content-Length: xxxx
-
-   <?xml version="1.0" encoding="utf-8" ?>
-   <D:multistatus xmlns:D="DAV:">
-     <D:response>
-       <D:href
-   >http://webdav.example.com/home/cyrusdaboo/test.doc</D:href>
-       <D:propstat>
-         <D:prop>
-           <D:getetag>"00001-abcd1"</D:getetag>
-         </D:prop>
-         <D:status>HTTP/1.1 200 OK</D:status>
-       </D:propstat>
-     </D:response>
-     <D:response>
-       <D:href
-   >http://webdav.example.com/home/cyrusdaboo/vcard.vcf</D:href>
-       <D:propstat>
-         <D:prop>
-           <D:getetag>"00002-abcd1"</D:getetag>
-         </D:prop>
-         <D:status>HTTP/1.1 200 OK</D:status>
-       </D:propstat>
-     </D:response>
-     <D:response>
-       <D:href
-   >http://webdav.example.com/home/cyrusdaboo/</D:href>
-       <D:status>HTTP/1.1 507 Insufficient Storage</D:status>
-       <D:error><D:number-of-matches-within-limits/></D:error>
-     </D:response>
-     <D:sync-token>http://example.com/ns/sync/1233</D:sync-token>
-   </D:multistatus>
-
-
-3.11.  Example: Initial DAV:sync-collection Report with Limit
-
-   In this example, the client is making its first synchronization
-   request to the server, so the DAV:sync-token element in the request
-   is empty.  It requests a limit of 1 for the responses returned by the
-   server.  It also asks for the DAV:getetag property.  The server
-   responds with the items currently in the targeted collection, but
-   truncated at one item.  The synchronization token for the truncated
-   result set is returned.
-
-   >> Request <<
-
-
-   REPORT /home/cyrusdaboo/ HTTP/1.1
-   Host: webdav.example.com
-   Depth: 1
-   Content-Type: text/xml; charset="utf-8"
-   Content-Length: xxxx
-
-   <?xml version="1.0" encoding="utf-8" ?>
-   <D:sync-collection xmlns:D="DAV:">
-     <D:sync-token/>
-     <D:limit>
-       <D:nresults>1</D:nresults>
-     </D:limit>
-     <D:prop>
-       <D:getetag/>
-     </D:prop>
-   </D:sync-collection>
-
-
-   >> Response <<
-
-
-   HTTP/1.1 207 Multi-Status
-   Content-Type: text/xml; charset="utf-8"
-   Content-Length: xxxx
-
-   <?xml version="1.0" encoding="utf-8" ?>
-   <D:multistatus xmlns:D="DAV:">
-     <D:response>
-       <D:href
-   >http://webdav.example.com/home/cyrusdaboo/test.doc</D:href>
-       <D:propstat>
-         <D:prop>
-           <D:getetag>"00001-abcd1"</D:getetag>
-         </D:prop>
-         <D:status>HTTP/1.1 200 OK</D:status>
-       </D:propstat>
-     </D:response>
-     <D:response>
-       <D:href
-   >http://webdav.example.com/home/cyrusdaboo/</D:href>
-       <D:status>HTTP/1.1 507 Insufficient Storage</D:status>
-       <D:error><D:number-of-matches-within-limits/></D:error>
-     </D:response>
-     <D:sync-token>http://example.com/ns/sync/1232</D:sync-token>
-   </D:multistatus>
-
-3.12.  Example: DAV:sync-collection Report with Unsupported Limit
-
-   In this example, the client is making a synchronization request to
-   the server with a valid DAV:sync-token element value.  It requests a
-   limit of 100 for the responses returned by the server.  It also asks
-   for the DAV:getetag property.  The server is unable to limit the
-   results to the maximum specified by the client, so it responds with a
-   507 status code and appropriate post-condition error code.
-
-   >> Request <<
-
-
-   REPORT /home/cyrusdaboo/ HTTP/1.1
-   Host: webdav.example.com
-   Depth: 1
-   Content-Type: text/xml; charset="utf-8"
-   Content-Length: xxxx
-
-   <?xml version="1.0" encoding="utf-8" ?>
-   <D:sync-collection xmlns:D="DAV:">
-     <D:sync-token>http://example.com/ns/sync/1232</D:sync-token>
-     <D:limit>
-       <D:nresults>100</D:nresults>
-     </D:limit>
-     <D:prop>
-       <D:getetag/>
-     </D:prop>
-   </D:sync-collection>
-
-
-   >> Response <<
-
-
-   HTTP/1.1 507 Insufficient Storage
-   Content-Type: text/xml; charset="utf-8"
-   Content-Length: xxxx
-
-   <?xml version="1.0" encoding="utf-8" ?>
-   <D:error xmlns:D="DAV:">
-     <D:number-of-matches-within-limits/>
-   </D:error>
-
-
-3.13.  Example: Depth:infinity initial DAV:sync-collection Report
-
-   In this example, the client is making its first synchronization
-   request to the server, so the DAV:sync-token element in the request
-   is empty, and it is using Depth:infinity.  It also asks for the DAV:
-   getetag property and for a proprietary property.  The server responds
-   with the items currently in the targeted collection.  The current
-   synchronization token is also returned.
-
-   The collection /home/cyrusdaboo/collection1/ exists and has one child
-   resource which is also reported.  The collection /home/cyrusdaboo/
-   collection2/ exists but has no child resources.  The collection
-   /home/cyrusdaboo/shared/ is returned with a 403 status indicating
-   that a collection exists but it is unable to report on changes within
-   it in the scope of the current Depth:infinity report.  Instead the
-   client can try a DAV:sync-collection report directly on the
-   collection URI.
-
-   >> Request <<
-
-
-   REPORT /home/cyrusdaboo/ HTTP/1.1
-   Host: webdav.example.com
-   Depth: infinity
-   Content-Type: text/xml; charset="utf-8"
-   Content-Length: xxxx
-
-   <?xml version="1.0" encoding="utf-8" ?>
-   <D:sync-collection xmlns:D="DAV:">
-     <D:sync-token/>
-     <D:prop xmlns:R="urn:ns.example.com:boxschema">
-       <D:getetag/>
-       <R:bigbox/>
-     </D:prop>
-   </D:sync-collection>
-
-
-   >> Response <<
-
-
-   HTTP/1.1 207 Multi-Status
-   Content-Type: text/xml; charset="utf-8"
-   Content-Length: xxxx
-
-   <?xml version="1.0" encoding="utf-8" ?>
-   <D:multistatus xmlns:D="DAV:">
-     <D:response>
-       <D:href>/home/cyrusdaboo/collection1/</D:href>
-       <D:propstat>
-         <D:prop>
-           <D:getetag>"00001-abcd1"</D:getetag>
-           <R:bigbox xmlns:R="urn:ns.example.com:boxschema">
-             <R:BoxType>Box type A</R:BoxType>
-           </R:bigbox>
-         </D:prop>
-         <D:status>HTTP/1.1 200 OK</D:status>
-       </D:propstat>
-     </D:response>
-     <D:response>
-       <D:href>/home/cyrusdaboo/collection1/test.doc</D:href>
-       <D:propstat>
-         <D:prop>
-           <D:getetag>"00001-abcd1"</D:getetag>
-           <R:bigbox xmlns:R="urn:ns.example.com:boxschema">
-             <R:BoxType>Box type A</R:BoxType>
-           </R:bigbox>
-         </D:prop>
-         <D:status>HTTP/1.1 200 OK</D:status>
-       </D:propstat>
-     </D:response>
-     <D:response>
-       <D:href>/home/cyrusdaboo/collection2/</D:href>
-       <D:propstat>
-         <D:prop>
-           <D:getetag/>
-         </D:prop>
-         <D:status>HTTP/1.1 404 Not Found</D:status>
-       </D:propstat>
-       <D:propstat>
-         <D:prop>
-           <R:bigbox xmlns:R="urn:ns.example.com:boxschema"/>
-         </D:prop>
-         <D:status>HTTP/1.1 404 Not Found</D:status>
-       </D:propstat>
-     </D:response>
-     <D:response>
-       <D:href>/home/cyrusdaboo/calendar.ics</D:href>
-       <D:propstat>
-         <D:prop>
-           <D:getetag>"00003-abcd1"</D:getetag>
-         </D:prop>
-         <D:status>HTTP/1.1 200 OK</D:status>
-       </D:propstat>
-       <D:propstat>
-         <D:prop>
-           <R:bigbox xmlns:R="urn:ns.example.com:boxschema"/>
-         </D:prop>
-         <D:status>HTTP/1.1 404 Not Found</D:status>
-       </D:propstat>
-     </D:response>
-     <D:response>
-       <D:href>/home/cyrusdaboo/shared/</D:href>
-       <D:status>HTTP/1.1 403 Forbidden</D:status>
-       <D:error><D:sync-traversal-supported/></D:error>
-     </D:response>
-     <D:sync-token>http://example.com/ns/sync/1234</D:sync-token>
-   </D:multistatus>
-
-
-4.  DAV:sync-token Property
-
-   Name:  sync-token
-
-   Namespace:  DAV:
-
-   Purpose:  Contains the value of the synchronization token as it would
-      be returned by a DAV:sync-collection report.
-
-   Value:  Any valid URI.
-
-   Protected:  MUST be protected because this value is created and
-      controlled by the server.
-
-   COPY/MOVE behavior:  This property value is dependent on the final
-      state of the destination resource, not the value of the property
-      on the source resource.
-
-   Description:  The DAV:sync-token property MUST be defined on all
-      resources that support the DAV:sync-collection report.  It
-      contains the value of the synchronization token as it would be
-      returned by a DAV:sync-collection report on that resource at the
-      same point in time.  It SHOULD NOT be returned by a PROPFIND DAV:
-      allprop request (as defined in Section 14.2 of [RFC4918]).
-
-   Definition:
-
-
-   <!ELEMENT sync-token (#PCDATA)>
-   <!-- Text MUST be a valid URI -->
-
-
-5.  DAV:sync-token Use with If Header
-
-   WebDAV provides an If pre-condition header that allows for "state
-   tokens" to be used as pre-conditions on HTTP requests (as defined in
-   Section 10.4 of [RFC4918]).  This specification allows the DAV:sync-
-   token value to be used as one such token in an If header.  By using
-   this, clients can ensure requests only complete when there have been
-   no changes to the content of a collection, by virtue of an unchanged
-   DAV:sync-token value.  Servers MUST support use of DAV:sync-token
-   values in If request headers.
-
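Building such a precondition is a one-liner.  The sketch below (a hypothetical helper, not from the specification) uses the resource-tag form of the If header so the state token is evaluated against the collection rather than the request-URI:

```python
def if_precondition(collection_url, sync_token):
    """Return an If header that makes a request conditional on the
    collection's DAV:sync-token being unchanged.  The leading
    <resource-tag> ties the state token to the collection, not to
    the resource targeted by the request.
    """
    return 'If: <%s> (<%s>)' % (collection_url, sync_token)
```

For instance, if_precondition('/home/cyrusdaboo/collection/', 'http://example.com/ns/sync/12345') reproduces the header used in the PUT example of Section 5.1.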
-5.1.  Example: If Pre-Condition with PUT
-
-   In this example, the client has already used the DAV:sync-collection
-   report to synchronize the collection /home/cyrusdaboo/collection/.
-   Now it wants to add a new resource to the collection, but only if
-   there have been no other changes since the last synchronization.
-   Note that, because the DAV:sync-token is defined on the collection
-   and not on the resource targeted by the request, the If header value
-   needs to use the "Resource_Tag" construct for the header syntax to
-   correctly identify that the supplied state token refers to the
-   collection resource.
-
-   >> Request <<
-
-
-   PUT /home/cyrusdaboo/collection/newresource.txt HTTP/1.1
-   Host: webdav.example.com
-   If: </home/cyrusdaboo/collection/>
-     (<http://example.com/ns/sync/12345>)
-   Content-Type: text/plain; charset="utf-8"
-   Content-Length: xxxx
-
-   Some content here...
-
-
-   >> Response <<
-
-
-   HTTP/1.1 201 Created
-
-
-5.2.  Example: If Pre-Condition with MKCOL
-
-   In this example, the client has already used the DAV:sync-collection
-   report to synchronize the collection /home/cyrusdaboo/collection/.
-   Now it wants to add a new collection to the collection, but only if
-   there have been no other changes since the last synchronization.
-   Note that, because the DAV:sync-token is defined on the collection
-   and not on the resource targeted by the request, the If header value
-   needs to use the "Resource_Tag" construct for the header syntax to
-   correctly identify that the supplied state token refers to the
-   collection resource.  In this case the request fails as another
-   change has occurred to the collection corresponding to the supplied
-   DAV:sync-token.
-
-   >> Request <<
-
-
-   MKCOL /home/cyrusdaboo/collection/child/ HTTP/1.1
-   Host: webdav.example.com
-   If: </home/cyrusdaboo/collection/>
-     (<http://example.com/ns/sync/12346>)
-
-
-   >> Response <<
-
-
-   HTTP/1.1 412 Precondition Failed
-
-
-6.  XML Element Definitions
-
-6.1.  DAV:sync-collection XML Element
-
-   Name:  sync-collection
-
-   Namespace:  DAV:
-
-   Purpose:  WebDAV report used to synchronize data between client and
-      server.
-
-   Description:  See Section 3.
-
-
-
-   <!ELEMENT sync-collection (sync-token, DAV:limit?, DAV:prop)>
-
-   <!-- DAV:limit defined in RFC 5323, Section 5.17 -->
-   <!-- DAV:prop defined in RFC 4918, Section 14.18 -->
-
-
-6.2.  DAV:sync-token XML Element
-
-   Name:  sync-token
-
-   Namespace:  DAV:
-
-   Purpose:  The synchronization token provided by the server and
-      returned by the client.
-
-   Description:  See Section 3.
-
-
-
-   <!ELEMENT sync-token (#PCDATA)>
-   <!-- Text MUST be a URI -->
-
-
-6.3.  DAV:multistatus XML Element
-
-   Name:  multistatus
-
-   Namespace:  DAV:
-
-   Purpose:  Extends the DAV:multistatus element to include
-      synchronization details.
-
-   Description:  See Section 3.
-
-
-
-   <!ELEMENT multistatus (DAV:response*, DAV:responsedescription?,
-                          sync-token?) >
-
-   <!-- DAV:multistatus originally defined in RFC 4918, Section 14.16
-        but overridden here to add the DAV:sync-token element -->
-   <!-- DAV:response defined in RFC 4918, Section 14.24 -->
-   <!-- DAV:responsedescription defined in RFC 4918, Section 14.25 -->
-
-
-7.  Security Considerations
-
-   Servers MUST take care to limit the scope of DAV:sync-collection
-   requests so that clients cannot use excessive server resources by
-   executing, for example, a Depth:infinity report on the root URI.  For
-   example, CalDAV [RFC4791] servers might only support the DAV:sync-
-   collection report on user calendar home collections, and prevent use
-   of the report on the parent resource of all calendar homes (assuming
-   there is one).  That way each individual user's request is scoped to
-   changes only within their own calendar home and not across the entire
-   set of calendar users.
-
-   Other than the above considerations, this extension does not
-   introduce any security concerns beyond those already described in
-   HTTP and WebDAV.
-
-8.  IANA Considerations
-
-   This document does not require any actions on the part of IANA.
-
-9.  Acknowledgments
-
-   The following individuals contributed their ideas and support for
-   writing this specification: Bernard Desruisseaux, Werner Donne, Mike
-   Douglass, Ciny Joy, Andrew McMillan, Julian Reschke, and Wilfredo
-   Sanchez.  We would like to thank the Calendaring and Scheduling
-   Consortium for facilitating interoperability testing for early
-   implementations of this specification.
-
-10.  References
-
-10.1.  Normative References
-
-   [RFC2119]                    Bradner, S., "Key words for use in RFCs
-                                to Indicate Requirement Levels", BCP 14,
-                                RFC 2119, March 1997.
-
-   [RFC2616]                    Fielding, R., Gettys, J., Mogul, J.,
-                                Frystyk, H., Masinter, L., Leach, P.,
-                                and T. Berners-Lee, "Hypertext Transfer
-                                Protocol -- HTTP/1.1", RFC 2616,
-                                June 1999.
-
-   [RFC3744]                    Clemm, G., Reschke, J., Sedlar, E., and
-                                J. Whitehead, "Web Distributed Authoring
-                                and Versioning (WebDAV)
-                                Access Control Protocol", RFC 3744,
-                                May 2004.
-
-   [RFC4918]                    Dusseault, L., "HTTP Extensions for Web
-                                Distributed Authoring and Versioning
-                                (WebDAV)", RFC 4918, June 2007.
-
-   [RFC5323]                    Reschke, J., Reddy, S., Davis, J., and
-                                A. Babich, "Web Distributed Authoring
-                                and Versioning (WebDAV) SEARCH",
-                                RFC 5323, November 2008.
-
-   [RFC5842]                    Clemm, G., Crawford, J., Reschke, J.,
-                                and J. Whitehead, "Binding Extensions to
-                                Web Distributed Authoring and Versioning
-                                (WebDAV)", RFC 5842, April 2010.
-
-   [W3C.REC-xml-20081126]       Paoli, J., Yergeau, F., Bray, T.,
-                                Sperberg-McQueen, C., and E. Maler,
-                                "Extensible Markup Language (XML) 1.0
-                                (Fifth Edition)", World Wide Web
-                                Consortium Recommendation REC-xml-
-                                20081126, November 2008, <http://
-                                www.w3.org/TR/2008/REC-xml-20081126>.
-
-10.2.  Informative References
-
-   [I-D.ietf-vcarddav-carddav]  Daboo, C., "vCard Extensions to WebDAV
-                                (CardDAV)",
-                                draft-ietf-vcarddav-carddav-10 (work in
-                                progress), November 2009.
-
-   [RFC4791]                    Daboo, C., Desruisseaux, B., and L.
-                                Dusseault, "Calendaring Extensions to
-                                WebDAV (CalDAV)", RFC 4791, March 2007.
-
-Appendix A.  Change History (to be removed prior to publication as an
-             RFC)
-
-   Changes in -06:
-
-   1.  Changed the 405 error into a 403 with a DAV:error element.
-
-   2.  Stated more clearly that both depth:1 and depth:infinity must be
-       supported.
-
-   3.  Tied up sync-token as URI changes.
-
-   4.  Made BIND a normative reference.
-
-   5.  Take into account REBIND.
-
-   6.  Reworked text to more accurately make the distinction between
-       member URIs and resources, which should clarify the interaction
-       with extensions like BIND.
-
-   Changes in -05:
-
-   1.  Added option to use DAV:sync-token as an If pre-condition state
-       token.
-
-   2.  DAV:sync-token value now required to be a URI so it can be used
-       in the If header.
-
-   Changes in -04:
-
-   1.  Depth:infinity support added.
-
-   2.  Collection resources are now reported as changed if they have a
-       valid entity tag associated with them.
-
-   Changes in -03:
-
-   1.  Changed D:propstat to D:prop in marshalling.
-
-   2.  Added request for dead property in examples.
-
-   3.  Made D:prop mandatory in request so that D:response always
-       contains at least one D:propstat as per WebDAV definition.
-
-   4.  Removed DAV:status from the response when a resource is created
-       or modified, thus making it possible to replace DAV:sync-response
-       with a regular DAV:response.  As a consequence, there is no
-       longer any difference in the report between created and modified
-       resources.
-
-   5.  A resource created, then removed between two synchronizations
-       MUST be reported as removed.
-
-   6.  Added ability for server to truncate results and indicate such to
-       the client.
-
-   7.  Added ability for client to request the server to limit the
-       result set.
-
-   Changes in -02:
-
-   1.  Added definition of sync-token WebDAV property.
-
-   2.  Added references to SEARCH, CalDAV, CardDAV as alternative ways
-       to first synchronize a collection.
-
-   3.  Added section defining under which condition each state change
-       (new, modified, removed) should be reported.  Added reference to
-       BIND.
-
-   4.  Incorporated feedback from Julian Reschke and Ciny Joy.
-
-   5.  More details on the use of the DAV:valid-sync-token precondition.
-
-   Changes in -01:
-
-   1.  Updated to 4918 reference.
-
-   2.  Fixed examples to properly include DAV:status in DAV:propstat
-
-   3.  Switch to using XML conventions text from RFC5323.
-
-Authors' Addresses
-
-   Cyrus Daboo
-   Apple Inc.
-   1 Infinite Loop
-   Cupertino, CA  95014
-   USA
-
-   EMail: cyrus at daboo.name
-   URI:   http://www.apple.com/
-
-
-   Arnaud Quillaud
-   Oracle Corporation
-   180, Avenue de l'Europe
-   Saint Ismier cedex,   38334
-   France
-
-   EMail: arnaud.quillaud at oracle.com
-   URI:   http://www.oracle.com/
-

Copied: CalendarServer/branches/users/gaya/ldapdirectorybacker/doc/RFC/rfc6578-WebDAV Sync.txt (from rev 9220, CalendarServer/trunk/doc/RFC/rfc6578-WebDAV Sync.txt)
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/doc/RFC/rfc6578-WebDAV Sync.txt	                        (rev 0)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/doc/RFC/rfc6578-WebDAV Sync.txt	2012-05-02 18:54:12 UTC (rev 9221)
@@ -0,0 +1,1627 @@
+
+
+
+
+
+
+Internet Engineering Task Force (IETF)                          C. Daboo
+Request for Comments: 6578                                    Apple Inc.
+Category: Standards Track                                    A. Quillaud
+ISSN: 2070-1721                                                   Oracle
+                                                              March 2012
+
+
+                       Collection Synchronization
+         for Web Distributed Authoring and Versioning (WebDAV)
+
+Abstract
+
+   This specification defines an extension to Web Distributed Authoring
+   and Versioning (WebDAV) that allows efficient synchronization of the
+   contents of a WebDAV collection.
+
+Status of This Memo
+
+   This is an Internet Standards Track document.
+
+   This document is a product of the Internet Engineering Task Force
+   (IETF).  It represents the consensus of the IETF community.  It has
+   received public review and has been approved for publication by the
+   Internet Engineering Steering Group (IESG).  Further information on
+   Internet Standards is available in Section 2 of RFC 5741.
+
+   Information about the current status of this document, any errata,
+   and how to provide feedback on it may be obtained at
+   http://www.rfc-editor.org/info/rfc6578.
+
+Daboo & Quillaud             Standards Track                    [Page 1]
+
+RFC 6578                       WebDAV Sync                    March 2012
+
+
+Copyright Notice
+
+   Copyright (c) 2012 IETF Trust and the persons identified as the
+   document authors.  All rights reserved.
+
+   This document is subject to BCP 78 and the IETF Trust's Legal
+   Provisions Relating to IETF Documents
+   (http://trustee.ietf.org/license-info) in effect on the date of
+   publication of this document.  Please review these documents
+   carefully, as they describe your rights and restrictions with respect
+   to this document.  Code Components extracted from this document must
+   include Simplified BSD License text as described in Section 4.e of
+   the Trust Legal Provisions and are provided without warranty as
+   described in the Simplified BSD License.
+
+   This document may contain material from IETF Documents or IETF
+   Contributions published or made publicly available before November
+   10, 2008.  The person(s) controlling the copyright in some of this
+   material may not have granted the IETF Trust the right to allow
+   modifications of such material outside the IETF Standards Process.
+   Without obtaining an adequate license from the person(s) controlling
+   the copyright in such materials, this document may not be modified
+   outside the IETF Standards Process, and derivative works of it may
+   not be created outside the IETF Standards Process, except to format
+   it for publication as an RFC or to translate it into languages other
+   than English.
+
+
+Table of Contents
+
+   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  4
+   2.  Conventions Used in This Document  . . . . . . . . . . . . . .  4
+   3.  WebDAV Synchronization . . . . . . . . . . . . . . . . . . . .  5
+     3.1.  Overview . . . . . . . . . . . . . . . . . . . . . . . . .  5
+     3.2.  DAV:sync-collection Report . . . . . . . . . . . . . . . .  6
+     3.3.  Depth Behavior . . . . . . . . . . . . . . . . . . . . . .  8
+     3.4.  Types of Changes Reported on Initial Synchronization . . .  9
+     3.5.  Types of Changes Reported on Subsequent
+           Synchronizations . . . . . . . . . . . . . . . . . . . . . 10
+       3.5.1.  Changed Member . . . . . . . . . . . . . . . . . . . . 10
+       3.5.2.  Removed Member . . . . . . . . . . . . . . . . . . . . 10
+     3.6.  Truncation of Results  . . . . . . . . . . . . . . . . . . 11
+     3.7.  Limiting Results . . . . . . . . . . . . . . . . . . . . . 12
+     3.8.  Example: Initial DAV:sync-collection Report  . . . . . . . 12
+     3.9.  Example: DAV:sync-collection Report with Token . . . . . . 14
+     3.10. Example: Initial DAV:sync-collection Report with
+           Truncation . . . . . . . . . . . . . . . . . . . . . . . . 16
+     3.11. Example: Initial DAV:sync-collection Report with Limit . . 17
+     3.12. Example: DAV:sync-collection Report with Unsupported
+           Limit  . . . . . . . . . . . . . . . . . . . . . . . . . . 18
+     3.13. Example: DAV:sync-level Set to Infinite, Initial
+           DAV:sync-collection Report . . . . . . . . . . . . . . . . 19
+   4.  DAV:sync-token Property  . . . . . . . . . . . . . . . . . . . 22
+   5.  DAV:sync-token Use with If Header  . . . . . . . . . . . . . . 22
+     5.1.  Example: If Precondition with PUT  . . . . . . . . . . . . 22
+     5.2.  Example: If Precondition with MKCOL  . . . . . . . . . . . 23
+   6.  XML Element Definitions  . . . . . . . . . . . . . . . . . . . 24
+     6.1.  DAV:sync-collection XML Element  . . . . . . . . . . . . . 24
+     6.2.  DAV:sync-token XML Element . . . . . . . . . . . . . . . . 24
+     6.3.  DAV:sync-level XML Element . . . . . . . . . . . . . . . . 24
+     6.4.  DAV:multistatus XML Element  . . . . . . . . . . . . . . . 25
+   7.  Security Considerations  . . . . . . . . . . . . . . . . . . . 25
+   8.  Acknowledgments  . . . . . . . . . . . . . . . . . . . . . . . 25
+   9.  References . . . . . . . . . . . . . . . . . . . . . . . . . . 25
+     9.1.  Normative References . . . . . . . . . . . . . . . . . . . 25
+     9.2.  Informative References . . . . . . . . . . . . . . . . . . 26
+   Appendix A.  Backwards-Compatible Handling of Depth  . . . . . . . 27
+   Appendix B.  Example of a Client Synchronization Approach  . . . . 27
+
+
+1.  Introduction
+
+   WebDAV [RFC4918] defines the concept of 'collections', which are
+   hierarchical groupings of WebDAV resources on an HTTP [RFC2616]
+   server.  Collections can be of arbitrary size and depth (i.e.,
+   collections within collections).  WebDAV clients that cache resource
+   content need a way to synchronize that data with the server (i.e.,
+   detect what has changed and update their cache).  Currently, this can
+   be done using a WebDAV PROPFIND request on a collection to list all
+   members of a collection along with their DAV:getetag property values,
+   which allows the client to determine which were changed, added, or
+   deleted.  However, this does not scale well to large collections, as
+   the XML response to the PROPFIND request will grow with the
+   collection size.
+
+   This specification defines a new WebDAV report that results in the
+   server returning to the client only information about those member
+   URLs that were added or deleted, or whose mapped resources were
+   changed, since a previous execution of the report on the collection.
+
+   Additionally, a new property is added to collection resources that is
+   used to convey a "synchronization token" that is guaranteed to change
+   when the collection's member URLs or their mapped resources have
+   changed.
+
+2.  Conventions Used in This Document
+
+   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
+   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
+   document are to be interpreted as described in [RFC2119].
+
+   This document uses XML DTD fragments ([W3C.REC-xml-20081126], Section
+   3.2) as a purely notational convention.  WebDAV request and response
+   bodies cannot be validated by a DTD due to the specific extensibility
+   rules defined in Section 17 of [RFC4918] and due to the fact that all
+   XML elements defined by this specification use the XML namespace name
+   "DAV:".  In particular:
+
+   1.  Element names use the "DAV:" namespace.
+
+   2.  Element ordering is irrelevant unless explicitly stated
+       otherwise.
+
+   3.  Extension elements (elements not already defined as valid child
+       elements) may be added anywhere, except when explicitly stated
+       otherwise.
+
+   4.  Extension attributes (attributes not already defined as valid for
+       this element) may be added anywhere, except when explicitly
+       stated otherwise.
+
+   When an XML element type in the "DAV:" namespace is referenced in
+   this document outside of the context of an XML fragment, the string
+   "DAV:" will be prefixed to the element type.
+
+   This document inherits, and sometimes extends, DTD productions from
+   Section 14 of [RFC4918].
+
+3.  WebDAV Synchronization
+
+3.1.  Overview
+
+   One way to synchronize data between two entities is to use some form
+   of synchronization token.  The token defines the state of the data
+   being synchronized at a particular point in time.  It can then be
+   used to determine what has changed between one point in time and
+   another.
+
+   This specification defines a new WebDAV report that is used to enable
+   client-server collection synchronization based on such a token.
+
+   In order to synchronize the contents of a collection between a server
+   and client, the server provides the client with a synchronization
+   token each time the synchronization report is executed.  That token
+   represents the state of the data being synchronized at that point in
+   time.  The client can then present that same token back to the server
+   at some later time, and the server will return only those items that
+   are new, have changed, or were deleted since that token was
+   generated.  The server also returns a new token representing the new
+   state at the time the report was run.
+
+   Typically, the first time a client connects to the server it will
+   need to be informed of the entire state of the collection (i.e., a
+   full list of all member URLs that are currently in the collection).
+   That is done by the client sending an empty token value to the
+   server.  This indicates to the server that a full listing is
+   required.
+
+   As an alternative, the client might choose to do its first
+   synchronization using some other mechanism on the collection (e.g.,
+   some other form of batch resource information retrieval such as
+   PROPFIND, SEARCH [RFC5323], or specialized REPORTs such as those
+   defined in CalDAV [RFC4791] and CardDAV [RFC6352]) and ask for the
+   DAV:sync-token property to be returned.  This property (defined in
+   Section 4) contains the same token that can be used later to issue a
+   DAV:sync-collection report.
+
+   In some cases, a server might only wish to maintain a limited amount
+   of history about changes to a collection.  In that situation, it will
+   return an error to the client when the client presents a token that
+   is "out of date".  At that point, the client has to fall back to
+   synchronizing the entire collection by re-running the report request
+   using an empty token value.
+
+   Typically, a client will use the synchronization report to retrieve
+   the list of changes and will follow that with requests to retrieve
+   the content of changed resources.  It is possible that additional
+   changes to the collection could occur between the time of the
+   synchronization report and resource content retrieval, which could
+   result in an inconsistent view of the collection.  When clients use
+   this method of synchronization, they need to be aware that such
+   additional changes could occur and track them, e.g., by differences
+   between the ETag values returned in the synchronization report and
+   those returned when actually fetching resource content, by using
+   conditional requests as described in Section 5, or by repeating the
+   synchronization process until no changes are returned.
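The token exchange described above can be sketched with a small in-memory model.  This is purely illustrative: the class, URLs, and revision-counter token scheme are assumptions made for the sketch, and a real client would issue REPORT requests over HTTP rather than call the server directly.

```python
class SyncServer:
    """Toy server that tracks member changes and hands out sync tokens.

    Tokens here encode a revision counter in a base URI, one possible
    implementation choice; clients must treat the value as opaque.
    """

    def __init__(self):
        self.revision = 0
        self.changes = {}                  # member href -> last-change revision

    def modify(self, href):
        self.revision += 1
        self.changes[href] = self.revision

    def report(self, token=""):
        """Return (hrefs changed since `token`, new token).

        An empty token requests the full listing, as on an initial sync.
        """
        since = int(token.rsplit("/", 1)[1]) if token else 0
        changed = [h for h, rev in self.changes.items() if rev > since]
        return changed, "http://example.com/sync/%d" % self.revision


server = SyncServer()
server.modify("/home/user/test.doc")
server.modify("/home/user/vcard.vcf")

items, token = server.report()             # initial sync: empty token
server.modify("/home/user/test.doc")       # a later change...
changed, token2 = server.report(token)     # ...is the only thing reported
```

If the server had discarded the history behind `token`, the report would instead fail the DAV:valid-sync-token precondition, and the client would fall back to rerunning the report with an empty token.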
+
+3.2.  DAV:sync-collection Report
+
+   If the DAV:sync-collection report is implemented by a WebDAV server,
+   then the server MUST list the report in the
+   "DAV:supported-report-set" property on any collection that supports
+   synchronization.
+
+   To implement the behavior for this report, a server needs to keep
+   track of changes to any member URLs and their mapped resources in a
+   collection (as defined in Section 3 of [RFC4918]).  This includes
+   noting the addition of new member URLs, the changes to the mapped
+   resources of existing member URLs, and the removal of member URLs.
+   The server will track each change and provide a synchronization
+   "token" to the client that describes the state of the server at a
+   specific point in time.  This "token" is returned as part of the
+   response to the "sync-collection" report.  Clients include the last
+   token they got from the server in the next "sync-collection" report
+   that they execute, and the server provides the changes from the
+   previous state (represented by the token) to the current state
+   (represented by the new token returned).
+
+   The synchronization token itself MUST be treated as an "opaque"
+   string by the client, i.e., the actual string data has no specific
+   meaning or syntax.  However, the token MUST be a valid URI to allow
+   its use in an If precondition request header (see Section 5).  For
+   example, a simple implementation of such a token could be a numeric
+   counter that counts each change as it occurs and relates that change
+   to the specific object that changed.  The numeric value could be
+   appended to a "base" URI to form the valid sync-token.
+
+   Marshalling:
+
+      The request-URI MUST identify a collection.  The request body MUST
+      be a DAV:sync-collection XML element (see Section 6.1), which MUST
+      contain one DAV:sync-token XML element, one DAV:sync-level
+      element, and one DAV:prop XML element, and MAY contain a DAV:limit
+      XML element.
+
+      This report is only defined when the Depth header has value "0";
+      other values result in a 400 (Bad Request) error response.  Note
+      that [RFC3253], Section 3.6, states that if the Depth header is
+      not present, it defaults to a value of "0".
+
+      The response body for a successful request MUST be a
+      DAV:multistatus XML element, which MUST contain one DAV:sync-token
+      element in addition to one DAV:response element for each member
+      URL that was added, has had its mapped resource changed, or was
+      deleted since the last synchronization operation as specified by
+      the DAV:sync-token provided in the request.  A given member URL
+      MUST appear only once in the response.  In the case where multiple
+      member URLs of the request-URI are mapped to the same resource, if
+      the resource is changed, each member URL MUST be returned in the
+      response.
+
+      The content of each DAV:response element differs depending on how
+      the member was altered:
+
+         For members that have changed (i.e., are new or have had their
+         mapped resource modified), the DAV:response MUST contain at
+         least one DAV:propstat element and MUST NOT contain any
+         DAV:status element.
+
+         For members that have been removed, the DAV:response MUST
+         contain one DAV:status with a value set to '404 Not Found' and
+         MUST NOT contain any DAV:propstat element.
+
+         For members that are collections and are unable to support the
+         DAV:sync-collection report, the DAV:response MUST contain one
+         DAV:status with a value set to '403 Forbidden', a DAV:error
+         containing DAV:supported-report or DAV:sync-traversal-supported
+         (see Section 3.3 for which is appropriate) and MUST NOT contain
+         any DAV:propstat element.
+
+      The conditions under which each type of change can occur are
+      further described in Section 3.5.
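A request body that follows these marshalling rules can be assembled with the Python standard library.  The helper name and default property list below are illustrative, not part of the specification:

```python
import xml.etree.ElementTree as ET

def sync_collection_body(sync_token="", sync_level="1", props=("getetag",)):
    """Build a DAV:sync-collection request body (all elements in "DAV:")."""
    ET.register_namespace("D", "DAV:")
    root = ET.Element("{DAV:}sync-collection")
    ET.SubElement(root, "{DAV:}sync-token").text = sync_token
    ET.SubElement(root, "{DAV:}sync-level").text = sync_level
    prop = ET.SubElement(root, "{DAV:}prop")
    for name in props:                      # one empty element per property
        ET.SubElement(prop, "{DAV:}" + name)
    return ET.tostring(root, encoding="unicode")

body = sync_collection_body()               # empty token -> initial sync
```

The resulting document would be sent in a REPORT request with Depth set to "0" (or omitted, since "0" is the default).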
+
+   Preconditions:
+
+      (DAV:valid-sync-token): The DAV:sync-token element value MUST be a
+      valid token previously returned by the server for the collection
+      targeted by the request-URI.  Servers might need to invalidate
+      tokens previously returned to clients.  Doing so will cause the
+      clients to fall back to doing full synchronization using the
+      report, though that will not require clients to download resources
+      that are already cached and have not changed.  Even so, servers
+      MUST limit themselves to invalidating tokens only when absolutely
+      necessary.  Specific reasons include:
+
+      *  Servers might be unable to maintain all of the change data for
+         a collection due to storage or performance reasons, e.g.,
+         servers might only be able to maintain up to 3 weeks worth of
+         changes to a collection, or only up to 10,000 total changes, or
+         not wish to maintain changes for a deleted collection.
+
+      *  Change to server implementation: servers might be upgraded to a
+         new implementation that tracks the history in a different
+         manner, and thus previous synchronization history is no longer
+         valid.
+
+   Postconditions:
+
+      (DAV:number-of-matches-within-limits): The number of changes
+      reported in the response must fall within the client-specified
+      limit.  This condition might be triggered if a client requests a
+      limit on the number of responses (as per Section 3.7), but the
+      server is unable to truncate the result set at or below that
+      limit.
+
+3.3.  Depth Behavior
+
+   Servers MUST support only Depth:0 behavior with the
+   DAV:sync-collection report, i.e., the report targets only the
+   collection being synchronized in a single request.  However, clients
+   do need to "scope" the synchronization to different levels within
+   that collection -- specifically, immediate children (level "1") and
+   all children at any depth (level "infinite").  To specify which level
+   to use, clients MUST include a DAV:sync-level XML element in the
+   request.
+
+   o  When the client specifies the DAV:sync-level XML element with a
+      value of "1", only appropriate internal member URLs (immediate
+      children) of the collection specified as the request-URI are
+      reported.
+
+   o  When the client specifies the DAV:sync-level XML element with a
+      value of "infinite", all appropriate member URLs of the collection
+      specified as the request-URI are reported, provided child
+      collections themselves also support the DAV:sync-collection
+      report.
+
+   o  DAV:sync-token values returned by the server are not specific to
+      the value of the DAV:sync-level XML element used in the request.
+      As such, clients MAY use a DAV:sync-token value from a request
+      with one DAV:sync-level XML element value for a similar request
+      with a different DAV:sync-level XML element value; however, the
+      utility of this is limited.
+
+   Note that when a server supports a DAV:sync-level XML element with a
+   value of "infinite", it might not be possible to synchronize some
+   child collections within the collection targeted by the report.  When
+   this occurs, the server MUST include a DAV:response element for the
+   child collection with status 403 (Forbidden).  The 403 response MUST
+   be sent once, when the collection is first reported to the client.
+   In addition, the server MUST include a DAV:error element in the
+   DAV:response element, indicating one of two possible causes for this:
+
+      The DAV:sync-collection report is not supported at all on the
+      child collection.  The DAV:error element MUST contain the
+      DAV:supported-report element.
+
+      The server is unwilling to report results for the child collection
+      when a DAV:sync-collection report with the DAV:sync-level XML
+      element set to "infinite" is executed on a parent resource.  This
+      might happen when, for example, the synchronization state of the
+      collection resource is controlled by another subsystem.  In such
+      cases clients can perform the DAV:sync-collection report directly
+      on the child collection instead.  The DAV:error element MUST
+      contain the DAV:sync-traversal-supported element.
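When parsing the multi-status response, a client can tell the two causes apart by inspecting the child of the DAV:error element.  The following sketch assumes a single DAV:response snippet; the helper name and returned labels are illustrative:

```python
import xml.etree.ElementTree as ET

DAV = "{DAV:}"

def sync_failure_cause(response_xml):
    """Classify a 403 on a child collection by its DAV:error child."""
    elem = ET.fromstring(response_xml)
    error = elem.find(DAV + "error")
    if error is None:
        return None
    if error.find(DAV + "supported-report") is not None:
        return "report-not-supported"       # child cannot be synchronized
    if error.find(DAV + "sync-traversal-supported") is not None:
        return "run-report-directly"        # sync the child collection itself
    return "unknown"

response = (
    '<D:response xmlns:D="DAV:">'
    '<D:href>/home/user/shared/</D:href>'
    '<D:status>HTTP/1.1 403 Forbidden</D:status>'
    '<D:error><D:sync-traversal-supported/></D:error>'
    '</D:response>'
)
cause = sync_failure_cause(response)
```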
+
+3.4.  Types of Changes Reported on Initial Synchronization
+
+   When the DAV:sync-collection request contains an empty DAV:sync-token
+   element, the server MUST return all member URLs of the collection
+   (taking account of the DAV:sync-level XML element value as per
+   Section 3.3, and optional truncation of the result set as per
+   Section 3.6) and it MUST NOT return any removed member URLs.  All
+   types of member (collection or non-collection) MUST be reported.
+
+3.5.  Types of Changes Reported on Subsequent Synchronizations
+
+   When the DAV:sync-collection request contains a valid value for the
+   DAV:sync-token element, two types of member URL state changes can be
+   returned (changed or removed).  This section defines what triggers
+   each of these to be returned.  It also clarifies the case where a
+   member URL might have undergone multiple changes between two
+   synchronization report requests.  In all cases, the DAV:sync-level
+   XML element value (as per Section 3.3) and optional truncation of the
+   result set (as per Section 3.6) are taken into account by the server.
+
+3.5.1.  Changed Member
+
+   A member URL MUST be reported as changed if it has been newly mapped
+   as a member of the target collection since the request sync-token was
+   generated (e.g., when a new resource has been created as a child of
+   the collection).  For example, this includes member URLs that have
+   been newly mapped as the result of a COPY, MOVE, BIND [RFC5842], or
+   REBIND [RFC5842] request.  All types of member URL (collection or
+   non-collection) MUST be reported.
+
+   In the case where a mapping between a member URL and the target
+   collection was removed, then a new mapping with the same URI was
+   created, the member URL MUST be reported as changed and MUST NOT be
+   reported as removed.
+
+   A member URL MUST be reported as changed if its mapped resource's
+   entity tag value (defined in Section 3.11 of [RFC2616]) has changed
+   since the request sync-token was generated.
+
+   A member URL MAY be reported as changed if the user issuing the
+   request was granted access to this member URL, due to access control
+   changes.
+
+   Collection member URLs MUST be returned as changed if they are mapped
+   to an underlying resource (i.e., entity body) and if the entity tag
+   associated with that resource changes.  There is no guarantee that
+   changes to members of a collection will result in a change in any
+   entity tag of that collection, so clients cannot rely on a series of
+   reports using the DAV:sync-level XML element value set to "1" at
+   multiple levels to track all changes within a collection.  Instead, a
+   DAV:sync-level XML element with a value of "infinite" has to be used.
+
+3.5.2.  Removed Member
+
+   A member MUST be reported as removed if its mapping under the target
+   collection has been removed since the request sync-token was
+   generated, and it has not been remapped since it was removed.  For
+   example, this includes members that have been unmapped as the result
+   of a MOVE, UNBIND [RFC5842], or REBIND [RFC5842] operation.  This
+   also includes collection members that have been removed, including
+   ones that themselves do not support the DAV:sync-collection report.
+
+   If a member was added (and its mapped resource possibly modified),
+   then removed between two synchronization report requests, it MUST be
+   reported as removed.  This ensures that a client that adds a member
+   is informed of the removal of the member, if the removal occurs
+   before the client has had a chance to execute a synchronization
+   report.
+
+   A member MAY be reported as removed if the user issuing the request
+   no longer has access to this member, due to access control changes.
+
+   For a report with the DAV:sync-level XML element value set to
+   "infinite", where a collection is removed, the server MUST NOT report
+   the removal of any members of the removed collection.  Clients MUST
+   assume that if a collection is reported as being removed, then all
+   members of that collection have also been removed.
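The reporting rules in Sections 3.5.1 and 3.5.2 amount to comparing the member-URL-to-entity-tag mapping at the two tokens.  A rough sketch follows; the function and the snapshot dictionaries are assumptions for illustration (a real server would consult its change log rather than full snapshots):

```python
def classify(old, new):
    """Diff two snapshots mapping member URL -> entity tag.

    Newly mapped members count as changed (Section 3.5.1); members whose
    mapping disappeared count as removed (Section 3.5.2).  A member that
    was removed and then remapped appears in both snapshots with a new
    entity tag, so it is reported as changed, never as removed.
    """
    changed = sorted(h for h, etag in new.items() if old.get(h) != etag)
    removed = sorted(h for h in old if h not in new)
    return changed, removed

old = {"/a.ics": '"1"', "/b.ics": '"1"'}
new = {"/a.ics": '"2"', "/c.ics": '"1"'}   # /a changed, /b removed, /c new
changed, removed = classify(old, new)
```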
+
+3.6.  Truncation of Results
+
+   A server MAY limit the number of member URLs in a response, for
+   example, to limit the amount of work expended in processing a
+   request, or as the result of an explicit limit set by the client.  If
+   the result set is truncated, the response MUST use status code 207
+   (Multi-Status), return a DAV:multistatus response body, and indicate
+   a status of 507 (Insufficient Storage) for the request-URI.  That
+   DAV:response element SHOULD include a DAV:error element with the
+   DAV:number-of-matches-within-limits precondition, as defined in
+   [RFC3744] (Section 9.2).  DAV:response elements for all the changes
+   being reported are also included.
+
+   When truncation occurs, the DAV:sync-token value returned in the
+   response MUST represent the correct state for the partial set of
+   changes returned.  That allows the client to use the returned
+   DAV:sync-token to fetch the next set of changes.  In this way, the
+   client can effectively "page" through the entire set of changes in a
+   consistent manner.
+
+   Clients MUST handle the 507 status on the request-URI in the response
+   to the report.
+
+   For example, consider a server that records changes using a strictly
+   increasing integer to represent a "revision number" and uses that
+   quantity as the DAV:sync-token value (appropriately encoded as a
+   URI).  Assume the last DAV:sync-token used by the client was
+   "http://example.com/sync/10", and since then 15 additional changes to
+   different resources have occurred.  If the client executes a
+   DAV:sync-collection request with a DAV:sync-token of
+   "http://example.com/sync/10", without a limit, the server would
+   return 15 DAV:response elements and a DAV:sync-token with value
+   "http://example.com/sync/25".  But if the server chooses to limit
+   responses to at most 10 changes, then it would return only 10
+   DAV:response elements and a DAV:sync-token with value
+   "http://example.com/sync/20", together with an additional
+   DAV:response element for the request-URI with a status code of 507.
+   Subsequently, the client can reissue the request with the
+   DAV:sync-token value returned from the server and fetch the remaining
+   5 changes.
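The paging behavior in that example can be sketched as follows.  The function signature and data layout are illustrative; the key property is that the returned token covers exactly the partial set of changes delivered:

```python
def report(changes, since, limit):
    """Return up to `limit` changed hrefs after revision `since`, the
    revision the returned page covers, and whether truncation occurred."""
    pending = sorted((rev, href) for href, rev in changes.items() if rev > since)
    page = pending[:limit]
    truncated = len(pending) > limit
    covered = page[-1][0] if page else since   # token for the partial set
    return [href for _, href in page], covered, truncated

# 15 changes made after revision 10, served at most 10 per report:
changes = {"/item%d" % rev: rev for rev in range(11, 26)}
first, covered, truncated = report(changes, since=10, limit=10)
rest, covered2, truncated2 = report(changes, since=covered, limit=10)
```

The first call mirrors the truncated response above (10 changes, token advanced to revision 20, truncation flagged); the second call fetches the remaining 5 changes with no further truncation.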
+
+3.7.  Limiting Results
+
+   A client can limit the number of results returned by the server
+   through use of the DAV:limit element ([RFC5323], Section 5.17) in the
+   request body.  This is useful when clients have limited space or
+   bandwidth for the results.  If a server is unable to truncate the
+   result at or below the requested number, then it MUST fail the
+   request with a DAV:number-of-matches-within-limits postcondition
+   error.  When the results can be correctly limited by the server, the
+   server MUST follow the rules above for indicating a result set
+   truncation to the client.
+
+3.8.  Example: Initial DAV:sync-collection Report
+
+   In this example, the client is making its first synchronization
+   request to the server, so the DAV:sync-token element in the request
+   is empty.  It also asks for the DAV:getetag property and for a
+   proprietary property.  The server responds with the items currently
+   in the targeted collection.  The current synchronization token is
+   also returned.
+
+   >> Request <<
+
+
+   REPORT /home/cyrusdaboo/ HTTP/1.1
+   Host: webdav.example.com
+   Depth: 0
+   Content-Type: text/xml; charset="utf-8"
+   Content-Length: xxxx
+
+   <?xml version="1.0" encoding="utf-8" ?>
+   <D:sync-collection xmlns:D="DAV:">
+     <D:sync-token/>
+     <D:sync-level>1</D:sync-level>
+     <D:prop xmlns:R="urn:ns.example.com:boxschema">
+       <D:getetag/>
+       <R:bigbox/>
+     </D:prop>
+   </D:sync-collection>
+
+
+   >> Response <<
+
+
+   HTTP/1.1 207 Multi-Status
+   Content-Type: text/xml; charset="utf-8"
+   Content-Length: xxxx
+
+   <?xml version="1.0" encoding="utf-8" ?>
+   <D:multistatus xmlns:D="DAV:">
+     <D:response>
+       <D:href
+   >http://webdav.example.com/home/cyrusdaboo/test.doc</D:href>
+       <D:propstat>
+         <D:prop>
+           <D:getetag>"00001-abcd1"</D:getetag>
+           <R:bigbox xmlns:R="urn:ns.example.com:boxschema">
+             <R:BoxType>Box type A</R:BoxType>
+           </R:bigbox>
+         </D:prop>
+         <D:status>HTTP/1.1 200 OK</D:status>
+       </D:propstat>
+     </D:response>
+     <D:response>
+       <D:href
+   >http://webdav.example.com/home/cyrusdaboo/vcard.vcf</D:href>
+       <D:propstat>
+         <D:prop>
+           <D:getetag>"00002-abcd1"</D:getetag>
+         </D:prop>
+         <D:status>HTTP/1.1 200 OK</D:status>
+       </D:propstat>
+       <D:propstat>
+         <D:prop>
+           <R:bigbox xmlns:R="urn:ns.example.com:boxschema"/>
+         </D:prop>
+         <D:status>HTTP/1.1 404 Not Found</D:status>
+       </D:propstat>
+     </D:response>
+     <D:response>
+       <D:href
+   >http://webdav.example.com/home/cyrusdaboo/calendar.ics</D:href>
+
+
+
+
+
+       <D:propstat>
+         <D:prop>
+           <D:getetag>"00003-abcd1"</D:getetag>
+         </D:prop>
+         <D:status>HTTP/1.1 200 OK</D:status>
+       </D:propstat>
+       <D:propstat>
+         <D:prop>
+           <R:bigbox xmlns:R="urn:ns.example.com:boxschema"/>
+         </D:prop>
+         <D:status>HTTP/1.1 404 Not Found</D:status>
+       </D:propstat>
+     </D:response>
+     <D:sync-token>http://example.com/ns/sync/1234</D:sync-token>
+   </D:multistatus>
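A multistatus body like the one above could be reduced to (href, etag) pairs plus the new token with a sketch along these lines. This helper is hypothetical and deliberately minimal; a real client must also inspect the per-propstat status codes (e.g. the 404 on the proprietary property above) rather than only the etag text:

```python
import xml.etree.ElementTree as ET

def parse_multistatus(body):
    """Extract (href, etag-or-None) pairs and the DAV:sync-token from a
    sync-report multistatus response (non-normative sketch)."""
    ns = {"D": "DAV:"}
    root = ET.fromstring(body)
    items = []
    for resp in root.findall("D:response", ns):
        href = resp.findtext("D:href", namespaces=ns)
        # First getetag found anywhere under this response, if any.
        etag = resp.findtext(".//D:getetag", namespaces=ns)
        items.append((href, etag))
    token = root.findtext("D:sync-token", namespaces=ns)
    return items, token
```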
+
+3.9.  Example: DAV:sync-collection Report with Token
+
+   In this example, the client is making a synchronization request to
+   the server and is using the DAV:sync-token element returned from the
+   last report it ran on this collection.  The server responds, listing
+   the items that have been added, changed, or removed.  The (new)
+   current synchronization token is also returned.
+
+   >> Request <<
+
+
+   REPORT /home/cyrusdaboo/ HTTP/1.1
+   Host: webdav.example.com
+   Content-Type: text/xml; charset="utf-8"
+   Content-Length: xxxx
+
+   <?xml version="1.0" encoding="utf-8" ?>
+   <D:sync-collection xmlns:D="DAV:">
+     <D:sync-token>http://example.com/ns/sync/1234</D:sync-token>
+     <D:sync-level>1</D:sync-level>
+     <D:prop xmlns:R="urn:ns.example.com:boxschema">
+       <D:getetag/>
+       <R:bigbox/>
+     </D:prop>
+   </D:sync-collection>
+
+
+
+
+
+
+
+
+
+
+
+
+   >> Response <<
+
+
+   HTTP/1.1 207 Multi-Status
+   Content-Type: text/xml; charset="utf-8"
+   Content-Length: xxxx
+
+   <?xml version="1.0" encoding="utf-8" ?>
+   <D:multistatus xmlns:D="DAV:">
+     <D:response>
+       <D:href
+   >http://webdav.example.com/home/cyrusdaboo/file.xml</D:href>
+       <D:propstat>
+         <D:prop>
+           <D:getetag>"00004-abcd1"</D:getetag>
+         </D:prop>
+         <D:status>HTTP/1.1 200 OK</D:status>
+       </D:propstat>
+       <D:propstat>
+         <D:prop>
+           <R:bigbox xmlns:R="urn:ns.example.com:boxschema"/>
+         </D:prop>
+         <D:status>HTTP/1.1 404 Not Found</D:status>
+       </D:propstat>
+     </D:response>
+     <D:response>
+       <D:href
+   >http://webdav.example.com/home/cyrusdaboo/vcard.vcf</D:href>
+       <D:propstat>
+         <D:prop>
+           <D:getetag>"00002-abcd2"</D:getetag>
+         </D:prop>
+         <D:status>HTTP/1.1 200 OK</D:status>
+       </D:propstat>
+       <D:propstat>
+         <D:prop>
+           <R:bigbox xmlns:R="urn:ns.example.com:boxschema"/>
+         </D:prop>
+         <D:status>HTTP/1.1 404 Not Found</D:status>
+       </D:propstat>
+     </D:response>
+     <D:response>
+       <D:href
+   >http://webdav.example.com/home/cyrusdaboo/test.doc</D:href>
+       <D:status>HTTP/1.1 404 Not Found</D:status>
+     </D:response>
+     <D:sync-token>http://example.com/ns/sync/1238</D:sync-token>
+   </D:multistatus>
+
+
+
+
+
+3.10.  Example: Initial DAV:sync-collection Report with Truncation
+
+   In this example, the client is making its first synchronization
+   request to the server, so the DAV:sync-token element in the request
+   is empty.  It also asks for the DAV:getetag property.  The server
+   responds with the items currently in the targeted collection but
+   truncated at two items.  The synchronization token for the truncated
+   result set is returned.
+
+   >> Request <<
+
+
+   REPORT /home/cyrusdaboo/ HTTP/1.1
+   Host: webdav.example.com
+   Depth: 0
+   Content-Type: text/xml; charset="utf-8"
+   Content-Length: xxxx
+
+   <?xml version="1.0" encoding="utf-8" ?>
+   <D:sync-collection xmlns:D="DAV:">
+     <D:sync-token/>
+     <D:sync-level>1</D:sync-level>
+     <D:prop>
+       <D:getetag/>
+     </D:prop>
+   </D:sync-collection>
+
+
+   >> Response <<
+
+
+   HTTP/1.1 207 Multi-Status
+   Content-Type: text/xml; charset="utf-8"
+   Content-Length: xxxx
+
+   <?xml version="1.0" encoding="utf-8" ?>
+   <D:multistatus xmlns:D="DAV:">
+     <D:response>
+       <D:href
+   >http://webdav.example.com/home/cyrusdaboo/test.doc</D:href>
+       <D:propstat>
+         <D:prop>
+           <D:getetag>"00001-abcd1"</D:getetag>
+         </D:prop>
+         <D:status>HTTP/1.1 200 OK</D:status>
+       </D:propstat>
+     </D:response>
+     <D:response>
+
+
+
+
+
+       <D:href
+   >http://webdav.example.com/home/cyrusdaboo/vcard.vcf</D:href>
+       <D:propstat>
+         <D:prop>
+           <D:getetag>"00002-abcd1"</D:getetag>
+         </D:prop>
+         <D:status>HTTP/1.1 200 OK</D:status>
+       </D:propstat>
+     </D:response>
+     <D:response>
+       <D:href
+   >http://webdav.example.com/home/cyrusdaboo/</D:href>
+       <D:status>HTTP/1.1 507 Insufficient Storage</D:status>
+       <D:error><D:number-of-matches-within-limits/></D:error>
+     </D:response>
+     <D:sync-token>http://example.com/ns/sync/1233</D:sync-token>
+   </D:multistatus>
+
+3.11.  Example: Initial DAV:sync-collection Report with Limit
+
+   In this example, the client is making its first synchronization
+   request to the server, so the DAV:sync-token element in the request
+   is empty.  It requests a limit of 1 for the responses returned by the
+   server.  It also asks for the DAV:getetag property.  The server
+   responds with the items currently in the targeted collection, but
+   truncated at one item.  The synchronization token for the truncated
+   result set is returned.
+
+   >> Request <<
+
+
+   REPORT /home/cyrusdaboo/ HTTP/1.1
+   Host: webdav.example.com
+   Depth: 0
+   Content-Type: text/xml; charset="utf-8"
+   Content-Length: xxxx
+
+   <?xml version="1.0" encoding="utf-8" ?>
+   <D:sync-collection xmlns:D="DAV:">
+     <D:sync-token/>
+     <D:sync-level>1</D:sync-level>
+     <D:limit>
+       <D:nresults>1</D:nresults>
+     </D:limit>
+     <D:prop>
+       <D:getetag/>
+     </D:prop>
+   </D:sync-collection>
+
+
+
+
+
+   >> Response <<
+
+
+   HTTP/1.1 207 Multi-Status
+   Content-Type: text/xml; charset="utf-8"
+   Content-Length: xxxx
+
+   <?xml version="1.0" encoding="utf-8" ?>
+   <D:multistatus xmlns:D="DAV:">
+     <D:response>
+       <D:href
+   >http://webdav.example.com/home/cyrusdaboo/test.doc</D:href>
+       <D:propstat>
+         <D:prop>
+           <D:getetag>"00001-abcd1"</D:getetag>
+         </D:prop>
+         <D:status>HTTP/1.1 200 OK</D:status>
+       </D:propstat>
+     </D:response>
+     <D:response>
+       <D:href
+   >http://webdav.example.com/home/cyrusdaboo/</D:href>
+       <D:status>HTTP/1.1 507 Insufficient Storage</D:status>
+       <D:error><D:number-of-matches-within-limits/></D:error>
+     </D:response>
+     <D:sync-token>http://example.com/ns/sync/1232</D:sync-token>
+   </D:multistatus>
+
+3.12.  Example: DAV:sync-collection Report with Unsupported Limit
+
+   In this example, the client is making a synchronization request to
+   the server with a valid DAV:sync-token element value.  It requests a
+   limit of 100 for the responses returned by the server.  It also asks
+   for the DAV:getetag property.  The server is unable to limit the
+   results to the maximum specified by the client, so it responds with a
+   507 status code and appropriate postcondition error code.
+
+   >> Request <<
+
+
+   REPORT /home/cyrusdaboo/ HTTP/1.1
+   Host: webdav.example.com
+   Depth: 0
+   Content-Type: text/xml; charset="utf-8"
+   Content-Length: xxxx
+
+
+
+
+
+
+
+
+   <?xml version="1.0" encoding="utf-8" ?>
+   <D:sync-collection xmlns:D="DAV:">
+     <D:sync-token>http://example.com/ns/sync/1232</D:sync-token>
+     <D:sync-level>1</D:sync-level>
+     <D:limit>
+       <D:nresults>100</D:nresults>
+     </D:limit>
+     <D:prop>
+       <D:getetag/>
+     </D:prop>
+   </D:sync-collection>
+
+
+   >> Response <<
+
+
+   HTTP/1.1 507 Insufficient Storage
+   Content-Type: text/xml; charset="utf-8"
+   Content-Length: xxxx
+
+   <?xml version="1.0" encoding="utf-8" ?>
+   <D:error xmlns:D="DAV:">
+     <D:number-of-matches-within-limits/>
+   </D:error>
+
+3.13.  Example: DAV:sync-level Set to Infinite, Initial
+       DAV:sync-collection Report
+
+   In this example, the client is making its first synchronization
+   request to the server, so the DAV:sync-token element in the request
+   is empty, and it is using DAV:sync-level set to "infinite".  It also
+   asks for the DAV:getetag property and for a proprietary property.
+   The server responds with the items currently in the targeted
+   collection.  The current synchronization token is also returned.
+
+   The collection /home/cyrusdaboo/collection1/ exists and has one child
+   resource that is also reported.  The collection
+   /home/cyrusdaboo/collection2/ exists but has no child resources.  The
+   collection
+   /home/cyrusdaboo/shared/ is returned with a 403 status indicating
+   that a collection exists, but it is unable to report on changes
+   within it in the scope of the current DAV:sync-level "infinite"
+   report.  Instead, the client can try a DAV:sync-collection report
+   directly on the collection URI.
+
+
+
+
+
+
+
+
+
+
+   >> Request <<
+
+
+   REPORT /home/cyrusdaboo/ HTTP/1.1
+   Host: webdav.example.com
+   Depth: 0
+   Content-Type: text/xml; charset="utf-8"
+   Content-Length: xxxx
+
+   <?xml version="1.0" encoding="utf-8" ?>
+   <D:sync-collection xmlns:D="DAV:">
+     <D:sync-token/>
+     <D:sync-level>infinite</D:sync-level>
+     <D:prop xmlns:R="urn:ns.example.com:boxschema">
+       <D:getetag/>
+       <R:bigbox/>
+     </D:prop>
+   </D:sync-collection>
+
+
+   >> Response <<
+
+
+   HTTP/1.1 207 Multi-Status
+   Content-Type: text/xml; charset="utf-8"
+   Content-Length: xxxx
+
+   <?xml version="1.0" encoding="utf-8" ?>
+   <D:multistatus xmlns:D="DAV:">
+     <D:response>
+       <D:href>/home/cyrusdaboo/collection1/</D:href>
+       <D:propstat>
+         <D:prop>
+           <D:getetag>"00001-abcd1"</D:getetag>
+           <R:bigbox xmlns:R="urn:ns.example.com:boxschema">
+             <R:BoxType>Box type A</R:BoxType>
+           </R:bigbox>
+         </D:prop>
+         <D:status>HTTP/1.1 200 OK</D:status>
+       </D:propstat>
+     </D:response>
+     <D:response>
+       <D:href>/home/cyrusdaboo/collection1/test.doc</D:href>
+       <D:propstat>
+         <D:prop>
+           <D:getetag>"00001-abcd1"</D:getetag>
+           <R:bigbox xmlns:R="urn:ns.example.com:boxschema">
+             <R:BoxType>Box type A</R:BoxType>
+
+
+
+
+
+           </R:bigbox>
+         </D:prop>
+         <D:status>HTTP/1.1 200 OK</D:status>
+       </D:propstat>
+     </D:response>
+     <D:response>
+       <D:href>/home/cyrusdaboo/collection2/</D:href>
+       <D:propstat>
+         <D:prop>
+           <D:getetag/>
+         </D:prop>
+         <D:status>HTTP/1.1 404 Not Found</D:status>
+       </D:propstat>
+       <D:propstat>
+         <D:prop>
+           <R:bigbox xmlns:R="urn:ns.example.com:boxschema"/>
+         </D:prop>
+         <D:status>HTTP/1.1 404 Not Found</D:status>
+       </D:propstat>
+     </D:response>
+     <D:response>
+       <D:href>/home/cyrusdaboo/calendar.ics</D:href>
+       <D:propstat>
+         <D:prop>
+           <D:getetag>"00003-abcd1"</D:getetag>
+         </D:prop>
+         <D:status>HTTP/1.1 200 OK</D:status>
+       </D:propstat>
+       <D:propstat>
+         <D:prop>
+           <R:bigbox xmlns:R="urn:ns.example.com:boxschema"/>
+         </D:prop>
+         <D:status>HTTP/1.1 404 Not Found</D:status>
+       </D:propstat>
+     </D:response>
+     <D:response>
+       <D:href>/home/cyrusdaboo/shared/</D:href>
+       <D:status>HTTP/1.1 403 Forbidden</D:status>
+       <D:error><D:sync-traversal-supported/></D:error>
+     </D:response>
+     <D:sync-token>http://example.com/ns/sync/1234</D:sync-token>
+   </D:multistatus>
+
+
+
+
+
+
+
+
+
+
+
+4.  DAV:sync-token Property
+
+   Name:  sync-token
+
+   Namespace:  DAV:
+
+   Purpose:  Contains the value of the synchronization token as it would
+      be returned by a DAV:sync-collection report.
+
+   Value:  Any valid URI.
+
+   Protected:  MUST be protected because this value is created and
+      controlled by the server.
+
+   COPY/MOVE behavior:  This property value is dependent on the final
+      state of the destination resource, not the value of the property
+      on the source resource.
+
+   Description:  The DAV:sync-token property MUST be defined on all
+      resources that support the DAV:sync-collection report.  It
+      contains the value of the synchronization token as it would be
+      returned by a DAV:sync-collection report on that resource at the
+      same point in time.  It SHOULD NOT be returned by a PROPFIND
+      DAV:allprop request (as defined in Section 14.2 of [RFC4918]).
+
+   Definition:
+
+   <!ELEMENT sync-token (#PCDATA)>
+
+   <!-- Text MUST be a valid URI -->
+
+5.  DAV:sync-token Use with If Header
+
+   WebDAV provides an If precondition header that allows for "state
+   tokens" to be used as preconditions on HTTP requests (as defined in
+   Section 10.4 of [RFC4918]).  This specification allows the
+   DAV:sync-token value to be used as one such token in an If header.
+   By using this, clients can ensure requests only complete when there
+   have been no changes to the content of a collection, by virtue of an
+   unchanged DAV:sync-token value.  Servers MUST support use of
+   DAV:sync-token values in If request headers.
+
+5.1.  Example: If Precondition with PUT
+
+   In this example, the client has already used the DAV:sync-collection
+   report to synchronize the collection /home/cyrusdaboo/collection/.
+   Now it wants to add a new resource to the collection, but only if
+   there have been no other changes since the last synchronization.
+
+
+
+
+
+   Note that because the DAV:sync-token is defined on the collection and
+   not on the resource targeted by the request, the If header value
+   needs to use the "Resource_Tag" construct for the header syntax to
+   correctly identify that the supplied state token refers to the
+   collection resource.
+
+   >> Request <<
+
+
+   PUT /home/cyrusdaboo/collection/newresource.txt HTTP/1.1
+   Host: webdav.example.com
+   If: </home/cyrusdaboo/collection/>
+     (<http://example.com/ns/sync/12345>)
+   Content-Type: text/plain; charset="utf-8"
+   Content-Length: xxxx
+
+   Some content here...
+
+
+   >> Response <<
+
+
+   HTTP/1.1 201 Created
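The Resource_Tag construct used in the If header above can be composed mechanically. A minimal sketch (the helper name is hypothetical): the tagged list ties the state token to the collection rather than to the PUT target, per RFC 4918, Section 10.4:

```python
def if_header(collection_url, sync_token):
    """Format a WebDAV If header value using the Resource_Tag construct,
    so the state token is evaluated against the collection resource
    rather than the request-URI (non-normative sketch)."""
    return "<%s> (<%s>)" % (collection_url, sync_token)
```

For the PUT above, `if_header("/home/cyrusdaboo/collection/", "http://example.com/ns/sync/12345")` yields the value shown in the request.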
+
+5.2.  Example: If Precondition with MKCOL
+
+   In this example, the client has already used the DAV:sync-collection
+   report to synchronize the collection /home/cyrusdaboo/collection/.
+   Now, it wants to add a new collection to the collection, but only if
+   there have been no other changes since the last synchronization.
+   Note that because the DAV:sync-token is defined on the collection and
+   not on the resource targeted by the request, the If header value
+   needs to use the "Resource_Tag" construct for the header syntax to
+   correctly identify that the supplied state token refers to the
+   collection resource.  In this case, the request fails as another
+   change has occurred to the collection corresponding to the supplied
+   DAV:sync-token.
+
+   >> Request <<
+
+
+   MKCOL /home/cyrusdaboo/collection/child/ HTTP/1.1
+   Host: webdav.example.com
+   If: </home/cyrusdaboo/collection/>
+     (<http://example.com/ns/sync/12346>)
+
+
+
+
+
+
+
+
+   >> Response <<
+
+
+   HTTP/1.1 412 Precondition Failed
+
+6.  XML Element Definitions
+
+6.1.  DAV:sync-collection XML Element
+
+   Name:  sync-collection
+
+   Namespace:  DAV:
+
+   Purpose:  WebDAV report used to synchronize data between client and
+      server.
+
+   Description:  See Section 3.
+
+   <!ELEMENT sync-collection (sync-token, sync-level, limit?, prop)>
+
+   <!-- DAV:limit defined in RFC 5323, Section 5.17 -->
+   <!-- DAV:prop defined in RFC 4918, Section 14.18 -->
+
+6.2.  DAV:sync-token XML Element
+
+   Name:  sync-token
+
+   Namespace:  DAV:
+
+   Purpose:  The synchronization token provided by the server and
+      returned by the client.
+
+   Description:  See Section 3.
+
+   <!ELEMENT sync-token (#PCDATA)>
+
+   <!-- Text MUST be a URI -->
+
+6.3.  DAV:sync-level XML Element
+
+   Name:  sync-level
+
+   Namespace:  DAV:
+
+   Purpose:  Indicates the "scope" of the synchronization report
+      request.
+
+   Description:  See Section 3.3.
+
+
+
+
+
+   <!ELEMENT sync-level (#PCDATA)>
+
+   <!-- Text MUST be either "1" or "infinite" -->
+
+6.4.  DAV:multistatus XML Element
+
+   Name:  multistatus
+
+   Namespace:  DAV:
+
+   Purpose:  Extends the DAV:multistatus element to include
+      synchronization details.
+
+   Description:  See Section 3.
+
+   <!ELEMENT multistatus (response*, responsedescription?,
+                          sync-token?) >
+
+   <!-- DAV:multistatus originally defined in RFC 4918, Section 14.16
+        but overridden here to add the DAV:sync-token element -->
+   <!-- DAV:response defined in RFC 4918, Section 14.24 -->
+   <!-- DAV:responsedescription defined in RFC 4918, Section 14.25 -->
+
+7.  Security Considerations
+
+   This extension does not introduce any new security concerns beyond
+   those already described in HTTP and WebDAV.
+
+8.  Acknowledgments
+
+   The following individuals contributed their ideas and support for
+   writing this specification: Bernard Desruisseaux, Werner Donne, Mike
+   Douglass, Ciny Joy, Andrew McMillan, Julian Reschke, and Wilfredo
+   Sanchez.  We would like to thank the Calendaring and Scheduling
+   Consortium for facilitating interoperability testing for early
+   implementations of this specification.
+
+9.  References
+
+9.1.  Normative References
+
+   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
+              Requirement Levels", BCP 14, RFC 2119, March 1997.
+
+   [RFC2616]  Fielding, R., Gettys, J., Mogul, J., Frystyk, H.,
+              Masinter, L., Leach, P., and T. Berners-Lee, "Hypertext
+              Transfer Protocol -- HTTP/1.1", RFC 2616, June 1999.
+
+
+
+
+
+
+   [RFC3253]  Clemm, G., Amsden, J., Ellison, T., Kaler, C., and J.
+              Whitehead, "Versioning Extensions to WebDAV
+              (Web Distributed Authoring and Versioning)", RFC 3253,
+              March 2002.
+
+   [RFC3744]  Clemm, G., Reschke, J., Sedlar, E., and J. Whitehead, "Web
+              Distributed Authoring and Versioning (WebDAV)
+              Access Control Protocol", RFC 3744, May 2004.
+
+   [RFC4918]  Dusseault, L., "HTTP Extensions for Web Distributed
+              Authoring and Versioning (WebDAV)", RFC 4918, June 2007.
+
+   [RFC5323]  Reschke, J., Reddy, S., Davis, J., and A. Babich, "Web
+              Distributed Authoring and Versioning (WebDAV) SEARCH",
+              RFC 5323, November 2008.
+
+   [W3C.REC-xml-20081126]
+              Sperberg-McQueen, C., Yergeau, F., Paoli, J., Maler, E.,
+              and T. Bray, "Extensible Markup Language (XML) 1.0 (Fifth
+              Edition)", World Wide Web Consortium
+              Recommendation REC-xml-20081126, November 2008,
+              <http://www.w3.org/TR/2008/REC-xml-20081126>.
+
+9.2.  Informative References
+
+   [RFC4791]  Daboo, C., Desruisseaux, B., and L. Dusseault,
+              "Calendaring Extensions to WebDAV (CalDAV)", RFC 4791,
+              March 2007.
+
+   [RFC5842]  Clemm, G., Crawford, J., Reschke, J., and J. Whitehead,
+              "Binding Extensions to Web Distributed Authoring and
+              Versioning (WebDAV)", RFC 5842, April 2010.
+
+   [RFC6352]  Daboo, C., "CardDAV: vCard Extensions to Web Distributed
+              Authoring and Versioning (WebDAV)", RFC 6352, August 2011.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Appendix A.  Backwards-Compatible Handling of Depth
+
+   In prior draft versions of this specification, the Depth request
+   header was used instead of the DAV:sync-level element to indicate the
+   "scope" of the synchronization request.  Servers that wish to be
+   backwards compatible with clients conforming to the older
+   specification should do the following: if a DAV:sync-level element is
+   not present in the request body, use the Depth header value as the
+   equivalent value for the missing DAV:sync-level element.
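A server's fallback logic can be sketched in a few lines (function and mapping are illustrative assumptions, not specified text): prefer the DAV:sync-level element, and only when it is absent treat the Depth header as its equivalent, with Depth "infinity" corresponding to sync-level "infinite":

```python
def effective_sync_level(sync_level, depth):
    """Backwards-compatible scope selection (sketch): prefer the
    DAV:sync-level element; fall back to the Depth header when the
    element is missing from the request body."""
    if sync_level is not None:
        return sync_level
    return "infinite" if depth == "infinity" else depth
```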
+
+Appendix B.  Example of a Client Synchronization Approach
+
+   This appendix gives an example of how a client might accomplish
+   collection synchronization using the WebDAV sync report defined in
+   this specification.  Note that this is provided purely as an example,
+   and is not meant to be treated as a normative "algorithm" for client
+   synchronization.
+
+   This example assumes a WebDAV client interacting with a WebDAV server
+   supporting the sync report.  The client keeps a local cache of
+   resources in a targeted collection, "/collection/".  Local changes
+   are assumed to not occur.  The client is only tracking changes to the
+   immediate children of the collection resource.
+
+      ** Initial State **
+
+      The client starts out with an empty local cache.
+
+      The client starts out with no DAV:sync-token stored for
+      "/collection/".
+
+
+      ** Initial Synchronization **
+
+      The client issues a sync report request to the server with an
+      empty DAV:sync-token element, and DAV:sync-level set to "1".  The
+      request asks for the server to return the DAV:getetag WebDAV
+      property for each resource it reports.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+      The server returns a response containing the list of current
+      resources (with their associated DAV:getetag properties) as well
+      as a new DAV:sync-token value.
+
+      The client associates the new DAV:sync-token value with the
+      collection.
+
+      For each reported resource, the client creates a set of (resource
+      path, DAV:getetag) tuples.
+
+      For each tuple, the client issues an HTTP GET request to the
+      server to retrieve its content, and updates the (resource path,
+      DAV:getetag) entry in its local cache for that resource with the
+      ETag response header value returned in the GET request.
+
+
+      ** Routine Synchronization **
+
+      The client issues a sync report request to the server with the
+      DAV:sync-token set to the current cached value from the last sync,
+      and DAV:sync-level set to "1".  The request asks for the server to
+      return the DAV:getetag WebDAV property for each resource it
+      reports.
+
+      The server returns a response containing the list of changes as
+      well as a new DAV:sync-token value.
+
+      The client associates the new DAV:sync-token value with the
+      collection.
+
+        * Process Removed Resources *
+
+      For each resource reported with a 404 response status, the client
+      removes the corresponding resource from its local cache.
+
+        * Process Resources *
+
+      For each remaining reported resource, the client creates a new set
+      of (resource path, DAV:getetag) tuples.
+
+      The client then determines which resources are in the new set but
+      not in the current cache, and which resources are in the new set
+      and the current cache but have a different DAV:getetag value.  For
+      each of those, the client issues an HTTP GET request to the server
+      to retrieve the resource content, and updates the (resource path,
+      DAV:getetag) entry in its local cache for that resource with the
+      ETag response header value returned in the GET request.
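The routine-synchronization steps above can be sketched as a single cache-update pass. Everything here is illustrative: `responses` stands in for (path, status, etag) tuples a client might extract from the multistatus body, and `fetch` is a hypothetical callable that GETs a resource and returns the observed ETag response header:

```python
def apply_sync_response(cache, responses, fetch):
    """Update a local {path: etag} cache from sync-report results
    (non-normative sketch of the steps described above)."""
    for path, status, etag in responses:
        if status == 404:
            cache.pop(path, None)        # removed on the server
        elif path not in cache or cache[path] != etag:
            cache[path] = fetch(path)    # new or changed: re-download
    return cache
```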
+
+
+
+
+
+
+Authors' Addresses
+
+   Cyrus Daboo
+   Apple Inc.
+   1 Infinite Loop
+   Cupertino, CA  95014
+   USA
+
+   EMail: cyrus at daboo.name
+   URI:   http://www.apple.com/
+
+
+   Arnaud Quillaud
+   Oracle Corporation
+   180, Avenue de l'Europe
+   Saint Ismier cedex  38334
+   France
+
+   EMail: arnaud.quillaud at oracle.com
+   URI:   http://www.oracle.com/
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/sim
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/sim	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/sim	2012-05-02 18:54:12 UTC (rev 9221)
@@ -21,6 +21,6 @@
 
 wd="$(cd "$(dirname "$0")" && pwd -L)";
 
-export PYTHONPATH="$("${wd}/run" -p)";
+source "${wd}/support/shell.sh";
 
 exec "${wd}/contrib/performance/sim" "$@";

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/support/build.sh
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/support/build.sh	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/support/build.sh	2012-05-02 18:54:12 UTC (rev 9221)
@@ -246,7 +246,7 @@
             exit 1;
           fi;
 
-          if egrep '^${pkg_host} ' "${HOME}/.ssh/known_hosts" > /dev/null 2>&1; then
+          if egrep "^${pkg_host}" "${HOME}/.ssh/known_hosts" > /dev/null 2>&1; then
             echo "Copying cache file up to ${pkg_host}.";
             if ! scp "${tmp}" "${pkg_host}:/www/hosts/${pkg_host}${pkg_path}/${cache_basename}"; then
               echo "Failed to copy cache file up to ${pkg_host}.";
@@ -733,7 +733,7 @@
     "${pypi}/p/python-ldap/${ld}.tar.gz";
 
   # XXX actually PyCalendar should be imported in-place.
-  py_dependency -fe -i "src" -r 190 \
+  py_dependency -fe -i "src" -r 191 \
     "pycalendar" "pycalendar" "pycalendar" \
     "http://svn.mulberrymail.com/repos/PyCalendar/branches/server";
 

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/support/pydoctor
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/support/pydoctor	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/support/pydoctor	2012-05-02 18:54:12 UTC (rev 9221)
@@ -5,6 +5,6 @@
 
 wd="$(cd "$(dirname "$0")/.." && pwd)";
 
-export PYTHONPATH="$(${wd}/run -p)";
+source "${wd}/support/shell.sh";
 
 "${wd}/../pydoctor-0.3/bin/pydoctor" "$@";

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/support/shell.sh
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/support/shell.sh	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/support/shell.sh	2012-05-02 18:54:12 UTC (rev 9221)
@@ -25,7 +25,7 @@
     wd="$(pwd)";
 fi;
 
-. ${wd}/support/build.sh;
+source "${wd}/support/build.sh";
 do_setup=false;
 do_get=false;
 do_run=false;

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/test
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/test	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/test	2012-05-02 18:54:12 UTC (rev 9221)
@@ -83,11 +83,10 @@
 mkdir -p "${wd}/data";
 cd "${wd}" && "${python}" "${trial}" --temp-directory="${wd}/data/trial" --rterrors ${random} ${until_fail} ${no_colour} ${coverage} ${test_modules};
 
-tmp="$(mktemp "/tmp/calendarserver_test.XXXXX")";
-
 if ${flaky}; then
   echo "";
   echo "Running pyflakes...";
+  tmp="$(mktemp "/tmp/calendarserver_test_flakes.XXXXX")";
   cd "${wd}" && ./pyflakes ${test_modules} | tee "${tmp}" 2>&1;
   if [ -s "${tmp}" ]; then
     echo "**** Pyflakes says you have some code to clean up. ****";
@@ -95,3 +94,12 @@
   fi;
   rm -f "${tmp}";
 fi;
+
+tmp="$(mktemp "/tmp/calendarserver_test_empty.XXXXX")";
+find "${wd}" '!' '(' -type d '(' -path '*/.*' -o -name data ')' -prune ')' -type f -size 0 > "${tmp}";
+if [ -s "${tmp}" ]; then
+    echo "**** Empty files: ****";
+    cat "${tmp}";
+    exit 1;
+fi;
+rm -f "${tmp}";

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/testserver
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/testserver	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/testserver	2012-05-02 18:54:12 UTC (rev 9221)
@@ -61,7 +61,7 @@
 # Do The Right Thing
 ##
 
-. "${wd}/support/shell.sh"
+source "${wd}/support/shell.sh";
 
 if [ ! -e "${documentroot}/calendars/__uids__/user09" ]; then
   curl "http://localhost:8008/calendars/__uids__/user09/";

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/twext/web2/dav/resource.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/twext/web2/dav/resource.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/twext/web2/dav/resource.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -970,7 +970,7 @@
 
         if authHeader is not None:
             if authHeader[0] not in request.credentialFactories:
-                log.err(
+                log.debug(
                     "Client authentication scheme %s is not provided by server %s"
                     % (authHeader[0], request.credentialFactories.keys())
                 )

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/__init__.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/__init__.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/__init__.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -1,6 +1,6 @@
 # -*- test-case-name: twistedcaldav -*-
 ##
-# Copyright (c) 2005-2011 Apple Inc. All rights reserved.
+# Copyright (c) 2005-2012 Apple Inc. All rights reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -21,6 +21,10 @@
 See RFC 4791.
 """
 
+# Make sure we have twext's required Twisted patches loaded before we do
+# anything at all.
+__import__("twext")
+
 #
 # Load in suitable file extension/content-type map from OS X
 #

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/directory/ldapdirectory.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/directory/ldapdirectory.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/directory/ldapdirectory.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -105,7 +105,7 @@
             "authMethod": "LDAP",
             "rdnSchema": {
                 "base": "dc=example,dc=com",
-                "guidAttr": None,
+                "guidAttr": "entryUUID",
                 "users": {
                     "rdn": "ou=People",
                     "attr": "uid", # used only to synthesize email address

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/ical.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/ical.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/ical.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -56,6 +56,8 @@
 from pycalendar.timezone import PyCalendarTimezone
 from pycalendar.utcoffsetvalue import PyCalendarUTCOffsetValue
 
+import base64
+
 log = Logger()
 
 iCalendarProductID = "-//CALENDARSERVER.ORG//NONSGML Version 1//EN"
@@ -2171,6 +2173,9 @@
         except IndexError:
             return False
 
+        # Need to add property to indicate this was added by the server
+        valarm.addProperty(Property("X-APPLE-DEFAULT-ALARM", "TRUE"))
+
         # ACTION:NONE not added
         changed = False
         action = valarm.propertyValue("ACTION")
@@ -2400,6 +2405,28 @@
         
         self.replacePropertyInAllComponents(Property("DTSTAMP", PyCalendarDateTime.getNowUTC()))
             
+    def sequenceInSync(self, oldcalendar):
+        """
+        Make sure SEQUENCE does not decrease in any components.
+        """
+        
+        
+        def maxSequence(calendar):
+            seqs = calendar.getAllPropertiesInAnyComponent("SEQUENCE", depth=1)
+            return max(seqs, key=lambda x:x.value()).value() if seqs else 0
+
+        def minSequence(calendar):
+            seqs = calendar.getAllPropertiesInAnyComponent("SEQUENCE", depth=1)
+            return min(seqs, key=lambda x:x.value()).value() if seqs else 0
+
+        # Determine value to bump to from old calendar (if exists) or self
+        oldseq = maxSequence(oldcalendar)
+        currentseq = minSequence(self)
+            
+        # Sync all components
+        if oldseq and currentseq < oldseq:
+            self.replacePropertyInAllComponents(Property("SEQUENCE", oldseq))
+            
     def normalizeAll(self):
         
         # Normalize all properties
@@ -2577,7 +2604,7 @@
                     if config.Scheduling.Options.V1Compatibility:
                         if cuaddr.startswith("http") or cuaddr.startswith("/"):
                             prop.setParameter("CALENDARSERVER-OLD-CUA",
-                                prop.value())
+                                "base64-%s" % (base64.b64encode(prop.value())))
 
                     # Always re-write value to urn:uuid
                     prop.setValue("urn:uuid:%s" % (guid,))
@@ -2588,6 +2615,8 @@
                     # Restore old CUA
                     oldExternalCUA = prop.parameterValue("CALENDARSERVER-OLD-CUA")
                     if oldExternalCUA:
+                        if oldExternalCUA.startswith("base64-"):
+                            oldExternalCUA = base64.b64decode(oldExternalCUA[7:])
                         newaddr = oldExternalCUA
                         prop.removeParameter("CALENDARSERVER-OLD-CUA")
                     elif oldemail:
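The CALENDARSERVER-OLD-CUA change above wraps the original calendar user address in a `base64-` prefixed encoding on the way in and unwraps it on the way out, while still accepting legacy un-encoded values. A minimal sketch of that round trip (modern-Python syntax; the codebase itself is Python 2, and these helper names are invented for illustration):

```python
import base64

PREFIX = "base64-"

def encode_old_cua(cuaddr):
    # Store the original calendar user address in a parameter-safe
    # form; base64 avoids characters like ":" and "/" that are awkward
    # inside an iCalendar parameter value.
    return PREFIX + base64.b64encode(cuaddr.encode("utf-8")).decode("ascii")

def decode_old_cua(value):
    # Accept both the new base64-prefixed form and legacy plain
    # values, matching the startswith("base64-") check in the patch.
    if value.startswith(PREFIX):
        return base64.b64decode(value[len(PREFIX):]).decode("utf-8")
    return value
```

Keeping the decode path tolerant of un-prefixed values means data written before this revision still normalizes correctly.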

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/instance.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/instance.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/instance.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -304,7 +304,11 @@
         if rrules is not None and rulestart is not None:
             # Do recurrence set expansion
             expanded = []
-            limited = rrules.expand(rulestart, PyCalendarPeriod(start, limit), expanded)
+            # Begin expansion far in the past because there may be RDATEs earlier
+            # than the master DTSTART, and if we exclude those, the associated
+            # overridden instances will cause an InvalidOverriddenInstance.
+            limited = rrules.expand(rulestart,
+                PyCalendarPeriod(PyCalendarDateTime(1900,1,1), limit), expanded)
             for startDate in expanded:
                 startDate = normalizeForIndex(startDate)
                 endDate = startDate + duration

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/notify.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/notify.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/notify.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -1480,7 +1480,7 @@
         for key, settings in config.Notifications.Services.iteritems():
             if settings["Enabled"]:
                 notifier = namedClass(settings["Service"]).makeService(settings,
-                    store)
+                    store, config.ServerHostName)
                 notifier.setServiceParent(multiService)
                 notifiers.append(notifier)
 
@@ -1497,7 +1497,7 @@
 class SimpleLineNotifierService(service.Service):
 
     @classmethod
-    def makeService(cls, settings, store):
+    def makeService(cls, settings, store, serverHostName):
         return cls(settings)
 
     def __init__(self, settings):
@@ -1518,7 +1518,7 @@
 class XMPPNotifierService(service.Service):
 
     @classmethod
-    def makeService(cls, settings, store):
+    def makeService(cls, settings, store, serverHostName):
         return cls(settings)
 
     def __init__(self, settings):

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/resource.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/resource.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/resource.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -2347,6 +2347,7 @@
 
         elif qname == (customxml.calendarserver_namespace, "pushkey"):
             if (config.Notifications.Services.XMPPNotifier.Enabled or
+                config.Notifications.Services.AMPNotifier.Enabled or
                 config.Notifications.Services.ApplePushNotifier.Enabled):
                 nodeName = (yield self._newStoreHome.nodeName())
                 if nodeName:

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/scheduling/implicit.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/scheduling/implicit.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/scheduling/implicit.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -71,6 +71,14 @@
         existing_type = "schedule" if is_scheduling_object else "calendar"
         new_type = "schedule" if (yield self.checkImplicitState()) else "calendar"
 
+        # If the types do not currently match, re-check the stored one. We need this to work around the possibility
+        # that data exists using the older algorithm of determining a scheduling object resource, and that could be
+        # wrong.
+        if existing_type != new_type and resource and resource.exists():
+            resource.isScheduleObject = None
+            is_scheduling_object = (yield self.checkSchedulingObjectResource(resource))
+            existing_type = "schedule" if is_scheduling_object else "calendar"
+            
         if existing_type == "calendar":
             self.action = "create" if new_type == "schedule" else "none"
         else:
@@ -230,13 +238,9 @@
                 except ValueError:
                     # We have different ORGANIZERs in the same iCalendar object - this is an error
                     returnValue(False)
-                organizerPrincipal = resource.principalForCalendarUserAddress(organizer) if organizer else None
-                implicit = organizerPrincipal != None
-                log.debug("Implicit - checked scheduling object resource state for UID: '%s', result: %s" % (
-                    calendar.resourceUID(),
-                    implicit,
-                ))
-                returnValue(implicit)
+                    
+                # Any ORGANIZER => a scheduling object resource
+                returnValue(organizer is not None)
 
         returnValue(False)
         
@@ -283,7 +287,7 @@
         elif self.state == "attendee":
             yield self.doImplicitAttendee()
         elif self.state == "attendee-missing":
-            self.doImplicitMissingAttendee()
+            yield self.doImplicitMissingAttendee()
         else:
             returnValue(None)
 
@@ -528,6 +532,10 @@
             self.oldAttendeesByInstance = self.oldcalendar.getAttendeesByInstance(True, onlyScheduleAgentServer=True)
             self.coerceAttendeesPartstatOnModify()
             
+            # Don't allow any SEQUENCE to decrease
+            if self.oldcalendar:
+                self.calendar.sequenceInSync(self.oldcalendar)
+
             # Significant change
             no_change, self.changed_rids, self.needs_action_rids, reinvites, recurrence_reschedule = self.isOrganizerChangeInsignificant()
             if no_change:
@@ -1064,6 +1072,7 @@
                 log.debug("Implicit - attendee '%s' is updating UID without server scheduling: '%s'" % (self.attendee, self.uid))
                 # Nothing else to do
 
+    @inlineCallbacks
     def doImplicitMissingAttendee(self):
 
         if self.action == "remove":
@@ -1075,6 +1084,19 @@
             # with an schedule-status error and schedule-agent none
             log.debug("Missing attendee is allowed to update UID: '%s' with invalid organizer '%s'" % (self.uid, self.organizer))
             
+            # Make sure ORGANIZER is not changed if originally SCHEDULE-AGENT=SERVER
+            if self.resource.exists():
+                self.oldcalendar = (yield self.resource.iCalendarForUser(self.request))
+                oldOrganizer = self.oldcalendar.getOrganizer()
+                newOrganizer = self.calendar.getOrganizer()
+                if oldOrganizer != newOrganizer and self.oldcalendar.getOrganizerScheduleAgent():
+                    log.error("Cannot change ORGANIZER: UID:%s" % (self.uid,))
+                    raise HTTPError(ErrorResponse(
+                        responsecode.FORBIDDEN,
+                        (caldav_namespace, "valid-attendee-change"),
+                        "Cannot change organizer",
+                    ))
+
             # Check SCHEDULE-AGENT and coerce SERVER to NONE
             if self.calendar.getOrganizerScheduleAgent():
                 self.calendar.setParameterToValueForPropertyWithValue("SCHEDULE-AGENT", "NONE", "ORGANIZER", None)
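The `sequenceInSync` call wired in above enforces a simple invariant: no component's SEQUENCE may drop below the highest SEQUENCE seen in the stored data. Reduced to plain integers rather than iCalendar components (an illustrative reduction, not the actual `Component` method), the rule is:

```python
def synced_sequence(old_sequences, new_sequences):
    """Sketch of the SEQUENCE 'do not decrease' rule: if any component
    in the new data carries a SEQUENCE lower than the highest SEQUENCE
    in the old data, every component is bumped up to that old maximum,
    matching replacePropertyInAllComponents in the ical.py hunk."""
    oldseq = max(old_sequences) if old_sequences else 0
    currentseq = min(new_sequences) if new_sequences else 0
    if oldseq and currentseq < oldseq:
        return [oldseq] * len(new_sequences)
    return list(new_sequences)
```

Note the all-or-nothing behavior: when one component lags, every component (master and overrides alike) is raised to the old maximum, which is exactly what the "Recurrence sequence, sequence change down" case in test_icalendar.py exercises.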

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/scheduling/test/test_implicit.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/scheduling/test/test_implicit.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/scheduling/test/test_implicit.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -17,11 +17,14 @@
 from pycalendar.datetime import PyCalendarDateTime
 from pycalendar.timezone import PyCalendarTimezone
 from twext.web2 import responsecode
+from twext.web2.test.test_server import SimpleRequest
 from twisted.internet.defer import succeed, inlineCallbacks
 from twistedcaldav.ical import Component
 from twistedcaldav.scheduling.implicit import ImplicitScheduler
 from twistedcaldav.scheduling.scheduler import ScheduleResponseQueue
+from twistedcaldav.test.util import HomeTestCase
 import twistedcaldav.test.util
+from twext.web2.http import HTTPError
 
 class FakeScheduler(object):
     """
@@ -837,3 +840,179 @@
             self.assertEqual(count, result_count)
             self.assertEqual(len(recipients), result_count)
             self.assertEqual(set(recipients), set(result_set))
+
+class ImplicitRequests (HomeTestCase):
+    """
+    Test twistedcaldav.scheduyling.implicit with a Request object. 
+    """
+
+    @inlineCallbacks
+    def test_testImplicitSchedulingPUT_ScheduleState(self):
+        """
+        Test that checkImplicitState() always returns True for any organizer, valid or not.
+        """
+        
+        data = (
+            (
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+END:VEVENT
+END:VCALENDAR
+""",
+                False,
+            ),
+            (
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 01":mailto:wsanchez at example.com
+ATTENDEE:mailto:wsanchez at example.com
+ATTENDEE:mailto:user2 at example.com
+END:VEVENT
+END:VCALENDAR
+""",
+                True,
+            ),
+            (
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 01":mailto:bogus at bogus.com
+ATTENDEE:mailto:wsanchez at example.com
+ATTENDEE:mailto:bogus at bogus.com
+END:VEVENT
+END:VCALENDAR
+""",
+                True,
+            ),
+        )
+
+        for calendar, result in data:
+            calendar = Component.fromString(calendar)
+
+            request = SimpleRequest(self.site, "PUT", "/calendar/1.ics")
+            calresource = yield request.locateResource("/calendar/1.ics")
+            self.assertEqual(calresource.isScheduleObject, None)
+            
+            scheduler = ImplicitScheduler()
+            doAction, isScheduleObject = (yield scheduler.testImplicitSchedulingPUT(request, calresource, "/calendar/1.ics", calendar, False))
+            self.assertEqual(doAction, result)
+            self.assertEqual(isScheduleObject, result)
+            request._newStoreTransaction.abort()
+
+    @inlineCallbacks
+    def test_testImplicitSchedulingPUT_FixScheduleState(self):
+        """
+        Test that testImplicitSchedulingPUT will fix an old cached schedule object state by
+        re-evaluating the calendar data.
+        """
+        
+        request = SimpleRequest(self.site, "PUT", "/calendar/1.ics")
+        calresource = yield request.locateResource("/calendar/1.ics")
+        self.assertEqual(calresource.isScheduleObject, None)
+        calresource.isScheduleObject = False
+        
+        calendarOld = Component.fromString("""BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 02":mailto:user2 at example.com
+ATTENDEE:mailto:wsanchez at example.com
+ATTENDEE:mailto:user2 at example.com
+END:VEVENT
+END:VCALENDAR
+""")
+
+
+        calendarNew = Component.fromString("""BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 02":mailto:user2 at example.com
+ATTENDEE:mailto:wsanchez at example.com
+ATTENDEE:mailto:user2 at example.com
+END:VEVENT
+END:VCALENDAR
+""")
+
+        calresource.exists = lambda :True
+        calresource.iCalendarForUser = lambda request:succeed(calendarOld)
+        
+        scheduler = ImplicitScheduler()
+        try:
+            doAction, isScheduleObject = (yield scheduler.testImplicitSchedulingPUT(request, calresource, "/calendars/users/user01/calendar/1.ics", calendarNew, False))
+        except:
+            self.fail("Exception must not be raised")
+        self.assertTrue(doAction)
+        self.assertTrue(isScheduleObject)
+
+    @inlineCallbacks
+    def test_testImplicitSchedulingPUT_NoChangeScheduleState(self):
+        """
+        Test that testImplicitSchedulingPUT will prevent attendees from changing the
+        schedule object state.
+        """
+        
+        request = SimpleRequest(self.site, "PUT", "/calendar/1.ics")
+        calresource = yield request.locateResource("/calendar/1.ics")
+        self.assertEqual(calresource.isScheduleObject, None)
+        calresource.isScheduleObject = False
+        
+        calendarOld = Component.fromString("""BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+END:VEVENT
+END:VCALENDAR
+""")
+
+
+        calendarNew = Component.fromString("""BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 02":mailto:user2 at example.com
+ATTENDEE:mailto:wsanchez at example.com
+ATTENDEE:mailto:user2 at example.com
+END:VEVENT
+END:VCALENDAR
+""")
+
+        calresource.exists = lambda :True
+        calresource.iCalendarForUser = lambda request:succeed(calendarOld)
+        
+        scheduler = ImplicitScheduler()
+        try:
+            yield scheduler.testImplicitSchedulingPUT(request, calresource, "/calendars/users/user01/calendar/1.ics", calendarNew, False)
+        except HTTPError:
+            pass
+        except:
+            self.fail("HTTPError exception must be raised")
+        else:
+            self.fail("Exception must be raised")
+        request._newStoreTransaction.abort()

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/simpleresource.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/simpleresource.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/simpleresource.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -32,6 +32,7 @@
 from twisted.internet.defer import succeed
 
 from twistedcaldav.resource import CalDAVResource
+from twistedcaldav.config import config
 
 class SimpleResource (
     CalDAVResource,
@@ -94,4 +95,4 @@
         self._kwargs = kwargs
 
     def renderHTTP(self, request):
-        return http.RedirectResponse(request.unparseURL(**self._kwargs))
+        return http.RedirectResponse(request.unparseURL(host=config.ServerHostName, **self._kwargs))

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/stdconfig.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/stdconfig.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/stdconfig.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -84,7 +84,7 @@
         "authMethod": "LDAP",
         "rdnSchema": {
             "base": "dc=example,dc=com",
-            "guidAttr": None,
+            "guidAttr": "entryUUID",
             "users": {
                 "rdn": "ou=People",
                 "attr": "uid", # used only to synthesize email address
@@ -755,6 +755,13 @@
                     "Topic" : "",
                 },
             },
+            "AMPNotifier" : {
+                "Service" : "calendarserver.push.amppush.AMPPushNotifierService",
+                "Enabled" : True,
+                "Port" : 62311,
+                "EnableStaggering" : False,
+                "StaggerSeconds" : 3,
+            },
             "XMPPNotifier" : {
                 "Service" : "twistedcaldav.notify.XMPPNotifierService",
                 "Enabled" : False,

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/test/test_icalendar.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/test/test_icalendar.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/test/test_icalendar.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -433,6 +433,17 @@
 SEQUENCE:4
 RECURRENCE-ID;TZID=America/Los_Angeles:20111215T143000
 END:VEVENT
+BEGIN:VEVENT
+CREATED:20111206T203543Z
+UID:5F7FF5FB-2253-4895-8BF1-76E8ED868B4C
+DTEND;TZID=America/Los_Angeles:20001214T163000
+TRANSP:OPAQUE
+SUMMARY:bogus instance
+DTSTART;TZID=America/Los_Angeles:20001214T153000
+DTSTAMP:20111206T203606Z
+SEQUENCE:4
+RECURRENCE-ID;TZID=America/Los_Angeles:20001215T143000
+END:VEVENT
 END:VCALENDAR
 """
         # Ensure it starts off invalid
@@ -451,6 +462,8 @@
         # Now it should pass without fixing
         calendar.validCalendarData(doFix=False, validateRecurrences=True)
 
+        # Verify expansion works, even for an RDATE prior to master DTSTART:
+        calendar.expandTimeRanges(PyCalendarDateTime(2100, 1, 1))
 
         # Test EXDATEs *prior* to master (as the result of client splitting a
         # a recurring event and copying *all* EXDATEs to new event):
@@ -5604,6 +5617,263 @@
             self.assertEqual(len(dtstamps1 & dtstamps2), 0, "Failed comparison: %s\n%s" % (title, diff,))
 
 
+    def test_sequenceInSync(self):
+        """
+        Test Component.sequenceInSync to make sure it bumps SEQUENCE when needed.
+        """
+
+        data = (
+            (
+                "Simple no sequence, no sequence change",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+END:VEVENT
+END:VCALENDAR
+""",
+            ),
+            (
+                "Simple sequence, no sequence change",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+""",
+            ),
+            (
+                "Simple no sequence, sequence change up",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+""",
+            ),
+            (
+                "Simple sequence, sequence change down",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+SEQUENCE:2
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+SEQUENCE:1
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+SEQUENCE:2
+END:VEVENT
+END:VCALENDAR
+""",
+            ),
+            (
+                "Recurrence sequence, sequence change down",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+RRULE:FREQ=DAILY;COUNT=10
+SUMMARY:Test
+SEQUENCE:2
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:20080602T120000Z
+DTSTART:20080602T120000Z
+DTEND:20080602T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+SEQUENCE:2
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+RRULE:FREQ=DAILY;COUNT=10
+SUMMARY:Test
+SEQUENCE:1
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:20080602T120000Z
+DTSTART:20080602T120000Z
+DTEND:20080602T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+SEQUENCE:2
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+DTSTAMP:20080601T120000Z
+RRULE:FREQ=DAILY;COUNT=10
+SUMMARY:Test
+SEQUENCE:2
+END:VEVENT
+BEGIN:VEVENT
+UID:12345-67890
+RECURRENCE-ID:20080602T120000Z
+DTSTART:20080602T120000Z
+DTEND:20080602T130000Z
+DTSTAMP:20080601T120000Z
+SUMMARY:Test
+SEQUENCE:2
+END:VEVENT
+END:VCALENDAR
+""",
+            ),
+        )
+        
+        for title, old_txt, ical_txt, result_txt in data:
+            old = Component.fromString(old_txt)
+            ical = Component.fromString(ical_txt)
+            result = Component.fromString(result_txt)
+            ical.sequenceInSync(old)
+            
+            ical1 = str(ical).split("\n")
+            ical2 = str(result).split("\n")
+            
+            diff = "\n".join(unified_diff(ical1, ical2))
+            self.assertEqual("\n".join(ical1), "\n".join(ical2), "Failed comparison: %s\n%s" % (title, diff,))
+
+
     def test_hasInstancesAfter(self):
         data = (
             ("In the past (single)", False,
@@ -5973,7 +6243,7 @@
             ),
         )
         cutoff = PyCalendarDateTime(2011, 11, 30, 0, 0, 0)
-        for title, expected, body in data:
+        for _ignore_title, expected, body in data:
             ical = Component.fromString(body)
             self.assertEquals(expected, ical.hasInstancesAfter(cutoff))
 
@@ -5993,7 +6263,7 @@
 ATTENDEE:urn:uuid:foo
 ATTENDEE:urn:uuid:bar
 ATTENDEE:urn:uuid:baz
-ATTENDEE;CALENDARSERVER-OLD-CUA="http://example.com/principals/users/buz":urn:uuid:buz
+ATTENDEE;CALENDARSERVER-OLD-CUA="base64-aHR0cDovL2V4YW1wbGUuY29tL3ByaW5jaXBhbHMvdXNlcnMvYnV6":urn:uuid:buz
 DTSTAMP:20071114T000000Z
 END:VEVENT
 END:VCALENDAR
@@ -6075,17 +6345,17 @@
         self.patch(config.Scheduling.Options, "V1Compatibility", True)
         component.normalizeCalendarUserAddresses(lookupFunction, None, toUUID=True)
 
-        # /principal CUAs are not stored in CALENDARSERVER-OLD-CUA
+        # /principal CUAs are stored in CALENDARSERVER-OLD-CUA
         prop = component.getAttendeeProperty(("urn:uuid:foo",))
         self.assertEquals("urn:uuid:foo", prop.value())
         self.assertEquals(prop.parameterValue("CALENDARSERVER-OLD-CUA"),
-            "/principals/users/foo")
+            "base64-L3ByaW5jaXBhbHMvdXNlcnMvZm9v")
 
         # http CUAs are stored in CALENDARSERVER-OLD-CUA
         prop = component.getAttendeeProperty(("urn:uuid:buz",))
         self.assertEquals("urn:uuid:buz", prop.value())
         self.assertEquals(prop.parameterValue("CALENDARSERVER-OLD-CUA"),
-            "http://example.com/principals/users/buz")
+            "base64-aHR0cDovL2V4YW1wbGUuY29tL3ByaW5jaXBhbHMvdXNlcnMvYnV6")
 
 
     def test_serializationCaching(self):

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/test/test_sharing.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/test/test_sharing.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/twistedcaldav/test/test_sharing.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -76,7 +76,7 @@
         CalDAVResource.sendInvite = lambda self, record, request: succeed(True)
         CalDAVResource.removeInvite = lambda self, record, request: succeed(True)
 
-        CalDAVResource.principalForCalendarUserAddress = lambda self, cuaddr: SharingTests.FakePrincipal(cuaddr)
+        self.patch(CalDAVResource, "principalForCalendarUserAddress", lambda self, cuaddr: SharingTests.FakePrincipal(cuaddr))
 
 
     @inlineCallbacks

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/__init__.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/__init__.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/__init__.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -1,6 +1,6 @@
 # -*- test-case-name: txdav -*-
 ##
-# Copyright (c) 2010 Apple Inc. All rights reserved.
+# Copyright (c) 2010-2012 Apple Inc. All rights reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -18,3 +18,8 @@
 """
 WebDAV support for Twisted.
 """
+
+# Make sure we have twext's required Twisted patches loaded before we do
+# anything at all.
+__import__("twext")
+

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/base/datastore/file.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/base/datastore/file.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/base/datastore/file.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -187,11 +187,11 @@
                         self.log_error("Cannot undo DataStoreTransaction")
                 raise
 
-        for operation in self._postCommitOperations:
+        for (operation, ignored) in self._postCommitOperations:
             operation()
 
-    def postCommit(self, operation):
-        self._postCommitOperations.append(operation)
+    def postCommit(self, operation, immediately=False):
+        self._postCommitOperations.append((operation, immediately))
 
     def postAbort(self, operation):
         self._postAbortOperations.append(operation)

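The file-store change above turns each registered post-commit hook into an `(operation, immediately)` tuple. A self-contained sketch of that shape (class trimmed down for illustration; in the file store both kinds of hook run synchronously, so the flag only matters in the SQL store's Deferred-based commit path):

```python
class DataStoreTransaction(object):
    """
    Minimal sketch (assumed shape) of the postCommit change: hooks are stored
    as (operation, immediately) tuples so that callers can ask for a hook to
    finish before commit() returns.
    """
    def __init__(self):
        self._postCommitOperations = []

    def postCommit(self, operation, immediately=False):
        self._postCommitOperations.append((operation, immediately))

    def commit(self):
        # ... durable work would happen here ...
        for operation, _immediately in self._postCommitOperations:
            operation()

txn = DataStoreTransaction()
ran = []
txn.postCommit(lambda: ran.append("hook"), immediately=True)
txn.commit()
```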
Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/base/datastore/util.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/base/datastore/util.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/base/datastore/util.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -59,22 +59,34 @@
         self.cacheExpireSeconds = cacheExpireSeconds
 
     def set(self, key, value):
-        super(QueryCacher, self).set(key, value, expireTime=self.cacheExpireSeconds)
+        return super(QueryCacher, self).set(key, value, expireTime=self.cacheExpireSeconds)
 
+    def delete(self, key):
+        return super(QueryCacher, self).delete(key)
+
+
+    def setAfterCommit(self, transaction, key, value):
+        transaction.postCommit(lambda: self.set(key, value), immediately=True)
+
+    def invalidateAfterCommit(self, transaction, key):
+        # Invalidate now (so that operations within this transaction see it)
+        # and *also* post-commit (because there could be a scheduled setAfterCommit
+        # for this key)
+        transaction.postCommit(lambda: self.delete(key), immediately=True)
+        return self.delete(key)
+
+    # Home child objects by name
+
     def keyForObjectWithName(self, homeResourceID, name):
         return "objectWithName:%s:%s" % (homeResourceID, name)
 
-    def getObjectWithName(self, homeResourceID, name):
-        key = self.keyForObjectWithName(homeResourceID, name)
-        return self.get(key)
+    # Home metadata (Created/Modified)
 
-    def setObjectWithName(self, transaction, homeResourceID, name, value):
-        key = self.keyForObjectWithName(homeResourceID, name)
-        transaction.postCommit(lambda:self.set(key, value))
+    def keyForHomeMetaData(self, homeResourceID):
+        return "homeMetaData:%s" % (homeResourceID)
 
-    def invalidateObjectWithName(self, transaction, homeResourceID, name):
-        key = self.keyForObjectWithName(homeResourceID, name)
-        # Invalidate immediately and post-commit in case a calendar was created and deleted
-        # within the same transaction
-        self.delete(key)
-        transaction.postCommit(lambda:self.delete(key))
+    # HomeChild metadata (Created/Modified (and SUPPORTED_COMPONENTS))
+
+    def keyForHomeChildMetaData(self, resourceID):
+        return "homeChildMetaData:%s" % (resourceID)
+

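The rewritten `QueryCacher` couples cache writes and invalidations to transaction commit: a value is only published to the shared cache once the data is durable, while invalidations happen both immediately (so the current transaction sees the miss) and again after commit (in case a pending `setAfterCommit` would repopulate the key). A simplified sketch of that pattern, using an in-memory dict and a fake transaction in place of memcached and the real transaction object (both stand-ins are illustrative):

```python
class FakeTxn(object):
    """Stand-in transaction that just queues and later runs commit hooks."""
    def __init__(self):
        self._hooks = []

    def postCommit(self, operation, immediately=False):
        self._hooks.append(operation)

    def commit(self):
        for operation in self._hooks:
            operation()


class QueryCacherSketch(object):
    """Illustrative sketch, not the real memcached-backed class."""
    def __init__(self):
        self._cache = {}

    def set(self, key, value):
        self._cache[key] = value

    def delete(self, key):
        self._cache.pop(key, None)

    def setAfterCommit(self, transaction, key, value):
        # Only publish to the shared cache once the transaction commits.
        transaction.postCommit(lambda: self.set(key, value), immediately=True)

    def invalidateAfterCommit(self, transaction, key):
        # Delete now *and* after commit, since a setAfterCommit for this key
        # may still be queued on the same transaction.
        transaction.postCommit(lambda: self.delete(key), immediately=True)
        return self.delete(key)
```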
Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/sql.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/sql.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -29,6 +29,7 @@
 from twext.python.vcomponent import VComponent
 from txdav.xml.rfc2518 import ResourceType
 from twext.web2.http_headers import MimeType, generateContentType
+from twext.python.filepath import CachingFilePath
 
 from twisted.internet.defer import inlineCallbacks, returnValue
 from twisted.internet.error import ConnectionLost
@@ -82,6 +83,8 @@
 
 from zope.interface.declarations import implements
 
+import os
+import tempfile
 import uuid
 
 class CalendarHome(CommonHome):
@@ -423,13 +426,14 @@
         """
         return self._name == "inbox"
 
+
     @inlineCallbacks
     def setSupportedComponents(self, supported_components):
         """
         Update the database column with the supported components. Technically this should only happen once
         on collection creation, but for migration we may need to change after the fact - hence a separate api.
         """
-        
+
         cal = self._homeChildMetaDataSchema
         yield Update(
             {
@@ -439,6 +443,11 @@
         ).on(self._txn)
         self._supportedComponents = supported_components
 
+        queryCacher = self._txn.store().queryCacher
+        if queryCacher is not None:
+            cacheKey = queryCacher.keyForHomeChildMetaData(self._resourceID)
+            yield queryCacher.invalidateAfterCommit(self._txn, cacheKey)
+
     def getSupportedComponents(self):
         return self._supportedComponents
 
@@ -1038,7 +1047,7 @@
             yield self._recurrenceMaxByIDQuery.on(txn,
                                          resourceID=self._resourceID)
         )[0][0]
-        returnValue(parseSQLDateToPyCalendar(rMax))
+        returnValue(parseSQLDateToPyCalendar(rMax) if rMax is not None else None)
 
 
     @inlineCallbacks
@@ -1165,14 +1174,34 @@
 
 class AttachmentStorageTransport(StorageTransportBase):
 
+    _TEMPORARY_UPLOADS_DIRECTORY = "Temporary"
+
     def __init__(self, attachment, contentType, creating=False):
         super(AttachmentStorageTransport, self).__init__(
             attachment, contentType)
-        self._buf = ''
+
+        fileDescriptor, fileName = self._temporaryFile()
+        # Wrap the file descriptor in a file object we can write to
+        self._file = os.fdopen(fileDescriptor, "w")
+        self._path = CachingFilePath(fileName)
         self._hash = hashlib.md5()
         self._creating = creating
 
 
+    def _temporaryFile(self):
+        """
+        Returns a (file descriptor, absolute path) tuple for a temporary file within
+        the Attachments/Temporary directory (creating the Temporary subdirectory
+        if it doesn't exist).  It is the caller's responsibility to remove the
+        file.
+        """
+        attachmentRoot = self._txn._store.attachmentsPath
+        tempUploadsPath = attachmentRoot.child(self._TEMPORARY_UPLOADS_DIRECTORY)
+        if not tempUploadsPath.exists():
+            tempUploadsPath.createDirectory()
+        return tempfile.mkstemp(dir=tempUploadsPath.path)
+
+
     @property
     def _txn(self):
         return self._attachment._txn
@@ -1181,7 +1210,7 @@
     def write(self, data):
         if isinstance(data, buffer):
             data = str(data)
-        self._buf += data
+        self._file.write(data)
         self._hash.update(data)
 
 
@@ -1200,18 +1229,20 @@
                     self._attachment._ownerHomeID))
 
         oldSize = self._attachment.size()
-
+        newSize = self._file.tell()
+        self._file.close()
         allowed = home.quotaAllowedBytes()
         if allowed is not None and allowed < ((yield home.quotaUsedBytes())
-                                              + (len(self._buf) - oldSize)):
+                                              + (newSize - oldSize)):
+            self._path.remove()
             if self._creating:
                 yield self._attachment._internalRemove()
             raise QuotaExceeded()
 
-        self._attachment._path.setContent(self._buf)
+        self._path.moveTo(self._attachment._path)
         self._attachment._contentType = self._contentType
         self._attachment._md5 = self._hash.hexdigest()
-        self._attachment._size = len(self._buf)
+        self._attachment._size = newSize
         att = schema.ATTACHMENT
         self._attachment._created, self._attachment._modified = map(
             sqltime,

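The `AttachmentStorageTransport` change above stops accumulating the upload in memory (`self._buf`): bytes are streamed to a temporary file while the MD5 digest is updated incrementally, the size comes from `tell()`, and on success the temp file is moved into place. A stand-alone sketch of that technique (class and method names here are illustrative, not the server's API):

```python
import hashlib
import os
import shutil
import tempfile

class TempFileUpload(object):
    """
    Illustrative sketch: stream upload data to a temp file instead of a
    string buffer, hashing incrementally as bytes arrive.
    """
    def __init__(self, directory):
        # mkstemp leaves deletion to the caller, same as the real code.
        fd, self._tempPath = tempfile.mkstemp(dir=directory)
        self._file = os.fdopen(fd, "wb")
        self._hash = hashlib.md5()

    def write(self, data):
        self._file.write(data)
        self._hash.update(data)

    def finish(self, destination):
        size = self._file.tell()   # bytes written, used for quota accounting
        self._file.close()
        shutil.move(self._tempPath, destination)
        return size, self._hash.hexdigest()
```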
Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/test/common.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/test/common.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/test/common.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -43,6 +43,7 @@
 from txdav.common.icommondatastore import ObjectResourceNameAlreadyExistsError
 from txdav.common.inotifications import INotificationObject
 from txdav.common.datastore.test.util import CommonCommonTests
+from txdav.common.datastore.sql_tables import _BIND_MODE_WRITE, _BIND_MODE_READ
 
 from txdav.caldav.icalendarstore import (
     ICalendarObject, ICalendarHome,
@@ -324,22 +325,22 @@
 
 
     @inlineCallbacks
-    def homeUnderTest(self, txn=None):
+    def homeUnderTest(self, txn=None, name="home1"):
         """
         Get the calendar home detailed by C{requirements['home1']}.
         """
         if txn is None:
             txn = self.transactionUnderTest()
-        returnValue((yield txn.calendarHomeWithUID("home1")))
+        returnValue((yield txn.calendarHomeWithUID(name)))
 
 
     @inlineCallbacks
-    def calendarUnderTest(self, txn=None):
+    def calendarUnderTest(self, txn=None, name="calendar_1", home="home1"):
         """
         Get the calendar detailed by C{requirements['home1']['calendar_1']}.
         """
         returnValue((yield
-            (yield self.homeUnderTest(txn)).calendarWithName("calendar_1"))
+            (yield self.homeUnderTest(txn, home)).calendarWithName(name))
         )
 
 
@@ -984,6 +985,57 @@
 
 
     @inlineCallbacks
+    def test_shareWith(self):
+        """
+        L{ICalendar.shareWith} will share a calendar with a given home UID.
+        """
+        cal = yield self.calendarUnderTest()
+        OTHER_HOME_UID = "home_splits"
+        other = yield self.homeUnderTest(name=OTHER_HOME_UID)
+        newCalName = yield cal.shareWith(other, _BIND_MODE_WRITE)
+        yield self.commit()
+        normalCal = yield self.calendarUnderTest()
+        otherHome = yield self.homeUnderTest(name=OTHER_HOME_UID)
+        otherCal = yield otherHome.sharedChildWithName(newCalName)
+        self.assertNotIdentical(otherCal, None)
+        self.assertEqual(
+            (yield
+             (yield otherCal.calendarObjectWithName("1.ics")).component()),
+            (yield
+             (yield normalCal.calendarObjectWithName("1.ics")).component())
+        )
+        # Check legacy shares database too, since that's what the protocol layer
+        # is still using to list things.
+        self.assertEqual(
+            [(record.shareuid, record.localname) for record in
+             (yield otherHome.retrieveOldShares().allRecords())],
+            [(newCalName, newCalName)]
+        )
+
+
+    @inlineCallbacks
+    def test_shareAgainChangesMode(self):
+        """
+        If a calendar is already shared with a given calendar home,
+        L{ICalendar.shareWith} will change the sharing mode.
+        """
+        yield self.test_shareWith()
+        # yield self.commit() # txn is none? why?
+        OTHER_HOME_UID = "home_splits"
+        cal = yield self.calendarUnderTest()
+        other = yield self.homeUnderTest(name=OTHER_HOME_UID)
+        newName = yield cal.shareWith(other, _BIND_MODE_READ)
+        otherCal = yield other.sharedChildWithName(newName)
+        self.assertNotIdentical(otherCal, None)
+
+        # FIXME: permission information should be visible on the retrieved
+        # calendar object, we shouldn't need to go via the legacy API.
+        invites = yield cal.retrieveOldInvites().allRecords()
+        self.assertEqual(len(invites), 1)
+        self.assertEqual(invites[0].access, "read-only")
+
+
+    @inlineCallbacks
     def test_hasCalendarResourceUIDSomewhereElse(self):
         """
         L{ICalendarHome.hasCalendarResourceUIDSomewhereElse} will determine if

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/test/test_file.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/test/test_file.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/test/test_file.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -465,6 +465,14 @@
         return self.calendarStore
 
 
+    def test_shareWith(self):
+        """
+        Overridden to be skipped.
+        """
+
+    test_shareWith.skip = "Not implemented for file store yet."
+
+
     def test_init(self):
         """
         L{CalendarStore} has a C{_path} attribute which refers to its

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/test/test_sql.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/caldav/datastore/test/test_sql.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -1168,4 +1168,18 @@
             self.fail("Expected an exception")
         self.assertFalse(resource2._locked)
 
+
+    @inlineCallbacks
+    def test_recurrenceMax(self):
+        """
+        Test CalendarObjectResource.recurrenceMax to make sure it handles a None value.
+        """
+        
+        # Valid object
+        resource = yield self.calendarObjectUnderTest()
+        
+        # recurrenceMax should be None when there is no recurrence data
+        rMax = yield resource.recurrenceMax()
+        self.assertEqual(rMax, None)
+
         
\ No newline at end of file

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/common/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/common/datastore/sql.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/common/datastore/sql.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -25,6 +25,7 @@
     "CommonHome",
 ]
 
+from uuid import uuid4
 
 from zope.interface import implements, directlyProvides
 
@@ -82,6 +83,7 @@
 from twistedcaldav.customxml import NotificationType
 from twistedcaldav.dateops import datetimeMktime, parseSQLTimestamp,\
     pyCalendarTodatetime
+from txdav.xml.rfc2518 import DisplayName
 
 from cStringIO import StringIO
 from sqlparse import parse
@@ -167,22 +169,21 @@
 
     def eachCalendarHome(self):
         """
-        @see L{ICalendarStore.eachCalendarHome}
+        @see: L{ICalendarStore.eachCalendarHome}
         """
         return []
 
 
     def eachAddressbookHome(self):
         """
-        @see L{IAddressbookStore.eachAddressbookHome}
+        @see: L{IAddressbookStore.eachAddressbookHome}
         """
         return []
 
 
-
     def newTransaction(self, label="unlabeled"):
         """
-        @see L{IDataStore.newTransaction}
+        @see: L{IDataStore.newTransaction}
         """
         txn = CommonStoreTransaction(
             self,
@@ -513,11 +514,11 @@
         return self._apnSubscriptionsBySubscriberQuery.on(self, subscriberGUID=guid)
 
 
-    def postCommit(self, operation):
+    def postCommit(self, operation, immediately=False):
         """
         Run things after C{commit}.
         """
-        self._postCommitOperations.append(operation)
+        self._postCommitOperations.append((operation, immediately))
 
 
     def postAbort(self, operation):
@@ -551,16 +552,22 @@
         object to execute SQL on.
 
         @param thunk: a 1-argument callable which returns a Deferred when it is
-            done.  If this Deferred fails, 
+            done.  If this Deferred fails, the sub-transaction will be rolled
+            back.
+        @type thunk: L{callable}
 
         @param retries: the number of times to re-try C{thunk} before deciding
             that it's legitimately failed.
+        @type retries: L{int}
 
         @param failureOK: it is OK if this subtransaction fails so do not log.
+        @type failureOK: L{bool}
 
         @return: a L{Deferred} which fires or fails according to the logic in
             C{thunk}.  If it succeeds, it will return the value that C{thunk}
-            returned.
+            returned.  If C{thunk} fails or raises an exception more than
+            C{retries} times, then the L{Deferred} resulting from
+            C{subtransaction} will fail with L{AllRetriesFailed}.
         """
         # Right now this code is covered mostly by the automated property store
         # tests.  It should have more direct test coverage.
@@ -659,10 +666,14 @@
         """
         Commit the transaction and execute any post-commit hooks.
         """
+        @inlineCallbacks
         def postCommit(ignored):
-            for operation in self._postCommitOperations:
-                operation()
-            return ignored
+            for operation, immediately in self._postCommitOperations:
+                if immediately:
+                    yield operation()
+                else:
+                    operation()
+            returnValue(ignored)
 
         if self._stats:
             s = StringIO()
@@ -884,8 +895,24 @@
 
         if result:
             self._resourceID = result[0][0]
-            self._created, self._modified = (yield self._metaDataQuery.on(
-                self._txn, resourceID=self._resourceID))[0]
+
+            queryCacher = self._txn.store().queryCacher
+            if queryCacher:
+                # Get cached copy
+                cacheKey = queryCacher.keyForHomeMetaData(self._resourceID)
+                data = yield queryCacher.get(cacheKey)
+            else:
+                data = None
+            if data is None:
+                # Don't have a cached copy
+                data = (yield self._metaDataQuery.on(
+                    self._txn, resourceID=self._resourceID))[0]
+                if queryCacher:
+                    # Cache the data
+                    yield queryCacher.setAfterCommit(self._txn, cacheKey, data)
+
+            self._created, self._modified = data
+
             yield self._loadPropertyStore()
             returnValue(self)
         else:
@@ -1442,6 +1469,11 @@
             
         try:
             self._modified = (yield self._txn.subtransaction(_bumpModified, retries=0, failureOK=True))[0][0]
+            queryCacher = self._txn.store().queryCacher
+            if queryCacher is not None:
+                cacheKey = queryCacher.keyForHomeMetaData(self._resourceID)
+                yield queryCacher.invalidateAfterCommit(self._txn, cacheKey)
+
         except AllRetriesFailed:
             log.debug("CommonHome.bumpModified failed")
         
@@ -1918,6 +1950,84 @@
         return cls._allHomeChildrenQuery(False)
 
 
+    @classproperty
+    def _insertInviteQuery(cls): #@NoSelf
+        inv = schema.INVITE
+        return Insert(
+            {
+                inv.INVITE_UID: Parameter("uid"),
+                inv.NAME: Parameter("name"),
+                inv.HOME_RESOURCE_ID: Parameter("homeID"),
+                inv.RESOURCE_ID: Parameter("resourceID"),
+                inv.RECIPIENT_ADDRESS: Parameter("recipient")
+            }
+        )
+
+
+    @classproperty
+    def _updateBindQuery(cls): #@NoSelf
+        bind = cls._bindSchema
+        return Update({bind.BIND_MODE: Parameter("mode"),
+                       bind.BIND_STATUS: Parameter("status"),
+                       bind.MESSAGE: Parameter("message")},
+                      Where=
+                      (bind.RESOURCE_ID == Parameter("resourceID"))
+                      .And(bind.HOME_RESOURCE_ID == Parameter("homeID")),
+                      Return=bind.RESOURCE_NAME)
+
+
+    @inlineCallbacks
+    def shareWith(self, shareeHome, mode):
+        """
+        Share this (owned) L{CommonHomeChild} with another home.
+
+        @param shareeHome: The home of the sharee.
+        @type shareeHome: L{CommonHome}
+
+        @param mode: The sharing mode; L{_BIND_MODE_READ} or
+            L{_BIND_MODE_WRITE}.
+        @type mode: L{str}
+
+        @return: the name of the shared calendar in the new calendar home.
+        @rtype: L{str}
+        """
+        dn = PropertyName.fromElement(DisplayName)
+        dnprop = (self.properties().get(dn) or
+                  DisplayName.fromString(self.name()))
+        # FIXME: honor current home type
+        @inlineCallbacks
+        def doInsert(subt):
+            newName = str(uuid4())
+            yield self._bindInsertQuery.on(
+                subt, homeID=shareeHome._resourceID,
+                resourceID=self._resourceID, name=newName, mode=mode,
+                seenByOwner=True, seenBySharee=True,
+                bindStatus=_BIND_STATUS_ACCEPTED,
+            )
+            yield self._insertInviteQuery.on(
+                subt, uid=newName, name=str(dnprop),
+                homeID=shareeHome._resourceID, resourceID=self._resourceID,
+                recipient=shareeHome.uid()
+            )
+            returnValue(newName)
+        try:
+            sharedName = yield self._txn.subtransaction(doInsert)
+        except AllRetriesFailed:
+            # FIXME: catch more specific exception
+            sharedName = (yield self._updateBindQuery.on(
+                self._txn,
+                mode=mode, status=_BIND_STATUS_ACCEPTED, message=None,
+                resourceID=self._resourceID, homeID=shareeHome._resourceID
+            ))[0][0]
+            # Invite already exists; no need to update it, since the name will
+            # remain the same.
+
+        shareeProps = yield PropertyStore.load(shareeHome.uid(), self._txn,
+                                               self._resourceID)
+        shareeProps[dn] = dnprop
+        returnValue(sharedName)
+
+
     @classmethod
     @inlineCallbacks
     def loadAllObjects(cls, home, owned):
@@ -2033,9 +2143,12 @@
         # Only caching non-shared objects so that we don't need to invalidate
         # in sql_legacy
         if owned and queryCacher:
-            data = yield queryCacher.getObjectWithName(home._resourceID, name)
+            # Retrieve data from cache
+            cacheKey = queryCacher.keyForObjectWithName(home._resourceID, name)
+            data = yield queryCacher.get(cacheKey)
 
         if data is None:
+            # No cached copy
             if owned:
                 query = cls._resourceIDOwnedByHomeByName
             else:
@@ -2043,11 +2156,12 @@
             data = yield query.on(home._txn,
                                   objectName=name, homeID=home._resourceID)
             if owned and data and queryCacher:
-                queryCacher.setObjectWithName(home._txn, home._resourceID,
-                    name, data)
+                # Cache the result
+                queryCacher.setAfterCommit(home._txn, cacheKey, data)
 
         if not data:
             returnValue(None)
+
         resourceID = data[0][0]
         child = cls(home, name, resourceID, owned)
         yield child.initFromStore()
@@ -2110,18 +2224,21 @@
 
 
     @classproperty
-    def _initialOwnerBind(cls): #@NoSelf
+    def _bindInsertQuery(cls, **kw):
         """
-        DAL statement to create a bind entry for a particular home value.
+        DAL statement to create a bind entry that connects a collection to its
+        owner's home.
         """
         bind = cls._bindSchema
-        return Insert({bind.HOME_RESOURCE_ID: Parameter("homeID"),
-                       bind.RESOURCE_ID: Parameter("resourceID"),
-                       bind.RESOURCE_NAME: Parameter("name"),
-                       bind.BIND_MODE: _BIND_MODE_OWN,
-                       bind.SEEN_BY_OWNER: True,
-                       bind.SEEN_BY_SHAREE: True,
-                       bind.BIND_STATUS: _BIND_STATUS_ACCEPTED})
+        return Insert({
+            bind.HOME_RESOURCE_ID: Parameter("homeID"),
+            bind.RESOURCE_ID: Parameter("resourceID"),
+            bind.RESOURCE_NAME: Parameter("name"),
+            bind.BIND_MODE: Parameter("mode"),
+            bind.BIND_STATUS: Parameter("bindStatus"),
+            bind.SEEN_BY_OWNER: Parameter("seenByOwner"),
+            bind.SEEN_BY_SHAREE: Parameter("seenBySharee"),
+        })
 
 
     @classmethod
@@ -2144,8 +2261,11 @@
                                                   resourceID=resourceID))[0]
 
         # Bind table needs entry
-        yield cls._initialOwnerBind.on(home._txn, homeID=home._resourceID,
-                                       resourceID=resourceID, name=name)
+        yield cls._bindInsertQuery.on(
+            home._txn, homeID=home._resourceID, resourceID=resourceID,
+            name=name, mode=_BIND_MODE_OWN, seenByOwner=True,
+            seenBySharee=True, bindStatus=_BIND_STATUS_ACCEPTED
+        )
 
         # Initialize other state
         child = cls(home, name, resourceID, True)
@@ -2181,9 +2301,22 @@
         resource ID. We read in and cache all the extra metadata from the DB to
         avoid having to do DB queries for those individually later.
         """
-        dataRows = (
-            yield self._metadataByIDQuery.on(self._txn,
-                                          resourceID=self._resourceID))[0]
+        queryCacher = self._txn.store().queryCacher
+        if queryCacher:
+            # Retrieve from cache
+            cacheKey = queryCacher.keyForHomeChildMetaData(self._resourceID)
+            dataRows = yield queryCacher.get(cacheKey)
+        else:
+            dataRows = None
+        if dataRows is None:
+            # No cached copy
+            dataRows = (
+                yield self._metadataByIDQuery.on(self._txn,
+                    resourceID=self._resourceID))[0]
+            if queryCacher:
+                # Cache the results
+                yield queryCacher.setAfterCommit(self._txn, cacheKey, dataRows)
+
         for attr, value in zip(self.metadataAttributes(), dataRows):
             setattr(self, attr, value)
         yield self._loadPropertyStore()
@@ -2244,8 +2377,8 @@
 
         queryCacher = self._home._txn.store().queryCacher
         if queryCacher:
-            queryCacher.invalidateObjectWithName(self._home._txn,
-                self._home._resourceID, oldName)
+            cacheKey = queryCacher.keyForObjectWithName(self._home._resourceID, oldName)
+            yield queryCacher.invalidateAfterCommit(self._home._txn, cacheKey)
 
         yield self._renameQuery.on(self._txn, name=name,
                                    resourceID=self._resourceID,
@@ -2277,8 +2410,8 @@
 
         queryCacher = self._home._txn.store().queryCacher
         if queryCacher:
-            queryCacher.invalidateObjectWithName(self._home._txn,
-                self._home._resourceID, self._name)
+            cacheKey = queryCacher.keyForObjectWithName(self._home._resourceID, self._name)
+            yield queryCacher.invalidateAfterCommit(self._home._txn, cacheKey)
 
         yield self._deletedSyncToken()
         yield self._deleteQuery.on(self._txn, NoSuchHomeChildError,
@@ -2665,6 +2798,7 @@
                       Where=schema.RESOURCE_ID == Parameter("resourceID"),
                       Return=schema.MODIFIED)
 
+
     @inlineCallbacks
     def bumpModified(self):
         """
@@ -2683,6 +2817,11 @@
             
         try:
             self._modified = (yield self._txn.subtransaction(_bumpModified, retries=0, failureOK=True))[0][0]
+
+            queryCacher = self._txn.store().queryCacher
+            if queryCacher is not None:
+                cacheKey = queryCacher.keyForHomeChildMetaData(self._resourceID)
+                yield queryCacher.invalidateAfterCommit(self._txn, cacheKey)
         except AllRetriesFailed:
             log.debug("CommonHomeChild.bumpModified failed")
         

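The new `shareWith()` above tries to insert a fresh bind row inside a subtransaction and, if that fails because the share already exists, falls back to updating the mode on the existing row while keeping its name. A simplified sketch of the same insert-then-update fallback using sqlite3 (the real code catches `AllRetriesFailed` from a Deferred subtransaction rather than `IntegrityError`, and the schema here is reduced for illustration):

```python
import sqlite3
import uuid

# Minimal stand-in for the CALENDAR_BIND table: one row per (resource, home).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE bind (
    resource_id INTEGER, home_id INTEGER, name TEXT, mode TEXT,
    UNIQUE (resource_id, home_id))""")

def share_with(resource_id, home_id, mode):
    try:
        # Attempt to create a brand-new share with a generated name.
        name = str(uuid.uuid4())
        conn.execute("INSERT INTO bind VALUES (?, ?, ?, ?)",
                     (resource_id, home_id, name, mode))
        return name
    except sqlite3.IntegrityError:
        # Already shared with this home: change only the mode; the
        # existing name is preserved, as in the real implementation.
        conn.execute(
            "UPDATE bind SET mode = ? WHERE resource_id = ? AND home_id = ?",
            (mode, resource_id, home_id))
        return conn.execute(
            "SELECT name FROM bind WHERE resource_id = ? AND home_id = ?",
            (resource_id, home_id)).fetchone()[0]

first = share_with(1, 2, "write")
second = share_with(1, 2, "read")   # re-share: same name, new mode
```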
Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/common/datastore/test/util.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/common/datastore/test/util.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/common/datastore/test/util.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -98,6 +98,9 @@
                 "-c log_lock_waits=TRUE",
                 "-c log_statement=all",
                 "-c log_line_prefix='%p.%x '",
+                "-c fsync=FALSE",
+                "-c synchronous_commit=off",
+                "-c full_page_writes=FALSE",
             ],
             testMode=True
         )

Modified: CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/idav.py
===================================================================
--- CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/idav.py	2012-05-02 15:47:14 UTC (rev 9220)
+++ CalendarServer/branches/users/gaya/ldapdirectorybacker/txdav/idav.py	2012-05-02 18:54:12 UTC (rev 9221)
@@ -207,7 +207,7 @@
         """
 
 
-    def postCommit(operation):
+    def postCommit(operation, immediately=False):
         """
         Registers an operation to be executed after the transaction is
         committed.
@@ -216,6 +216,8 @@
         in the order which they were registered.
 
         @param operation: a callable.
+        @param immediately: a boolean; True means finish this operation *before* the
+            commit() call completes; defaults to False.
         """
 
 