[CalendarServer-changes] [8346] CalendarServer/branches/users/cdaboo/component-set-fixes

source_changes at macosforge.org
Mon Nov 28 13:07:53 PST 2011


Revision: 8346
          http://trac.macosforge.org/projects/calendarserver/changeset/8346
Author:   cdaboo at apple.com
Date:     2011-11-28 13:07:52 -0800 (Mon, 28 Nov 2011)
Log Message:
-----------
Merged from trunk.

Modified Paths:
--------------
    CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/push/applepush.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/push/test/test_applepush.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tap/caldav.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tap/test/longlines.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tap/test/test_caldav.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/anonymize.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/cmdline.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/shell.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/util.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/conf/caldavd-test.plist
    CalendarServer/branches/users/cdaboo/component-set-fixes/contrib/performance/loadtest/sim.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/support/build.sh
    CalendarServer/branches/users/cdaboo/component-set-fixes/support/submit
    CalendarServer/branches/users/cdaboo/component-set-fixes/support/version.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twext/web2/dav/method/prop_common.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twext/web2/test/test_server.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/directory/directory.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/directory/ldapdirectory.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/directory/test/test_ldapdirectory.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/ical.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/mail.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/memcachepool.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/memcacher.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/method/put_common.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/method/report_multiget_common.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/scheduling/implicit.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/scheduling/processing.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/scheduling/test/test_implicit.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/stdconfig.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/test/test_mail.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/upgrade.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/file.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/sql.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/test/common.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/test/test_util.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/util.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/icalendarstore.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/carddav/iaddressbookstore.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/file.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/sql.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/test/util.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/upgrade/migrate.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/icommondatastore.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/idav.py

Added Paths:
-----------
    CalendarServer/branches/users/cdaboo/component-set-fixes/contrib/performance/__init__.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/doc/Admin/MultiServerDeployment.txt

Removed Paths:
-------------
    CalendarServer/branches/users/cdaboo/component-set-fixes/contrib/performance/__init__.py

Property Changed:
----------------
    CalendarServer/branches/users/cdaboo/component-set-fixes/
    CalendarServer/branches/users/cdaboo/component-set-fixes/support/build.sh
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/index_file.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/test/test_index_file.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/carddav/datastore/index_file.py
    CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/carddav/datastore/test/test_index_file.py


Property changes on: CalendarServer/branches/users/cdaboo/component-set-fixes
___________________________________________________________________
Modified: svn:mergeinfo
   - /CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:8130-8260
   + /CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:8130-8344

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/push/applepush.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/push/applepush.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/push/applepush.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -26,7 +26,7 @@
 from twext.web2.server import parsePOSTData
 from twisted.application import service
 from twisted.internet import reactor, protocol
-from twisted.internet.defer import inlineCallbacks, returnValue
+from twisted.internet.defer import inlineCallbacks, returnValue, succeed
 from twisted.internet.protocol import ClientFactory, ReconnectingClientFactory
 from twistedcaldav.extensions import DAVResource, DAVResourceWithoutChildrenMixin
 from twistedcaldav.resource import ReadOnlyNoCopyResourceMixIn
@@ -374,7 +374,12 @@
 
     def dataReceived(self, data):
         self.log_debug("FeedbackProtocol dataReceived %d bytes" % (len(data),))
-        timestamp, tokenLength, binaryToken = struct.unpack("!IH32s", data)
+        try:
+            timestamp, tokenLength, binaryToken = struct.unpack("!IH32s", data)
+        except struct.error:
+            self.log_warn("FeedbackProtocol received malformed data: %s" %
+                (data.encode("hex"),))
+            return succeed(None)
         token = binaryToken.encode("hex").lower()
         return self.processFeedback(timestamp, token)
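
The hunk above wraps the fixed-format APNS feedback unpack in a try/except so a malformed payload is logged and skipped instead of raising. A minimal Python 3 restatement of that pattern (the committed code is Python 2 and uses `data.encode("hex")`; the `parse_feedback` name here is illustrative, not part of the CalendarServer API):

```python
import binascii
import struct

def parse_feedback(data):
    """Parse one APNS feedback tuple: a 4-byte timestamp, a 2-byte token
    length, and a 32-byte binary device token, all in network byte order.

    Returns (timestamp, hex_token) on success, or None when the payload
    does not match the expected 38-byte layout, mirroring the defensive
    unpack added in the diff above.
    """
    try:
        timestamp, token_length, binary_token = struct.unpack("!IH32s", data)
    except struct.error:
        # Malformed payload: skip it rather than let the error propagate.
        return None
    return timestamp, binascii.hexlify(binary_token).decode("ascii").lower()
```

Feeding it a short buffer such as `b"malformed"` (as the new unit test below does) yields None, while a properly packed 38-byte tuple round-trips cleanly.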
 

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/push/test/test_applepush.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/push/test/test_applepush.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/push/test/test_applepush.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -115,9 +115,12 @@
         yield txn.commit()
         self.assertEquals(len(subscriptions), 2)
 
+        # Simulate malformed feedback
+        connector = service.feedbacks["CalDAV"].testConnector
+        yield connector.receiveData("malformed")
+
         # Simulate feedback
         timestamp = 2000
-        connector = service.feedbacks["CalDAV"].testConnector
         binaryToken = token.decode("hex")
         feedbackData = struct.pack("!IH32s", timestamp, len(binaryToken),
             binaryToken)

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tap/caldav.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tap/caldav.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tap/caldav.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -1639,6 +1639,7 @@
     """
 
     MAX_LENGTH = 1024
+    CONTINUED_TEXT = " (truncated, continued)"
     tag = None
     exceeded = False            # Am I in the middle of parsing a long line?
     _buffer = ''
@@ -1679,10 +1680,29 @@
         A very long line is being received.  Log it immediately and forget
         about buffering it.
         """
-        for i in range(len(line)/self.MAX_LENGTH):
-            self.lineReceived(line[i*self.MAX_LENGTH:(i+1)*self.MAX_LENGTH]
-                              + " (truncated, continued)")
+        segments = self._breakLineIntoSegments(line)
+        for segment in segments:
+            self.lineReceived(segment)
+            
 
+    def _breakLineIntoSegments(self, line):
+        """
+        Break a line into segments no longer than self.MAX_LENGTH.  Each
+        segment (except for the final one) has self.CONTINUED_TEXT appended.
+        Returns the array of segments.
+        @param line: The line to break up
+        @type line: C{str}
+        @return: array of C{str}
+        """
+        length = len(line)
+        numSegments = length/self.MAX_LENGTH + (1 if length%self.MAX_LENGTH else 0)
+        segments = []
+        for i in range(numSegments):
+            msg = line[i*self.MAX_LENGTH:(i+1)*self.MAX_LENGTH]
+            if i < numSegments - 1: # not the last segment
+                msg += self.CONTINUED_TEXT
+            segments.append(msg)
+        return segments
 
 
 class DelayedStartupLoggingProtocol(ProcessProtocol):
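
The refactoring above extracts the chunking logic into `_breakLineIntoSegments` and fixes the segment count so a trailing partial chunk is emitted without the continuation marker. A standalone Python 3 sketch of the same logic (the original relies on Python 2 integer division; the function name and defaults here are illustrative):

```python
def break_line_into_segments(line, max_length=1024,
                             continued_text=" (truncated, continued)"):
    """Split a long log line into chunks of at most max_length characters.

    Every chunk except the last is suffixed with continued_text, matching
    the behavior of the _breakLineIntoSegments helper added above.
    """
    length = len(line)
    # Ceiling division: full chunks, plus one more for a partial tail.
    num_segments = length // max_length + (1 if length % max_length else 0)
    segments = []
    for i in range(num_segments):
        segment = line[i * max_length:(i + 1) * max_length]
        if i < num_segments - 1:  # every segment but the last is continued
            segment += continued_text
        segments.append(segment)
    return segments
```

With `max_length=10` this reproduces the expectations in the new `test_breakLineIntoSegments` cases: an empty line yields no segments, and `"abcdefghijk"` yields a marked 10-character chunk followed by a bare `"k"`.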

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tap/test/longlines.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tap/test/longlines.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tap/test/longlines.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -19,7 +19,7 @@
 length = int(sys.argv[1])
 
 data = (("x" * length) +
-        ("y" * length) + "\n" +
+        ("y" * (length + 1)) + "\n" +
         ("z" + "\n"))
 
 sys.stdout.write(data)

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tap/test/test_caldav.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tap/test/test_caldav.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tap/test/test_caldav.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -1031,11 +1031,13 @@
         def assertions(result):
             self.assertEquals(["[Dummy] x",
                                "[Dummy] y",
+                               "[Dummy] y", # final segment
                                "[Dummy] z"],
                               [''.join(evt['message'])[:len('[Dummy]') + 2]
                                for evt in logged])
             self.assertEquals([" (truncated, continued)",
                                " (truncated, continued)",
+                               "[Dummy] y",
                                "[Dummy] z"],
                               [''.join(evt['message'])[-len(" (truncated, continued)"):]
                                for evt in logged])
@@ -1043,6 +1045,36 @@
         return d
 
 
+    def test_breakLineIntoSegments(self):
+        """
+        Exercise the line-breaking logic with various key lengths
+        """
+        testLogger = DelayedStartupLineLogger()
+        testLogger.MAX_LENGTH = 10
+        for input, output in [
+            ("", []),
+            ("a", ["a"]),
+            ("abcde", ["abcde"]),
+            ("abcdefghij", ["abcdefghij"]),
+            ("abcdefghijk",
+                ["abcdefghij (truncated, continued)",
+                 "k"
+                ]
+            ),
+            ("abcdefghijklmnopqrst",
+                ["abcdefghij (truncated, continued)",
+                 "klmnopqrst"
+                ]
+            ),
+            ("abcdefghijklmnopqrstuv",
+                ["abcdefghij (truncated, continued)",
+                 "klmnopqrst (truncated, continued)",
+                 "uv"]
+            ),
+        ]:
+            self.assertEquals(output, testLogger._breakLineIntoSegments(input))
+
+
     def test_acceptDescriptorInheritance(self):
         """
         If a L{TwistdSlaveProcess} specifies some file descriptors to be

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/anonymize.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/anonymize.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/anonymize.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -224,6 +224,10 @@
 
                     data = anonymizeData(directoryMap, data)
 
+                    if data is None:
+                        # Ignore data we can't parse
+                        continue
+
                     destResource = os.path.join(destCal, resource)
                     with open(destResource, "w") as res:
                         res.write(data)
@@ -277,7 +281,11 @@
 
 
 def anonymizeData(directoryMap, data):
-    pyobj = PyCalendar.parseText(data)
+    try:
+        pyobj = PyCalendar.parseText(data)
+    except Exception, e:
+        print "Failed to parse (%s): %s" % (e, data)
+        return None
 
     # Delete property from the top level
     try:
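
The anonymize.py change above adopts a parse-or-skip pattern: `anonymizeData` returns None on a parse failure, and the caller `continue`s past the record. A generic sketch of that pattern (the `parse_or_none` helper is illustrative, not part of the tool; `int` stands in for `PyCalendar.parseText`):

```python
def parse_or_none(parse, text):
    """Run a parser and swallow failures, returning None so callers can
    skip records they cannot handle, as anonymizeData now does above.

    `parse` is any callable that raises on bad input.
    """
    try:
        return parse(text)
    except Exception as e:
        print("Failed to parse (%s): %s" % (e, text))
        return None

# Callers filter out the records that failed to parse:
records = ["1", "2", "x", "3"]
parsed = [n for n in (parse_or_none(int, r) for r in records) if n is not None]
```

This keeps one bad resource from aborting the whole anonymization run, at the cost of silently dropping unparseable data (which the tool flags on stdout).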

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/cmdline.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/cmdline.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/cmdline.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -74,13 +74,8 @@
         reactor.addSystemEventTrigger("during", "startup", service.startService)
         reactor.addSystemEventTrigger("before", "shutdown", service.stopService)
 
-    except ConfigurationError, e:
+    except (ConfigurationError, OSError), e:
         sys.stderr.write("Error: %s\n" % (e,))
         return
 
-    except OSError: # Permission
-        sys.stderr.write("Please run as root\n")
-        return
-
     reactor.run()
-
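
The cmdline.py change above collapses two except clauses into one by catching a tuple of exception types, trading the specific "Please run as root" message for a uniform error report. A small sketch of that idiom (shown with Python 3's `except ... as e` syntax rather than the committed Python 2 `except ..., e` form; `ConfigurationError` is stubbed here for illustration):

```python
class ConfigurationError(Exception):
    """Stand-in for twistedcaldav's ConfigurationError (illustrative)."""

def run_checked(fn):
    """Route both failure modes through one handler, as utilityMain's
    startup path now does."""
    try:
        return fn()
    except (ConfigurationError, OSError) as e:
        # One message format for either error class.
        return "Error: %s" % (e,)

def bad_config():
    raise ConfigurationError("invalid caldavd.plist")

def no_permission():
    raise OSError("permission denied")
```

Either failure produces the same `Error: ...` line on stderr in the real tool; a successful callable passes its result through untouched.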

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/shell.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/shell.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/shell.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -21,15 +21,19 @@
 
 import os
 import sys
-import traceback
+import tty
+import termios
 from shlex import shlex
 
-#from twisted.python import log
+from twisted.python import log
+from twisted.python.log import startLogging
 from twisted.python.text import wordWrap
 from twisted.python.usage import Options, UsageError
-from twisted.internet.defer import succeed, maybeDeferred
-from twisted.conch.stdio import runWithProtocol as shellWithProtocol
-from twisted.conch.recvline import HistoricRecvLine
+from twisted.internet.defer import succeed, Deferred
+from twisted.internet.defer import inlineCallbacks, returnValue
+from twisted.internet.stdio import StandardIO
+from twisted.conch.recvline import HistoricRecvLine as ReceiveLineProtocol
+from twisted.conch.insults.insults import ServerProtocol
 from twisted.application.service import Service
 
 from txdav.common.icommondatastore import NotFoundError
@@ -37,6 +41,7 @@
 from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
 
 from calendarserver.tools.cmdline import utilityMain
+from calendarserver.tools.util import getDirectory
 
 
 def usage(e=None):
@@ -75,29 +80,62 @@
 
 
 class ShellService(Service, object):
-    def __init__(self, store, options, reactor, config):
+    def __init__(self, store, directory, options, reactor, config):
         super(ShellService, self).__init__()
-        self.store   = store
-        self.options = options
-        self.reactor = reactor
-        self.config = config
+        self.store      = store
+        self.directory  = directory
+        self.options    = options
+        self.reactor    = reactor
+        self.config     = config
+        self.terminalFD = None
+        self.protocol   = None
 
     def startService(self):
         """
         Start the service.
         """
+        # For debugging
+        f = open("/tmp/shell.log", "w")
+        startLogging(f)
+
         super(ShellService, self).startService()
-        shellWithProtocol(lambda: ShellProtocol(self.store))
-        self.reactor.stop()
 
+        # Set up the terminal for interactive action
+        self.terminalFD = sys.__stdin__.fileno()
+        self._oldTerminalSettings = termios.tcgetattr(self.terminalFD)
+        tty.setraw(self.terminalFD)
+
+        self.protocol = ServerProtocol(lambda: ShellProtocol(self))
+        StandardIO(self.protocol)
+
     def stopService(self):
         """
         Stop the service.
         """
+        # Restore terminal settings
+        termios.tcsetattr(self.terminalFD, termios.TCSANOW, self._oldTerminalSettings)
+        os.write(self.terminalFD, "\r\x1bc\r")
 
 
-class ShellProtocol(HistoricRecvLine):
+class UsageError (Exception):
     """
+    Usage error.
+    """
+
+class UnknownArguments (UsageError):
+    """
+    Unknown arguments.
+    """
+    def __init__(self, arguments):
+        Exception.__init__(self, "Unknown arguments: %s" % (arguments,))
+        self.arguments = arguments
+
+
+EMULATE_EMACS = object()
+EMULATE_VI    = object()
+
+class ShellProtocol(ReceiveLineProtocol):
+    """
     Data store shell protocol.
     """
 
@@ -107,22 +145,30 @@
 
     ps = ("ds% ", "... ")
 
-    def __init__(self, store):
-        HistoricRecvLine.__init__(self)
-        self.wd = RootDirectory(store)
+    def __init__(self, service):
+        ReceiveLineProtocol.__init__(self)
+        self.service = service
+        self.wd = RootFolder(service)
+        self.inputLines = []
+        self.activeCommand = None
+        self.emulate = EMULATE_EMACS
 
     def connectionMade(self):
-        HistoricRecvLine.connectionMade(self)
+        ReceiveLineProtocol.connectionMade(self)
 
-        CTRL_C         = "\x03"
-        CTRL_D         = "\x04"
-        CTRL_L         = "\x0c"
-        CTRL_BACKSLASH = "\x1c"
+        self.keyHandlers['\x03'] = self.handle_INT   # Control-C
+        self.keyHandlers['\x04'] = self.handle_EOF   # Control-D
+        self.keyHandlers['\x1c'] = self.handle_QUIT  # Control-\
+        self.keyHandlers['\x0c'] = self.handle_FF    # Control-L
 
-        self.keyHandlers[CTRL_C        ] = self.handle_INT
-        self.keyHandlers[CTRL_D        ] = self.handle_EOF
-        self.keyHandlers[CTRL_L        ] = self.handle_FF
-        self.keyHandlers[CTRL_BACKSLASH] = self.handle_QUIT
+        if self.emulate == EMULATE_EMACS:
+            # EMACS key bindinds
+            self.keyHandlers['\x10'] = self.handle_UP     # Control-P
+            self.keyHandlers['\x0e'] = self.handle_DOWN   # Control-N
+            self.keyHandlers['\x02'] = self.handle_LEFT   # Control-B
+            self.keyHandlers['\x06'] = self.handle_RIGHT  # Control-F
+            self.keyHandlers['\x01'] = self.handle_HOME   # Control-A
+            self.keyHandlers['\x05'] = self.handle_END    # Control-E
 
     def handle_INT(self):
         """
@@ -132,22 +178,24 @@
         self.pn = 0
         self.lineBuffer = []
         self.lineBufferIndex = 0
-        self.interpreter.resetBuffer()
 
         self.terminal.nextLine()
         self.terminal.write("KeyboardInterrupt")
         self.terminal.nextLine()
-        self.terminal.write(self.ps[self.pn])
+        self.exit()
 
     def handle_EOF(self):
         if self.lineBuffer:
-            self.terminal.write("\a")
+            if self.emulate == EMULATE_EMACS:
+                self.handle_DELETE()
+            else:
+                self.terminal.write('\a')
         else:
             self.handle_QUIT()
 
     def handle_FF(self):
         """
-        Handle a "form feed" byte - generally used to request a screen
+        Handle a 'form feed' byte - generally used to request a screen
         refresh/redraw.
         """
         self.terminal.eraseDisplay()
@@ -155,246 +203,601 @@
         self.drawInputLine()
 
     def handle_QUIT(self):
+        self.exit()
+
+    def exit(self):
         self.terminal.loseConnection()
+        self.service.reactor.stop()
 
-    def prompt(self):
-        pass
-
     def lineReceived(self, line):
-        try:
-            lexer = shlex(line)
-            lexer.whitespace_split = True
+        if self.activeCommand is not None:
+            self.inputLines.append(line)
+            return
 
-            tokens = []
-            while True:
-                token = lexer.get_token()
-                if not token:
-                    break
-                tokens.append(token)
+        lexer = shlex(line)
+        lexer.whitespace_split = True
 
-            if tokens:
-                cmd = tokens.pop(0)
-                #print "Arguments: %r" % (tokens,)
+        tokens = []
+        while True:
+            token = lexer.get_token()
+            if not token:
+                break
+            tokens.append(token)
 
-                m = getattr(self, "cmd_%s" % (cmd,), None)
-                if m:
-                    def onError(f):
-                        print "Error: %s" % (f.getErrorMessage(),)
-                        print "-"*80
-                        f.printTraceback()
-                        print "-"*80
+        if tokens:
+            cmd = tokens.pop(0)
+            #print "Arguments: %r" % (tokens,)
 
-                    d = maybeDeferred(m, tokens)
-                    d.addCallback(lambda _: self.prompt)
-                    d.addErrback(onError)
-                    return d
+            m = getattr(self, "cmd_%s" % (cmd,), None)
+            if m:
+                def handleUsageError(f):
+                    f.trap(UsageError)
+                    self.terminal.write("%s\n" % (f.value,))
+
+                def handleException(f):
+                    self.terminal.write("Error: %s\n" % (f.value,))
+                    if not f.check(NotImplementedError, NotFoundError):
+                        log.msg("-"*80 + "\n")
+                        log.msg(f.getTraceback())
+                        log.msg("-"*80 + "\n")
+
+                def next(_):
+                    self.activeCommand = None
+                    self.drawInputLine()
+                    if self.inputLines:
+                        line = self.inputLines.pop(0)
+                        self.lineReceived(line)
+
+                d = self.activeCommand = Deferred()
+                d.addCallback(lambda _: m(tokens))
+                if True:
+                    d.callback(None)
                 else:
-                    print "Unknown command: %s" % (cmd,)
+                    # Add time to test callbacks
+                    self.service.reactor.callLater(4, d.callback, None)
+                d.addErrback(handleUsageError)
+                d.addErrback(handleException)
+                d.addCallback(next)
+            else:
+                self.terminal.write("Unknown command: %s\n" % (cmd,))
+                self.drawInputLine()
+        else:
+            self.drawInputLine()
 
-        except Exception, e:
-            print "Error: %s" % (e,)
-            print "-"*80
-            traceback.print_exc()
-            print "-"*80
+    def _getTarget(self, tokens):
+        if tokens:
+            return self.wd.locate(tokens.pop(0).split("/"))
+        else:
+            return succeed(self.wd)
 
-    def cmd_pwd(self, tokens):
+    @inlineCallbacks
+    def _getTargets(self, tokens):
+        if tokens:
+            result = []
+            for token in tokens:
+                result.append((yield self.wd.locate(token.split("/"))))
+            returnValue(result)
+        else:
+            returnValue((self.wd,))
+
+    def cmd_help(self, tokens):
         """
-        Print working directory.
+        Show help.
+
+        usage: help [command]
         """
         if tokens:
-            print "Unknown arguments: %s" % (tokens,)
-            return
-        print self.wd
+            command = tokens.pop(0)
+        else:
+            command = None
 
-    def cmd_cd(self, tokens):
+        if tokens:
+            raise UnknownArguments(tokens)
+
+        if command:
+            m = getattr(self, "cmd_%s" % (command,), None)
+            if m:
+                doc = m.__doc__.split("\n")
+
+                # Throw out the first and last lines if they're empty
+                if doc:
+                    if not doc[0].strip():
+                        doc.pop(0)
+                    if not doc[-1].strip():
+                        doc.pop()
+
+                if doc:
+                    # Get length of indentation
+                    i = len(doc[0]) - len(doc[0].lstrip())
+
+                    for line in doc:
+                        self.terminal.write(line[i:])
+                        self.terminal.nextLine()
+
+                else:
+                    self.terminal.write("(No documentation available for %s)\n" % (command,))
+            else:
+                raise NotFoundError("Unknown command: %s" % (command,))
+        else:
+            self.terminal.write("Available commands:\n")
+
+            result = []
+
+            for attr in dir(self):
+                if attr.startswith("cmd_"):
+                    m = getattr(self, attr)
+
+                    if hasattr(m, "hidden"):
+                        continue
+
+                    for line in m.__doc__.split("\n"):
+                        line = line.strip()
+                        if line:
+                            doc = line
+                            break
+                    else:
+                        doc = "(no info available)"
+
+                    result.append((attr[4:], doc))
+
+            for info in sorted(result):
+                self.terminal.write("  %s - %s\n" % (info))
+
+    def cmd_emulate(self, tokens):
         """
-        Change working directory.
+        Emulate editor behavior.
+        The only correct argument is: emacs
+        Other choices include: vi, none
+
+        usage: emulate editor
         """
+        if not tokens:
+            raise UsageError("Editor not specified.")
+
+        editor = tokens.pop(0).lower()
+
         if tokens:
-            dirname = tokens.pop(0)
+            raise UnknownArguments(tokens)
+
+        if editor == "emacs":
+            self.terminal.write("Emulating EMACS.")
+            self.emulate = EMULATE_EMACS
+        elif editor == "vi":
+            self.terminal.write("Seriously?!?!?")
+            self.emulate = EMULATE_VI
+        elif editor == "none":
+            self.terminal.write("Disabling emulation.")
+            self.emulate = None
         else:
-            return
+            raise UsageError("Unknown editor: %s" % (editor,))
+        self.terminal.nextLine()
 
+    def cmd_pwd(self, tokens):
+        """
+        Print working folder.
+
+        usage: pwd
+        """
         if tokens:
-            print "Unknown arguments: %s" % (tokens,)
+            raise UnknownArguments(tokens)
+
+        self.terminal.write("%s\n" % (self.wd,))
+
+    @inlineCallbacks
+    def cmd_cd(self, tokens):
+        """
+        Change working folder.
+
+        usage: cd [folder]
+        """
+        if not tokens:
             return
 
-        path = dirname.split("/")
+        dirname = tokens.pop(0)
 
-        def notFound(f):
-            f.trap(NotFoundError)
-            print "No such directory: %s" % (dirname,)
+        if tokens:
+            raise UnknownArguments(tokens)
 
-        def setWD(wd):
-            self.wd = wd
+        wd = (yield self.wd.locate(dirname.split("/")))
 
-        d = self.wd.locate(path)
-        d.addCallback(setWD)
-        d.addErrback(notFound)
-        return d
+        if not isinstance(wd, Folder):
+            raise NotFoundError("Not a folder: %s" % (wd,))
 
+        log.msg("wd -> %s" % (wd,))
+        self.wd = wd
+
+    @inlineCallbacks
     def cmd_ls(self, tokens):
         """
-        List working directory.
+        List folder contents.
+
+        usage: ls [folder]
         """
+        target = (yield self._getTarget(tokens))
+
         if tokens:
-            print "Unknown arguments: %s" % (tokens,)
-            return
+            raise UnknownArguments(tokens)
 
-        for name in self.wd.list():
-            print name
+        listing = (yield target.list())
 
+        #
+        # FIXME: this can be ugly if, for example, there are
+        # zillions of calendar homes or events to output. Paging
+        # would be good.
+        #
+        for name in listing:
+            self.terminal.write("%s\n" % (name,))
+
+    @inlineCallbacks
     def cmd_info(self, tokens):
         """
-        Print information about working directory.
+        Print information about a folder.
+
+        usage: info [folder]
         """
-        d = self.wd.describe()
-        d.addCallback(lambda x: sys.stdout.write(x))
-        return d
+        target = (yield self._getTarget(tokens))
 
+        if tokens:
+            raise UnknownArguments(tokens)
+
+        description = (yield target.describe())
+        self.terminal.write(description)
+        self.terminal.nextLine()
+
+    @inlineCallbacks
+    def cmd_cat(self, tokens):
+        """
+        Show contents of target.
+
+        usage: cat target [target ...]
+        """
+        for target in (yield self._getTargets(tokens)):
+            if hasattr(target, "text"):
+                text = (yield target.text())
+                self.terminal.write(text)
+
     def cmd_exit(self, tokens):
         """
         Exit the shell.
+
+        usage: exit
         """
-        self.terminal.loseConnection()
-        # FIXME: This is insufficient.
+        self.exit()
 
     def cmd_python(self, tokens):
         """
         Switch to a python prompt.
+
+        usage: python
         """
         # Crazy idea #19568: switch to an interactive python prompt
         # with self exposed in globals.
         raise NotImplementedError()
 
+    cmd_python.hidden = "Not implemented"
 
-class Directory(object):
+
+class File(object):
     """
-    Location in virtual data hierarchy.
+    Object in virtual data hierarchy.
     """
-    def __init__(self, store, path):
+    def __init__(self, service, path):
         assert type(path) is tuple
 
-        self.store = store
-        self.path = path
+        self.service = service
+        self.path    = path
 
     def __str__(self):
         return "/" + "/".join(self.path)
 
     def describe(self):
-        return succeed(str(self))
+        return succeed("%s (%s)" % (self, self.__class__.__name__))
 
+    def list(self):
+        return succeed(("%s" % (self,),))
+
+
+class Folder(File):
+    """
+    Location in virtual data hierarchy.
+    """
+    def __init__(self, service, path):
+        File.__init__(self, service, path)
+
+        self._children = {}
+        self._childClasses = {}
+
+    @inlineCallbacks
     def locate(self, path):
         if not path:
-            return succeed(RootDirectory(self.store))
+            returnValue(RootFolder(self.service))
 
         name = path[0]
-        if not name:
-            return succeed(self.locate(path[1:]))
+        if name:
+            target = (yield self.child(name))
+            if len(path) > 1:
+                target = (yield target.locate(path[1:]))
+        else:
+            target = (yield RootFolder(self.service).locate(path[1:]))
 
-        path = list(path)
+        returnValue(target)
 
-        if name.startswith("/"):
-            path[0] = path[0][1:]
-            subdir = succeed(RootDirectory(self.store))
-        else:
-            path.pop(0)
-            subdir = self.subdir(name)
+    @inlineCallbacks
+    def child(self, name):
+        # FIXME: Move this logic to locate()
+        #if not name:
+        #    return succeed(self)
+        #if name == ".":
+        #    return succeed(self)
+        #if name == "..":
+        #    path = self.path[:-1]
+        #    if not path:
+        #        path = "/"
+        #    return RootFolder(self.service).locate(path)
 
-        if path:
-            return subdir.addCallback(lambda path: self.locate(path))
-        else:
-            return subdir
+        if name in self._children:
+            returnValue(self._children[name])
 
-    def subdir(self, name):
-        if not name:
-            return succeed(self)
-        if name == ".":
-            return succeed(self)
-        if name == "..":
-            return RootDirectory(self.store).locate(self.path[:-1])
+        if name in self._childClasses:
+            child = (yield self._childClasses[name](self.service, self.path + (name,)))
+            self._children[name] = child
+            returnValue(child)
 
-        raise NotFoundError("Directory %r has no subdirectory %r" % (str(self), name))
+        raise NotFoundError("Folder %r has no child %r" % (str(self), name))
 
     def list(self):
-        return ()
+        result = set()
+        result.update(self._children)
+        result.update(self._childClasses)
+        return succeed(result)
 
 
-class RootDirectory(Directory):
+class RootFolder(Folder):
     """
     Root of virtual data hierarchy.
     """
-    def __init__(self, store):
-        Directory.__init__(self, store, ())
+    def __init__(self, service):
+        Folder.__init__(self, service, ())
 
-        self._children = {}
+        self._childClasses["uids"] = UIDFolder
 
-        self._childClasses = {
-            "uids": UIDDirectory,
-        }
 
-    def subdir(self, name):
-        if name in self._children:
-            return succeed(self._children[name])
+class UIDFolder(Folder):
+    """
+    Folder containing all principals by UID.
+    """
+    def child(self, name):
+        return PrincipalHomeFolder(self.service, self.path + (name,), name)
 
-        if name in self._childClasses:
-            self._children[name] = self._childClasses[name](self.store, self.path + (name,))
-            return succeed(self._children[name])
+    @inlineCallbacks
+    def list(self):
+        result = []
 
-        return Directory.subdir(self, name)
+        # FIXME: This should be the merged total of calendar homes and address book homes.
+        # FIXME: Merge in directory UIDs also?
+        # FIXME: Add directory info (eg. name) to listing
 
-    def list(self):
-        return ("%s/" % (n,) for n in self._childClasses)
+        for txn, home in (yield self.service.store.eachCalendarHome()):
+            result.append("%s/" % (home.uid(),))
 
+        returnValue(result)
 
-class UIDDirectory(Directory):
+
+class PrincipalHomeFolder(Folder):
     """
-    Directory containing all principals by UID.
+    Folder containing everything related to a given principal.
     """
-    def subdir(self, name):
-        txn = self.store.newTransaction()
+    def __init__(self, service, path, uid):
+        Folder.__init__(self, service, path)
 
-        def gotHome(home):
+        self.uid = uid
+
+    @inlineCallbacks
+    def _initChildren(self):
+        if not hasattr(self, "_didInitChildren"):
+            txn  = self.service.store.newTransaction()
+
+            home = (yield txn.calendarHomeWithUID(self.uid))
             if home:
-                return HomeDirectory(self.store, self.path + (name,), home)
+                self._children["calendars"] = CalendarHomeFolder(
+                    self.service,
+                    self.path + ("calendars",),
+                    home,
+                )
 
-            return Directory.subdir(self, name)
+            home = (yield txn.addressbookHomeWithUID(self.uid))
+            if home:
+                self._children["addressbooks"] = AddressBookHomeFolder(
+                    self.service,
+                    self.path + ("addressbooks",),
+                    home,
+                )
 
-        d = txn.calendarHomeWithUID(name)
-        d.addCallback(gotHome)
-        return d
+        self._didInitChildren = True
 
+    def _needsChildren(m):
+        def decorate(self, *args, **kwargs):
+            d = self._initChildren()
+            d.addCallback(lambda _: m(self, *args, **kwargs))
+            return d
+        return decorate
+
+    @_needsChildren
+    def child(self, name):
+        return Folder.child(self, name)
+
+    @_needsChildren
     def list(self):
-        for (txn, home) in self.store.eachCalendarHome():
-            yield home.uid()
+        return Folder.list(self)
 
 
-class HomeDirectory(Directory):
+class CalendarHomeFolder(Folder):
     """
-    Home directory.
+    Calendar home folder.
     """
-    def __init__(self, store, path, home):
-        Directory.__init__(self, store, path)
+    def __init__(self, service, path, home):
+        Folder.__init__(self, service, path)
 
         self.home = home
 
+    @inlineCallbacks
+    def child(self, name):
+        calendar = (yield self.home.calendarWithName(name))
+        if calendar:
+            returnValue(CalendarFolder(self.service, self.path + (name,), calendar))
+        else:
+            raise NotFoundError("Calendar home %r has no calendar %r" % (self, name))
+
+    @inlineCallbacks
+    def list(self):
+        calendars = (yield self.home.calendars())
+        returnValue(("%s/" % (c.name(),) for c in calendars))
+
+    @inlineCallbacks
     def describe(self):
-        return succeed(
-            """Calendar home for UID: %(uid)s\n"""
-            """Quota: %(quotaUsed)s of %(quotaMax)s (%(quotaPercent).2s%%)\n"""
-            % {
-                "uid"          : self.home.uid(),
-                "quotaUsed"    : self.home.quotaUsed(),
-                "quotaMax"     : self.home.quotaAllowedBytes(),
-                "quotaPercent" : self.home.quotaUsed() / self.home.quotaAllowedBytes(),
-            }
-        )
+        # created() -> int
+        # modified() -> int
+        # properties -> IPropertyStore
 
+        uid          = (yield self.home.uid())
+        created      = (yield self.home.created())
+        modified     = (yield self.home.modified())
+        quotaUsed    = (yield self.home.quotaUsedBytes())
+        quotaAllowed = (yield self.home.quotaAllowedBytes())
+        properties   = (yield self.home.properties())
 
-def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
+        result = []
+        result.append("Calendar home for UID: %s" % (uid,))
+        if created is not None:
+            # FIXME: convert to string
+            result.append("Created: %s" % (created,))
+        if modified is not None:
+            # FIXME: convert to string
+            result.append("Last modified: %s" % (modified,))
+        if quotaUsed is not None:
+            result.append("Quota: %s of %s (%.2s%%)"
+                          % (quotaUsed, quotaAllowed, quotaUsed / quotaAllowed))
+
+        if properties:
+            for name in sorted(properties):
+                result.append("%s: %s" % (name, properties[name]))
+
+        returnValue("\n".join(result))
+
+
+class CalendarFolder(Folder):
     """
-    Do the export.
+    Calendar.
     """
+    def __init__(self, service, path, calendar):
+        Folder.__init__(self, service, path)
+
+        self.calendar = calendar
+
+    @inlineCallbacks
+    def _childWithObject(self, object):
+        name = (yield object.uid())
+        returnValue(CalendarObject(self.service, self.path + (name,), object))
+
+    @inlineCallbacks
+    def child(self, name):
+        object = (yield self.calendar.calendarObjectWithUID(name))
+
+        if not object:
+            raise NotFoundError("Calendar %r has no object %r" % (str(self), name))
+
+        child = (yield self._childWithObject(object))
+        returnValue(child)
+
+    @inlineCallbacks
+    def list(self):
+        result = []
+
+        for object in (yield self.calendar.calendarObjects()):
+            object = (yield self._childWithObject(object))
+            items = (yield object.list())
+            assert len(items) == 1
+            result.append(items[0])
+
+        returnValue(result)
+
+
+class CalendarObject(File):
+    """
+    Calendar object.
+    """
+    def __init__(self, service, path, calendarObject):
+        File.__init__(self, service, path)
+
+        self.object = calendarObject
+
+    @inlineCallbacks
+    def list(self):
+        component = (yield self.object.component())
+        mainComponent = component.mainComponent()
+        componentType = mainComponent.name()
+        uid = mainComponent.propertyValue("UID")
+        summary = mainComponent.propertyValue("SUMMARY")
+
+        assert uid == self.object.uid()
+        assert componentType == (yield self.object.componentType())
+
+        returnValue(("%s %s: %s" % (uid, componentType, summary),))
+
+    @inlineCallbacks
+    def text(self):
+        log.msg("text(%r)" % (self,))
+        component = (yield self.object.component())
+        returnValue(str(component))
+
+    @inlineCallbacks
+    def describe(self):
+        component = (yield self.object.component())
+        mainComponent = component.mainComponent()
+        componentType = mainComponent.name()
+
+        uid = mainComponent.propertyValue("UID")
+        summary = mainComponent.propertyValue("SUMMARY")
+
+        assert uid == self.object.uid()
+        assert componentType == (yield self.object.componentType())
+
+        result = []
+
+        result.append("Calendar object (%s) for UID: %s" % (componentType, uid))
+        result.append("Summary: %s" % (summary,))
+
+        #
+        # Organizer
+        #
+        organizer = mainComponent.getProperty("ORGANIZER")
+
+        if organizer:
+            # Only read parameters once we know the property exists
+            organizerName  = organizer.parameterValue("CN")
+            organizerEmail = organizer.parameterValue("EMAIL")
+            name  = " (%s)" % (organizerName,)  if organizerName  else ""
+            email = " <%s>" % (organizerEmail,) if organizerEmail else ""
+            result.append("Organized by: %s%s%s" % (organizer.value(), name, email))
+
+        #
+        # Attachments
+        #
+#        attachments = (yield self.object.attachments())
+#        log.msg("%r" % (attachments,))
+#        for attachment in attachments:
+#            log.msg("%r" % (attachment,))
+#            # FIXME: Not getting any results here
+
+        returnValue("\n".join(result))
+
+class AddressBookHomeFolder(Folder):
+    """
+    Address book home folder.
+    """
+    # FIXME
+
+
+def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
     if reactor is None:
         from twisted.internet import reactor
 
@@ -406,7 +809,8 @@
 
     def makeService(store):
         from twistedcaldav.config import config
-        return ShellService(store, options, reactor, config)
+        directory = getDirectory()
+        return ShellService(store, directory, options, reactor, config)
 
     print "Initializing shell..."
 

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/util.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/calendarserver/tools/util.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -155,6 +155,7 @@
         directories.append(resourceDirectory)
 
     aggregate = MyDirectoryService(directories, None)
+    aggregate.augmentService = augmentService
 
     #
     # Wire up the resource hierarchy

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/conf/caldavd-test.plist
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/conf/caldavd-test.plist	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/conf/caldavd-test.plist	2011-11-28 21:07:52 UTC (rev 8346)
@@ -811,7 +811,7 @@
         <false/>
         <key>AllowResourceAsOrganizer</key>
         <false/>
-        <key>AttendeeRefreshInterval</key>
+        <key>AttendeeRefreshBatch</key>
         <integer>0</integer>
       </dict>
 

Deleted: CalendarServer/branches/users/cdaboo/component-set-fixes/contrib/performance/__init__.py
===================================================================
Copied: CalendarServer/branches/users/cdaboo/component-set-fixes/contrib/performance/__init__.py (from rev 8344, CalendarServer/trunk/contrib/performance/__init__.py)
===================================================================
Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/contrib/performance/loadtest/sim.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/contrib/performance/loadtest/sim.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/contrib/performance/loadtest/sim.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -364,10 +364,6 @@
         self.reactor.run()
 
 
-main = LoadSimulator.main
-
-
-
 def attachService(reactor, service):
     """
     Attach a given L{IService} provider to the given L{IReactorCore}; cause it
@@ -523,6 +519,7 @@
 
 
 
+main = LoadSimulator.main
+
 if __name__ == '__main__':
     main()
-

Copied: CalendarServer/branches/users/cdaboo/component-set-fixes/doc/Admin/MultiServerDeployment.txt (from rev 8344, CalendarServer/trunk/doc/Admin/MultiServerDeployment.txt)
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/doc/Admin/MultiServerDeployment.txt	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/doc/Admin/MultiServerDeployment.txt	2011-11-28 21:07:52 UTC (rev 8346)
@@ -0,0 +1,205 @@
+==========================
+Multi-server Deployment
+==========================
+
+Calendar Server version 3 and later uses a database as the primary data store, instead of the filesystem store used by previous versions. This enables the familiar scaling pattern of offloading the DB to a dedicated server or cluster, then adding front-end Calendar Server hosts as needed. This document highlights the key elements of a multi-server Calendar Server deployment.
+
+* `Database Connectivity`_: By default, Calendar Server assumes the DB is hosted locally on a unix domain socket, so you must add configuration to connect to an external DB service over the network.
+
+* `Database Setup and Schema Management`_: When connecting to an external DB, the administrator is responsible for applying our database schema to initialize the database for use by Calendar Server, using the calendarserver_bootstrap_database tool.
+
+* `Memcached`_: All Calendar Server hosts need to share access to memcached.
+
+* `Proxy Database`_: Normally the Proxy (delegation) database is kept in a local sqlite database, which is not sharable. Create an additional database on the DB server to hold the Proxy DB, then configure all the servers to use it.
+
+* `Directory Services`_: All servers should have access to the same directory services data that defines users, groups, resources, and locations. Calendar Server provides a highly flexible LDAP client to leverage existing directory servers, or you can use local XML files.
+
+* `Client Connectivity`_: Use either a load balancer or round-robin DNS, and configure all servers to use the same ServerHostName in caldavd.plist.
+
+* `Shared Storage for Attachments`_: AttachmentsRoot should point to storage shared across all servers, e.g. an NFS mount. Used for file attachments to calendar events.
+
+* `General Advice`_: *No one wants advice - only corroboration.*  --John Steinbeck
+
+---------------------
+Database Connectivity
+---------------------
+
+There are a few configuration parameters in caldavd.plist that control Calendar Server's behavior with respect to database use. The relevant caldavd.plist entries and their default values are shown and described below (as defined in `stdconfig.py <https://trac.calendarserver.org/browser/CalendarServer/trunk/twistedcaldav/stdconfig.py>`_).
+
+::
+
+   "UseDatabase"  : True, # True: database; False: files
+
+   "DBType"       : "",   # 2 possible values: empty, meaning 'spawn postgres
+                          # yourself', or 'postgres', meaning 'connect to a
+                          # postgres database as specified by the 'DSN'
+                          # configuration key.  Will support more values in
+                          # the future.
+
+   "DSN"          : "",   # Data Source Name.  Used to connect to an external
+                          # database if DBType is non-empty.  Format varies
+                          # depending on database type. 
+
+All of the above are top-level keys in caldavd.plist.
+
+The DSN is a colon-separated string defined in PyGreSQL-4.0/pgdb.py and has the following structure:
+
+::
+
+ dbhost = params[0]
+ dbbase = params[1]
+ dbuser = params[2]
+ dbpasswd = params[3]
+ dbopt = params[4]
+ dbtty = params[5]
+
+When no hostname is specified, Calendar Server assumes the use of a local unix domain socket (found in the directory defined by the RunRoot config key).
+
+Example of a 'remote postgres via TCP' configuration:
+
+::
+
+ <key>DBType</key>
+ <string>postgres</string>
+ <key>DSN</key>
+ <string>hostname:dbname:dbuser:dbpass::</string>
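For reference, pgdb splits the colon-separated DSN into the six positional fields listed earlier. A minimal sketch of that split (illustrative only, not Calendar Server code):

```python
def parse_dsn(dsn):
    """Split a pgdb-style DSN into its six positional fields."""
    fields = ("dbhost", "dbbase", "dbuser", "dbpasswd", "dbopt", "dbtty")
    params = dsn.split(":")
    params += [""] * (len(fields) - len(params))  # pad missing trailing fields
    return dict(zip(fields, params[:len(fields)]))

print(parse_dsn("hostname:dbname:dbuser:dbpass::"))
```

An empty dbhost field is how the local-unix-domain-socket default is expressed.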
+
+
+------------------------------------
+Database Setup and Schema Management
+------------------------------------
+
+Whenever DBType is set, Calendar Server is not responsible for the lifecycle of the database, nor is it responsible for the setup and schema population - these tasks are now the responsibility of the administrator. Once caldavd.plist is configured for your database, use the `calendarserver_bootstrap_database <https://trac.calendarserver.org/browser/CalendarServer/trunk/bin/calendarserver_bootstrap_database>`_ `tool <https://trac.calendarserver.org/browser/CalendarServer/trunk/calendarserver/tools/bootstrapdatabase.py>`_ to populate the Calendar Server `schema <https://trac.calendarserver.org/browser/CalendarServer/trunk/txdav/common/datastore/sql_schema>`_ in your database. Starting and stopping the database should be accomplished using native tools (e.g. pg_ctl). The database should be started before Calendar Server, and stopped after Calendar Server.
+
+It is critically important that your database server keeps updated statistics about your database, which allows the database query planner to select appropriate performance optimizations. Refer to your database server documentation for details.
+
+--------------
+Memcached
+--------------
+
+The default memcached settings are found in `stdconfig.py <https://trac.calendarserver.org/browser/CalendarServer/trunk/twistedcaldav/stdconfig.py>`_. By default there is one memcached 'pool' that is automatically managed by Calendar Server. For a multi-server deployment, an additional memcached instance is required, and both instances must be shared across all servers. A sample configuration is shown below, which instructs Calendar Server to connect to the 'Default' pool at example.com port 11211, and the 'ProxyDB' pool at example.com port 11311.
+
+::
+
+    <!-- Memcache Settings -->
+    <key>Memcached</key>
+    <dict>
+      <key>MaxClients</key>
+      <integer>5</integer>
+      <key>Options</key>
+      <array>
+        <string>-U</string>
+        <string>0</string>
+        <string>-m</string>
+        <string>6000</string>
+      </array>
+      <key>Pools</key>
+      <dict>
+        <key>Default</key>
+        <dict>
+          <key>ClientEnabled</key>
+          <true/>
+          <key>ServerEnabled</key>
+          <false/>
+          <key>BindAddress</key>
+          <string>EXAMPLE.COM</string>
+          <key>Port</key>
+          <integer>11211</integer>
+          <key>HandleCacheTypes</key>
+          <array>
+            <string>Default</string>
+          </array>
+        </dict>
+        <key>ProxyDB</key>
+        <dict>
+          <key>BindAddress</key>
+          <string>EXAMPLE.COM</string>
+          <key>ClientEnabled</key>
+          <true/>
+          <key>ServerEnabled</key>
+          <false/>
+          <key>HandleCacheTypes</key>
+          <array>
+            <string>ProxyDB</string>
+            <string>PrincipalToken</string>
+          </array>
+          <key>Port</key>
+          <integer>11311</integer>
+        </dict>
+      </dict>
+      <key>memcached</key>
+      <string>memcached</string>
+    </dict>
+
+This defines two memcache pools, one called Default and one called ProxyDB. In this configuration, the administrator is expected to ensure that there is a memcache instance running on host EXAMPLE.COM listening on port 11211 (Default), and another one also on EXAMPLE.COM listening on 11311 (ProxyDB). All calendar servers need to have the same memcache configuration. Memcache should start first and stop last, relative to Calendar Server and postgres.
+
+----------------
+Proxy Database
+----------------
+
+The Proxy DB (for delegation) is typically stored on disk in an sqlite DB, which does not allow for concurrent access across multiple hosts. To address this, create an additional DB in the postgres server, then edit caldavd.plist to add something like the following, and disable any other ProxyDB configuration.
+
+::
+
+    <!-- PostgreSQL ProxyDB Service -->
+    <key>ProxyDBService</key>
+    <dict>
+      <key>type</key>
+      <string>twistedcaldav.directory.calendaruserproxy.ProxyPostgreSQLDB</string>
+      <key>params</key>
+      <dict>
+        <key>dbtype</key>
+        <string>ProxyDB</string>
+        <key>host</key>
+        <string>PARADISE-FALLS</string>
+        <key>database</key>
+        <string>FOSSILS</string>
+        <key>user</key>
+        <string>MUNTZ</string>
+      </dict>
+    </dict>
+
+As with the memcache config, all calendar servers should have the same ProxyDBService config. In the shown example, the server will expect to access a database called FOSSILS as user MUNTZ on the postgres server PARADISE-FALLS. Unlike with the primary calendar data store, calendar server is prepared to initialize the schema of this database at runtime if it does not exist - so nothing is required beyond creating the empty db, creating the db user with appropriate access, and applying some caldavd.plist configuration.
+
+-------------------
+Directory Services
+-------------------
+
+It is critical that all servers use the same directory services data that defines the users (and their passwords), groups, resources, and locations used by Calendar Server. By default, this data is stored in local XML files, which is not ideally suited to a multi-server deployment, although it can still work if the administrator is willing to manage the workflow of updating and distributing those files to all servers.
+
+In addition, Calendar Server provides a very configurable LDAP client interface for accessing external directory services data. Administrators familiar with LDAP should need little more than to look at `twistedcaldav/stdconfig.py <https://trac.calendarserver.org/browser/CalendarServer/trunk/twistedcaldav/stdconfig.py>`_ for the available options to get started. Calendar Server will perform standard LDAP bind authentication to authenticate clients.
+
+Open Directory is also available when running on Mac OS X or Mac OS X Server.
+
+-------------------
+Client Connectivity
+-------------------
+
+Use either a load balancer or round-robin DNS, and configure all servers to use the same ServerHostName in caldavd.plist. A load balancer provides the most even distribution of work across available servers, and greater resiliency in case of individual server failure. Round-robin DNS is simpler and should work fine; however, be aware that DNS caches may result in a given client 'sticking' to a server for a while. Using the same ServerHostName everywhere allows all servers to have the exact same caldavd.plist, which is strongly recommended for simplicity.
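As an illustration, a round-robin setup needs nothing more than multiple A records for the shared service name (hypothetical BIND zone fragment; the name and addresses are placeholders):

```
; caldav.example.com resolves to each front-end host in turn
caldav.example.com.  300  IN  A  192.0.2.10
caldav.example.com.  300  IN  A  192.0.2.11
caldav.example.com.  300  IN  A  192.0.2.12
```

The short TTL (300 seconds) limits how long a cached answer can pin a client to one host.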
+
+-------------------------------
+Shared Storage for Attachments
+-------------------------------
+
+Set the caldavd.plist key AttachmentsRoot to a filesystem directory that is shared and writable by all Calendar Server machines, for example an NFS export. This directory is used to store the file attachments that users attach to calendar events.
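
For instance, assuming the shared export is mounted at ``/mnt/calshare`` on every host (a hypothetical mount point), the caldavd.plist entry would be:

```xml
<key>AttachmentsRoot</key>
<string>/mnt/calshare/AttachmentsRoot</string>
```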
+
+-------------------
+General Advice
+-------------------
+
+* Ensure caldavd.plist is identical on all Calendar Server hosts. This is not strictly required, but recommended to keep things as predictable as possible. Since you already have shared storage for AttachmentsRoot, use that to host the 'conf' directory for all servers as well; this way you don't need to push config changes out to the servers.
+
+* Use the various `tools and utilities <https://trac.calendarserver.org/browser/CalendarServer/trunk/contrib/tools>`_ to monitor activity in real time, and also for post-processing access logs.
+
+* Be sure you are getting the most from an individual server before deciding you need additional hosts (other than for redundancy). To optimize a single-server configuration, experiment with the caldavd.plist keys MultiProcessCount (the number of daemons spawned) and MaxRequests (the number of requests a daemon will process concurrently). If your Calendar Server isn't above 80% CPU use for sustained periods, you most likely don't need more Calendar Server hosts.
+
+* Ensure that your database's table statistics are updated at a reasonable interval. "Reasonable" depends entirely on how quickly your data changes in shape and size. In particular, be sure to update stats after any bulk changes.
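
  With PostgreSQL, for example, refreshing planner statistics after a bulk change can be as simple as running ANALYZE (autovacuum typically handles the routine case):

```sql
-- Refresh planner statistics for all tables after a bulk load
-- (PostgreSQL syntax; name a specific table to limit the scope).
ANALYZE;
```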
+
+* Tune the database for performance, using the methodologies appropriate for the database you are using. The DB server will need to accept up to MultiProcessCount * MaxRequests connections from each Calendar Server, unless MaxDBConnectionsPerPool is set, in which case the number is MultiProcessCount * MaxDBConnectionsPerPool per server, plus a handful more for other things like the notification sidecar or command line tools.
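
  The connection-budget arithmetic above can be sketched as follows. MultiProcessCount, MaxRequests, and MaxDBConnectionsPerPool are real caldavd.plist keys; the figures used here are purely illustrative, not sizing recommendations:

```python
def db_connections_per_host(multi_process_count,
                            max_requests,
                            max_db_connections_per_pool=None,
                            overhead=5):
    """Connections one Calendar Server host may open against the DB server.

    Each daemon needs up to MaxRequests connections, unless
    MaxDBConnectionsPerPool caps the per-daemon pool; 'overhead' is a
    small allowance for sidecars and command-line tools.
    """
    per_daemon = (max_db_connections_per_pool
                  if max_db_connections_per_pool else max_requests)
    return multi_process_count * per_daemon + overhead

# Without MaxDBConnectionsPerPool: 8 daemons x 10 concurrent requests + 5
print(db_connections_per_host(8, 10))        # 85
# With MaxDBConnectionsPerPool = 4: 8 daemons x 4 pooled connections + 5
print(db_connections_per_host(8, 10, 4))     # 37
```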
+
+* Test scenario: with a well-tuned multi-server deployment of identically configured CalDAV servers behind a load balancer, and a separate Postgres server with a fast RAID 0 array, in a low-latency lab environment using simulated iCal client load, it takes 5 or 6 CalDAV servers to saturate the Postgres server (which becomes I/O bound at a load of about 55,000 simulated users in this test).
+
+* To eliminate all single points of failure, implement high-availability for memcache, the database, the directory service, the shared storage for AttachmentsRoot, and the network load balancer(s).
+
+* When using an external directory service such as LDAP or Open Directory, overall Calendar Server performance is highly dependent on the responsiveness of the directory service.
+

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/support/build.sh
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/support/build.sh	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/support/build.sh	2011-11-28 21:07:52 UTC (rev 8346)
@@ -718,7 +718,7 @@
     "${pypi}/p/python-ldap/${ld}.tar.gz";
 
   # XXX actually PyCalendar should be imported in-place.
-  py_dependency -fe -i "src" -r 179 \
+  py_dependency -fe -i "src" -r 184 \
     "pycalendar" "pycalendar" "pycalendar" \
     "http://svn.mulberrymail.com/repos/PyCalendar/branches/server";
 


Property changes on: CalendarServer/branches/users/cdaboo/component-set-fixes/support/build.sh
___________________________________________________________________
Modified: svn:mergeinfo
   - /CalendarServer/branches/config-separation/support/build.sh:4379-4443
/CalendarServer/branches/egg-info-351/support/build.sh:4589-4615
/CalendarServer/branches/generic-sqlstore/support/build.sh:6167-6191
/CalendarServer/branches/new-store-no-caldavfile-2/support/build.sh:5936-5981
/CalendarServer/branches/new-store-no-caldavfile/support/build.sh:5911-5935
/CalendarServer/branches/new-store/support/build.sh:5594-5934
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692/support/build.sh:5693-5702
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627/support/build.sh:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace/support/build.sh:8137-8141
/CalendarServer/branches/users/cdaboo/more-sharing-5591/support/build.sh:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464/support/build.sh:4465-4957
/CalendarServer/branches/users/cdaboo/pods/support/build.sh:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar/support/build.sh:7085-7206
/CalendarServer/branches/users/cdaboo/pycard/support/build.sh:7227-7237
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070/support/build.sh:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187/support/build.sh:5188-5440
/CalendarServer/branches/users/cdaboo/timezones/support/build.sh:7443-7699
/CalendarServer/branches/users/glyph/conn-limit/support/build.sh:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge/support/build.sh:4971-5080
/CalendarServer/branches/users/glyph/dalify/support/build.sh:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect/support/build.sh:6824-6876
/CalendarServer/branches/users/glyph/deploybuild/support/build.sh:7563-7572
/CalendarServer/branches/users/glyph/dont-start-postgres/support/build.sh:6592-6614
/CalendarServer/branches/users/glyph/linux-tests/support/build.sh:6893-6900
/CalendarServer/branches/users/glyph/misc-portability-fixes/support/build.sh:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6/support/build.sh:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7/support/build.sh:6369-6445
/CalendarServer/branches/users/glyph/new-export/support/build.sh:7444-7485
/CalendarServer/branches/users/glyph/oracle-nulls/support/build.sh:7340-7351
/CalendarServer/branches/users/glyph/sendfdport/support/build.sh:5388-5424
/CalendarServer/branches/users/glyph/sharedpool/support/build.sh:6490-6550
/CalendarServer/branches/users/glyph/sql-store/support/build.sh:5929-6073
/CalendarServer/branches/users/glyph/subtransactions/support/build.sh:7248-7258
/CalendarServer/branches/users/glyph/use-system-twisted/support/build.sh:5084-5149
/CalendarServer/branches/users/sagen/applepush/support/build.sh:8126-8184
/CalendarServer/branches/users/sagen/inboxitems/support/build.sh:7380-7381
/CalendarServer/branches/users/sagen/locations-resources-2/support/build.sh:5052-5061
/CalendarServer/branches/users/sagen/locations-resources/support/build.sh:5032-5051
/CalendarServer/branches/users/sagen/purge_old_events/support/build.sh:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038/support/build.sh:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066/support/build.sh:4068-4075
/CalendarServer/branches/users/sagen/resources-2/support/build.sh:5084-5093
/CalendarServer/branches/users/wsanchez/transations/support/build.sh:5515-5593
/CalendarServer/trunk/support/build.sh:8130-8260
   + /CalendarServer/branches/config-separation/support/build.sh:4379-4443
/CalendarServer/branches/egg-info-351/support/build.sh:4589-4615
/CalendarServer/branches/generic-sqlstore/support/build.sh:6167-6191
/CalendarServer/branches/new-store-no-caldavfile-2/support/build.sh:5936-5981
/CalendarServer/branches/new-store-no-caldavfile/support/build.sh:5911-5935
/CalendarServer/branches/new-store/support/build.sh:5594-5934
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692/support/build.sh:5693-5702
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627/support/build.sh:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace/support/build.sh:8137-8141
/CalendarServer/branches/users/cdaboo/more-sharing-5591/support/build.sh:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464/support/build.sh:4465-4957
/CalendarServer/branches/users/cdaboo/pods/support/build.sh:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar/support/build.sh:7085-7206
/CalendarServer/branches/users/cdaboo/pycard/support/build.sh:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes/support/build.sh:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070/support/build.sh:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187/support/build.sh:5188-5440
/CalendarServer/branches/users/cdaboo/timezones/support/build.sh:7443-7699
/CalendarServer/branches/users/glyph/conn-limit/support/build.sh:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge/support/build.sh:4971-5080
/CalendarServer/branches/users/glyph/dalify/support/build.sh:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect/support/build.sh:6824-6876
/CalendarServer/branches/users/glyph/deploybuild/support/build.sh:7563-7572
/CalendarServer/branches/users/glyph/dont-start-postgres/support/build.sh:6592-6614
/CalendarServer/branches/users/glyph/linux-tests/support/build.sh:6893-6900
/CalendarServer/branches/users/glyph/misc-portability-fixes/support/build.sh:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6/support/build.sh:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7/support/build.sh:6369-6445
/CalendarServer/branches/users/glyph/new-export/support/build.sh:7444-7485
/CalendarServer/branches/users/glyph/oracle-nulls/support/build.sh:7340-7351
/CalendarServer/branches/users/glyph/sendfdport/support/build.sh:5388-5424
/CalendarServer/branches/users/glyph/sharedpool/support/build.sh:6490-6550
/CalendarServer/branches/users/glyph/sql-store/support/build.sh:5929-6073
/CalendarServer/branches/users/glyph/subtransactions/support/build.sh:7248-7258
/CalendarServer/branches/users/glyph/use-system-twisted/support/build.sh:5084-5149
/CalendarServer/branches/users/sagen/applepush/support/build.sh:8126-8184
/CalendarServer/branches/users/sagen/inboxitems/support/build.sh:7380-7381
/CalendarServer/branches/users/sagen/locations-resources-2/support/build.sh:5052-5061
/CalendarServer/branches/users/sagen/locations-resources/support/build.sh:5032-5051
/CalendarServer/branches/users/sagen/purge_old_events/support/build.sh:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038/support/build.sh:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066/support/build.sh:4068-4075
/CalendarServer/branches/users/sagen/resources-2/support/build.sh:5084-5093
/CalendarServer/branches/users/wsanchez/transations/support/build.sh:5515-5593
/CalendarServer/trunk/support/build.sh:8130-8344

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/support/submit
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/support/submit	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/support/submit	2011-11-28 21:07:52 UTC (rev 8346)
@@ -23,7 +23,7 @@
 set -e
 set -u
 
-version="14";
+version="15";
 
  wd="$(cd "$(dirname "$0")" && pwd)";
 src="$(cd "${wd}/.." && pwd)";
@@ -45,7 +45,7 @@
 
   if [ "${1-}" != "-" ]; then echo "$@"; echo; fi;
 
-  echo "Usage: ${program} release";
+  echo "Usage: ${program} release [release ...]";
   echo "       ${program} -b[ip]";
   echo "";
   echo "Options:";
@@ -74,8 +74,8 @@
   if "${install}"; then usage "-i flag requires -b"; fi;
   if "${package}"; then usage "-p flag requires -b"; fi;
 
-  if [ $# == 0 ]; then usage "No release specified"; fi;
-  release="$1"; shift;
+  if [ $# == 0 ]; then usage "No releases specified"; fi;
+  releases="$@"; shift $#;
 
   if ! "${submission_enabled}"; then
     echo "Submissions from this branch are not enabled.";
@@ -190,7 +190,7 @@
 else
   echo "";
   echo "Submitting sources for ${project_version}...";
-  submitproject "${wc}" "${release}";
+  submitproject "${wc}" ${releases};
 fi;
 
 rm -rf "${tmp}";

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/support/version.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/support/version.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/support/version.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -24,7 +24,7 @@
     # Compute the version number.
     #
 
-    base_version = "3.2"
+    base_version = "4.0"
 
     branches = (
         "tags/release/CalendarServer-" + base_version,
@@ -49,7 +49,7 @@
             base_version += "-dev"
 
         if svn_revision == "exported":
-            if "RC_JASPER" in os.environ and os.environ["RC_JASPER"] == "YES":
+            if "RC_XBS" in os.environ and os.environ["RC_XBS"] == "YES":
                 project_name = basename(os.environ["SRCROOT"])
 
                 prefix = "CalendarServer-"

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twext/web2/dav/method/prop_common.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twext/web2/dav/method/prop_common.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twext/web2/dav/method/prop_common.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -93,10 +93,10 @@
                 properties_by_status[responsecode.OK].append(prop)
             except:
                 f = Failure()
-    
-                log.err("Error reading property %r for resource %s: %s" % (qname, request.uri, f.value))
-    
                 status = statusForFailure(f, "getting property: %s" % (qname,))
+                if status != responsecode.NOT_FOUND:
+                    log.err("Error reading property %r for resource %s: %s" %
+                            (qname, request.uri, f.value))
                 if status not in properties_by_status: properties_by_status[status] = []
                 properties_by_status[status].append(propertyName(qname))
         else:

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twext/web2/test/test_server.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twext/web2/test/test_server.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twext/web2/test/test_server.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -352,6 +352,9 @@
             (200, {}, "prepath:[] postpath:['consumed', 'path', 'segments']"))
 
     def test_redirectResource(self):
+        """
+        Make sure a redirect response has the correct status and Location header.
+        """
         redirectResource = resource.RedirectResource(scheme='https',
                                                      host='localhost',
                                                      port=443,
@@ -363,6 +366,10 @@
             (301, {'location': 'https://localhost/foo?bar=baz'}, None))
 
     def test_redirectResourceWithSchemeRemapping(self):
+        """
+        Make sure a redirect response has the correct status and Location header, when
+        SSL is on, and the client request uses scheme http with the SSL port.
+        """
 
         def chanrequest2(root, uri, length, headers, method, version, prepath, content):
             site = server.Site(root)
@@ -380,6 +387,10 @@
             (301, {'location': 'https://localhost:8443/foo'}, None))
 
     def test_redirectResourceWithoutSchemeRemapping(self):
+        """
+        Make sure a redirect response has the correct status and Location header, when
+        SSL is on, and the client request uses scheme http with the non-SSL port.
+        """
 
         def chanrequest2(root, uri, length, headers, method, version, prepath, content):
             site = server.Site(root)
@@ -397,6 +408,10 @@
             (301, {'location': 'http://localhost:8008/foo'}, None))
 
     def test_redirectResourceWithoutSSLSchemeRemapping(self):
+        """
+        Make sure a redirect response has the correct status and Location header, when
+        SSL is off, and the client request uses scheme http with the SSL port.
+        """
 
         def chanrequest2(root, uri, length, headers, method, version, prepath, content):
             site = server.Site(root)

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/directory/directory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/directory/directory.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/directory/directory.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -637,10 +637,13 @@
             self.log_info("Applying proxy assignment changes")
             assignmentCount = 0
             for principalUID, members in assignments:
-                current = (yield self.proxyDB.getMembers(principalUID))
-                if members != current:
-                    assignmentCount += 1
-                    yield self.proxyDB.setGroupMembers(principalUID, members)
+                try:
+                    current = (yield self.proxyDB.getMembers(principalUID))
+                    if members != current:
+                        assignmentCount += 1
+                        yield self.proxyDB.setGroupMembers(principalUID, members)
+                except Exception, e:
+                    self.log_error("Unable to apply proxy assignment: principal=%s, members=%s, error=%s" % (principalUID, members, e))
             self.log_info("Applied %d assignment%s to proxy database" %
                 (assignmentCount, "" if assignmentCount == 1 else "s"))
 

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/directory/ldapdirectory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/directory/ldapdirectory.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/directory/ldapdirectory.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -270,6 +270,10 @@
                     (repr(self.credentials.get("dn")),))
                 self.ldap.simple_bind_s(self.credentials.get("dn"),
                     self.credentials.get("password"))
+            except ldap.SERVER_DOWN:
+                msg = "Can't connect to LDAP %s: server down" % (self.uri,)
+                self.log_error(msg)
+                raise DirectoryConfigurationError(msg)
             except ldap.INVALID_CREDENTIALS:
                 msg = "Can't bind to LDAP %s: check credentials" % (self.uri,)
                 self.log_error(msg)
@@ -314,6 +318,7 @@
         numMissingGuids = 0
         guidAttr = self.rdnSchema["guidAttr"]
         for dn, attrs in results:
+            dn = normalizeDNstr(dn)
 
             unrestricted = True
             if self.restrictedGUIDs is not None:
@@ -370,6 +375,7 @@
             ldap.SCOPE_SUBTREE, filterstr=filterstr, attrlist=attrlist)
 
         for dn, attrs in results:
+            dn = normalizeDNstr(dn)
             guid = self._getUniqueLdapAttribute(attrs, guidAttr)
             if guid:
                 readDelegate = self._getUniqueLdapAttribute(attrs, readAttr)
@@ -470,6 +476,9 @@
         try:
             result = self.ldap.search_s(base, scope, filterstr=filterstr,
                 attrlist=attrlist)
+        except ldap.SERVER_DOWN:
+            self.log_error("LDAP server unavailable")
+            raise HTTPError(StatusResponse(responsecode.SERVICE_UNAVAILABLE, "LDAP server unavailable"))
         except ldap.NO_SUCH_OBJECT:
             result = []
         except ldap.FILTER_ERROR, e:
@@ -508,12 +517,13 @@
 
                 if len(result) == 1:
                     dn, attrs = result[0]
+                    dn = normalizeDNstr(dn)
                     if self.groupSchema["membersAttr"]:
-                        members = self._getMultipleLdapAttributes(attrs,
-                            self.groupSchema["membersAttr"])
+                        members = set(self._getMultipleLdapAttributes(attrs,
+                            self.groupSchema["membersAttr"]))
                     if self.groupSchema["nestedGroupsAttr"]:
-                        nestedGroups = self._getMultipleLdapAttributes(attrs,
-                            self.groupSchema["nestedGroupsAttr"])
+                        nestedGroups = set(self._getMultipleLdapAttributes(attrs,
+                            self.groupSchema["nestedGroupsAttr"]))
 
                 else:
                     members = []
@@ -563,15 +573,16 @@
 
             if len(result) == 1:
                 dn, attrs = result[0]
+                dn = normalizeDNstr(dn)
                 if self.groupSchema["membersAttr"]:
-                    subMembers = self._getMultipleLdapAttributes(attrs,
-                        self.groupSchema["membersAttr"])
+                    subMembers = set(self._getMultipleLdapAttributes(attrs,
+                        self.groupSchema["membersAttr"]))
                 else:
                     subMembers = []
 
                 if self.groupSchema["nestedGroupsAttr"]:
-                    subNestedGroups = self._getMultipleLdapAttributes(attrs,
-                        self.groupSchema["nestedGroupsAttr"])
+                    subNestedGroups = set(self._getMultipleLdapAttributes(attrs,
+                        self.groupSchema["nestedGroupsAttr"]))
                 else:
                     subNestedGroups = []
 
@@ -605,7 +616,7 @@
             values = attrs.get(key)
             if values is not None:
                 results += values
-        return set(results)
+        return results
 
 
     def _ldapResultToRecord(self, dn, attrs, recordType):
@@ -640,7 +651,7 @@
                 raise MissingGuidException()
 
         # Find or build email
-        emailAddresses = self._getMultipleLdapAttributes(attrs, self.rdnSchema[recordType]["mapping"]["emailAddresses"])
+        emailAddresses = set(self._getMultipleLdapAttributes(attrs, self.rdnSchema[recordType]["mapping"]["emailAddresses"]))
         emailSuffix = self.rdnSchema[recordType]["emailSuffix"]
 
         if len(emailAddresses) == 0 and emailSuffix:
@@ -651,7 +662,7 @@
         proxyGUIDs = ()
         readOnlyProxyGUIDs = ()
         autoSchedule = False
-        memberGUIDs = set()
+        memberGUIDs = []
 
         # LDAP attribute -> principal matchings
         if recordType == self.recordType_users:
@@ -669,18 +680,16 @@
 
             if self.groupSchema["membersAttr"]:
                 members = self._getMultipleLdapAttributes(attrs, self.groupSchema["membersAttr"])
-                if members:
-                    if type(members) is str:
-                        members = set([members])
-                    memberGUIDs.update(members)
+                memberGUIDs.extend(members)
             if self.groupSchema["nestedGroupsAttr"]:
                 members = self._getMultipleLdapAttributes(attrs, self.groupSchema["nestedGroupsAttr"])
-                if members:
-                    if type(members) is str:
-                        members = set([members])
-                    memberGUIDs.update(members)
+                memberGUIDs.extend(members)
 
+            # Normalize members if they're in DN form
+            if not self.groupSchema["memberIdAttr"]: # empty = dn
+                memberGUIDs = [normalizeDNstr(dnStr) for dnStr in list(memberGUIDs)]
 
+
         elif recordType in (self.recordType_resources,
             self.recordType_locations):
             fullName = self._getUniqueLdapAttribute(attrs, self.rdnSchema[recordType]["mapping"]["fullName"])
@@ -715,11 +724,11 @@
                     autoSchedule = (autoScheduleValue ==
                         self.resourceSchema["autoScheduleEnabledValue"])
                 if self.resourceSchema["proxyAttr"]:
-                    proxyGUIDs = self._getMultipleLdapAttributes(attrs,
-                        self.resourceSchema["proxyAttr"])
+                    proxyGUIDs = set(self._getMultipleLdapAttributes(attrs,
+                        self.resourceSchema["proxyAttr"]))
                 if self.resourceSchema["readOnlyProxyAttr"]:
-                    readOnlyProxyGUIDs = self._getMultipleLdapAttributes(attrs,
-                        self.resourceSchema["readOnlyProxyAttr"])
+                    readOnlyProxyGUIDs = set(self._getMultipleLdapAttributes(attrs,
+                        self.resourceSchema["readOnlyProxyAttr"]))
 
         serverID = partitionID = None
         if self.partitionSchema["serverIdAttr"]:
@@ -854,6 +863,7 @@
 
             if result:
                 dn, attrs = result.pop()
+                dn = normalizeDNstr(dn)
 
                 unrestricted = True
                 if self.restrictedGUIDs is not None:
@@ -934,6 +944,7 @@
                 self.log_debug("LDAP search returned %d results" % (len(results),))
                 numMissingGuids = 0
                 for dn, attrs in results:
+                    dn = normalizeDNstr(dn)
                     # Skip if group restriction is in place and guid is not
                     # a member
                     if (recordType != self.recordType_groups and
@@ -985,6 +996,7 @@
         attributeToSearch = "guid"
         valuesToFetch = guids
 
+
         while valuesToFetch:
             results = []
 
@@ -1013,10 +1025,10 @@
                 if alias not in recordsByAlias:
                     recordsByAlias[alias] = record
 
-                # record._memberIds contains the members of this group,
+                # record.memberGUIDs() contains the members of this group,
                 # but it might not be in guid form; it will be data from
                 # self.groupSchema["memberIdAttr"]
-                for memberAlias in record._memberIds:
+                for memberAlias in record.memberGUIDs():
                     if not memberIdAttr:
                         # Members are identified by dn so we can take a short
                         # cut:  we know we only need to examine groups, and
@@ -1056,6 +1068,16 @@
     return child[-len(parent):] == parent
 
 
+def normalizeDNstr(dnStr):
+    """
+    Convert to lowercase and remove extra whitespace
+    @param dnStr: dn
+    @type dnStr: C{str}
+    @return: normalized dn C{str}
+    """
+    return ' '.join(ldap.dn.dn2str(ldap.dn.str2dn(dnStr.lower())).split())
+
+
 def buildFilter(mapping, fields, operand="or"):
     """
     Create an LDAP filter string from a list of tuples representing directory
@@ -1138,22 +1160,13 @@
         # Store copy of member guids
         self._memberGUIDs = memberGUIDs
 
-        # Identifiers of the members of this record if it is a group
-        membersAttrs = []
-        if self.service.groupSchema["membersAttr"]:
-            membersAttrs.append(self.service.groupSchema["membersAttr"])
-        if self.service.groupSchema["nestedGroupsAttr"]:
-            membersAttrs.append(self.service.groupSchema["nestedGroupsAttr"])
-        self._memberIds = self.service._getMultipleLdapAttributes(attrs,
-            *membersAttrs)
-
         # Identifier of this record as a group member
         memberIdAttr = self.service.groupSchema["memberIdAttr"]
         if memberIdAttr:
             self._memberId = self.service._getUniqueLdapAttribute(attrs,
                 memberIdAttr)
         else:
-            self._memberId = self.dn
+            self._memberId = normalizeDNstr(self.dn)
 
 
     def members(self):
@@ -1171,7 +1184,7 @@
         memberIdAttr = self.service.groupSchema["memberIdAttr"]
         results = []
 
-        for memberId in self._memberIds:
+        for memberId in self._memberGUIDs:
 
             if memberIdAttr:
 
@@ -1194,6 +1207,7 @@
             if result:
 
                 dn, attrs = result.pop()
+                dn = normalizeDNstr(dn)
                 self.log_debug("Retrieved: %s %s" % (dn,attrs))
                 recordType = self.service.recordTypeForDN(dn)
                 if recordType is None:
@@ -1245,6 +1259,7 @@
                 ldap.SCOPE_SUBTREE, filterstr=filterstr, attrlist=self.service.attrlist)
 
             for dn, attrs in results:
+                dn = normalizeDNstr(dn)
                 shortName = self.service._getUniqueLdapAttribute(attrs, "cn")
                 self.log_debug("%s is a member of %s" % (self._memberId, shortName))
                 groups.append(self.service.recordWithShortName(recordType,

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/directory/test/test_ldapdirectory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/directory/test/test_ldapdirectory.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/directory/test/test_ldapdirectory.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -17,13 +17,14 @@
 try:
     from twistedcaldav.directory.ldapdirectory import (
         buildFilter, LdapDirectoryService, MissingGuidException,
-        splitIntoBatches
+        splitIntoBatches, normalizeDNstr, dnContainedIn
     )
     from twistedcaldav.test.util import proxiesFile
     from twistedcaldav.directory.calendaruserproxyloader import XMLCalendarUserProxyLoader
     from twistedcaldav.directory import calendaruserproxy
     from twistedcaldav.directory.directory import GroupMembershipCache, GroupMembershipCacheUpdater
     from twisted.internet.defer import inlineCallbacks
+    from string import maketrans
     import ldap
 except ImportError:
     print "Skipping because ldap module not installed"
@@ -113,16 +114,150 @@
 
         def __init__(self, actual):
             self.actual = actual
-            self.testResults = []
 
-        def addTestResults(self, results):
-            self.testResults.insert(0, results)
+            # Test data returned from search_s.
+            # Note that some DNs have various extra whitespace added and mixed
+            # up case since LDAP is pretty loose about these.
+            self.records = (
+                (
+                    "cn=Recursive1_coasts, cn=gROUps,dc=example, dc=com",
+                    {
+                        'cn': ['recursive1_coasts'],
+                        'apple-generateduid': ['recursive1_coasts'],
+                        'uniqueMember': [
+                            'cn=recursive2_coasts,cn=groups,dc=example,dc=com',
+                            'uid=wsanchez ,cn=users, dc=eXAMple,dc=com',
+                        ],
+                    }
+                ),
+                (
+                    "cn=recursive2_coasts,cn=groups,dc=example,dc=com",
+                    {
+                        'cn': ['recursive2_coasts'],
+                        'apple-generateduid': ['recursive2_coasts'],
+                        'uniqueMember': [
+                            'cn=recursive1_coasts,cn=groups,dc=example,dc=com',
+                            'uid=cdaboo,cn=users,dc=example,dc=com',
+                        ],
+                    }
+                ),
+                (
+                    'cn=both_coasts,cn=groups,dc=example,dc=com',
+                    {
+                        'cn': ['both_coasts'],
+                        'apple-generateduid': ['both_coasts'],
+                        'uniqueMember': [
+                            'cn=right_coast,cn=groups,dc=example,dc=com',
+                            'cn=left_coast,cn=groups,dc=example,dc=com',
+                        ],
+                    }
+                ),
+                (
+                    'cn=right_coast,cn=groups,dc=example,dc=com',
+                    {
+                        'cn': ['right_coast'],
+                        'apple-generateduid': ['right_coast'],
+                        'uniqueMember': [
+                            'uid=cdaboo,cn=users,dc=example,dc=com',
+                        ],
+                    }
+                ),
+                (
+                    'cn=left_coast,cn=groups,dc=example,dc=com',
+                    {
+                        'cn': ['left_coast'],
+                        'apple-generateduid': ['left_coast'],
+                        'uniqueMember': [
+                            'uid=wsanchez, cn=users,dc=example,dc=com',
+                            'uid=lecroy,cn=users,dc=example,dc=com',
+                            'uid=dreid,cn=users,dc=example,dc=com',
+                        ],
+                    }
+                ),
+                (
+                    "uid=odtestamanda,cn=users,dc=example,dc=com",
+                    {
+                        'uid': ['odtestamanda'],
+                        'apple-generateduid': ['9DC04A70-E6DD-11DF-9492-0800200C9A66'],
+                        'sn': ['Test'],
+                        'mail': ['odtestamanda at example.com', 'alternate at example.com'],
+                        'givenName': ['Amanda'],
+                        'cn': ['Amanda Test']
+                    }
+                ),
+                (
+                    "uid=odtestbetty,cn=users,dc=example,dc=com",
+                    {
+                        'uid': ['odtestbetty'],
+                        'apple-generateduid': ['93A8F5C5-49D8-4641-840F-CD1903B0394C'],
+                        'sn': ['Test'],
+                        'mail': ['odtestbetty at example.com'],
+                        'givenName': ['Betty'],
+                        'cn': ['Betty Test']
+                    }
+                ),
+                (
+                    "uid=odtestcarlene,cn=users,dc=example,dc=com",
+                    {
+                        'uid': ['odtestcarlene'],
+                        # Note: no guid here, to test this record is skipped
+                        'sn': ['Test'],
+                        'mail': ['odtestcarlene at example.com'],
+                        'givenName': ['Carlene'],
+                        'cn': ['Carlene Test']
+                    }
+                ),
+                (
+                    "uid=cdaboo,cn=users,dc=example,dc=com",
+                    {
+                        'uid': ['cdaboo'],
+                        'apple-generateduid': ['5A985493-EE2C-4665-94CF-4DFEA3A89500'],
+                        'sn': ['Daboo'],
+                        'mail': ['daboo at example.com'],
+                        'givenName': ['Cyrus'],
+                        'cn': ['Cyrus Daboo']
+                    }
+                ),
+                (
+                    "uid=wsanchez  ,  cn=users  , dc=example,dc=com",
+                    {
+                        'uid': ['wsanchez'],
+                        'apple-generateduid': ['6423F94A-6B76-4A3A-815B-D52CFD77935D'],
+                        'sn': ['Sanchez'],
+                        'mail': ['wsanchez at example.com'],
+                        'givenName': ['Wilfredo'],
+                        'cn': ['Wilfredo Sanchez']
+                    }
+                ),
+            )
 
         def search_s(self, base, scope, filterstr="(objectClass=*)",
             attrlist=None):
-            return self.testResults.pop()
+            """A simple implementation of LDAP search-filter processing."""
 
+            base = normalizeDNstr(base)
+            results = []
+            for dn, attrs in self.records:
+                dn = normalizeDNstr(dn)
+                if dn == base:
+                    results.append((dn, attrs))
+                elif dnContainedIn(ldap.dn.str2dn(dn), ldap.dn.str2dn(base)):
+                    if filterstr in ("(objectClass=*)", "(!(objectClass=organizationalUnit))"):
+                        results.append((dn, attrs))
+                    else:
+                        trans = maketrans("&(|)", "   |")
+                        fragments = filterstr.encode("utf-8").translate(trans).split("|")
+                        for fragment in fragments:
+                            if not fragment:
+                                continue
+                            fragment = fragment.strip()
+                            key, value = fragment.split("=")
+                            if value in attrs.get(key, []):
+                                results.append((dn, attrs))
 
+            return results
+
+
     class LdapDirectoryServiceTestCase(TestCase):
 
         def setUp(self):
@@ -156,7 +291,7 @@
                         "rdn": "cn=Users",
                         "attr": "uid", # used only to synthesize email address
                         "emailSuffix": None, # used only to synthesize email address
-                        "filter": "(objectClass=apple-user)", # additional filter for this type
+                        "filter": "", # additional filter for this type
                         "loginEnabledAttr" : "", # attribute controlling login
                         "loginEnabledValue" : "yes", # "True" value of above attribute
                         "calendarEnabledAttr" : "enable-calendar", # attribute controlling calendaring
@@ -173,7 +308,7 @@
                         "rdn": "cn=Groups",
                         "attr": "cn", # used only to synthesize email address
                         "emailSuffix": None, # used only to synthesize email address
-                        "filter": "(objectClass=apple-group)", # additional filter for this type
+                        "filter": "", # additional filter for this type
                         "mapping": { # maps internal record names to LDAP
                             "recordName": "cn",
                             "fullName" : "cn",
@@ -234,6 +369,20 @@
             self.service.ldap = LdapDirectoryTestWrapper(self.service.ldap)
 
 
+        def test_ldapWrapper(self):
+            """
+            Exercise the fake search_s implementation
+            """
+            # Get all groups
+            self.assertEquals(
+                len(self.service.ldap.search_s("cn=groups,dc=example,dc=com", 0, "(objectClass=*)", [])), 5)
+
+            self.assertEquals(
+                len(self.service.ldap.search_s("cn=recursive1_coasts,cn=groups,dc=example,dc=com", 2, "(objectClass=*)", [])), 1)
+
+            self.assertEquals(
+                len(self.service.ldap.search_s("cn=groups,dc=example,dc=com", 0, "(|(apple-generateduid=right_coast)(apple-generateduid=left_coast))", [])), 2)
+
         def test_ldapRecordCreation(self):
             """
             Exercise _ldapResultToRecord(), which converts a dictionary
@@ -321,9 +470,9 @@
                 'apple-generateduid': [guid],
                 'uniqueMember':
                     [
-                        '9DC04A70-E6DD-11DF-9492-0800200C9A66',
-                        '9DC04A71-E6DD-11DF-9492-0800200C9A66',
-                        '6C6CD282-E6E3-11DF-9492-0800200C9A66'
+                        'uid=odtestamanda,cn=users,dc=example,dc=com',
+                        'uid=odtestbetty,cn=users,dc=example,dc=com',
+                        'cn=odtestgroupb,cn=groups,dc=example,dc=com',
                     ],
                 'cn': ['odtestgrouptop']
             }
@@ -331,9 +480,11 @@
                 self.service.recordType_groups)
             self.assertEquals(record.guid, guid)
             self.assertEquals(record.memberGUIDs(),
-                set(['6C6CD282-E6E3-11DF-9492-0800200C9A66',
-                     '9DC04A70-E6DD-11DF-9492-0800200C9A66',
-                     '9DC04A71-E6DD-11DF-9492-0800200C9A66'])
+                set([
+                     'cn=odtestgroupb,cn=groups,dc=example,dc=com',
+                     'uid=odtestamanda,cn=users,dc=example,dc=com',
+                     'uid=odtestbetty,cn=users,dc=example,dc=com',
+                     ])
             )
 
             # Resource with delegates and autoSchedule = True
@@ -433,46 +584,11 @@
             and turns the results into records
             """
 
-            self.service.ldap.addTestResults([
-                (
-                    "uid=odtestamanda,cn=users,dc=example,dc=com",
-                    {
-                        'uid': ['odtestamanda'],
-                        'apple-generateduid': ['9DC04A70-E6DD-11DF-9492-0800200C9A66'],
-                        'sn': ['Test'],
-                        'mail': ['odtestamanda at example.com', 'alternate at example.com'],
-                        'givenName': ['Amanda'],
-                        'cn': ['Amanda Test']
-                    }
-                ),
-                (
-                    "uid=odtestbetty,cn=users,dc=example,dc=com",
-                    {
-                        'uid': ['odtestbetty'],
-                        'apple-generateduid': ['93A8F5C5-49D8-4641-840F-CD1903B0394C'],
-                        'sn': ['Test'],
-                        'mail': ['odtestbetty at example.com'],
-                        'givenName': ['Betty'],
-                        'cn': ['Betty Test']
-                    }
-                ),
-                (
-                    "uid=odtestcarlene,cn=users,dc=example,dc=com",
-                    {
-                        'uid': ['odtestcarlene'],
-                        # Note: no guid here, to test this record is skipped
-                        'sn': ['Test'],
-                        'mail': ['odtestcarlene at example.com'],
-                        'givenName': ['Carlene'],
-                        'cn': ['Carlene Test']
-                    }
-                ),
-            ])
             records = self.service.listRecords(self.service.recordType_users)
-            self.assertEquals(len(records), 2)
+            self.assertEquals(len(records), 4)
             self.assertEquals(
                 set([r.firstName for r in records]),
-                set(["Amanda", "Betty"]) # Carlene is skipped because no guid in LDAP
+                set(["Amanda", "Betty", "Cyrus", "Wilfredo"]) # Carlene is skipped because no guid in LDAP
             )
 
         @inlineCallbacks
@@ -496,106 +612,6 @@
             updater = GroupMembershipCacheUpdater(calendaruserproxy.ProxyDBService,
                 self.service, 30, 15, cache=cache, useExternalProxies=False)
 
-            # Fake LDAP results for the getGroups() call performed within
-            # updateCache().  Also include recursive groups to make sure we
-            # handle that situation.
-            self.service.ldap.addTestResults([
-                (
-                    "cn=recursive1_coasts,cn=groups,dc=example,dc=com",
-                    {
-                        'cn': ['recursive1_coasts'],
-                        'apple-generateduid': ['recursive1_coasts'],
-                        'uniqueMember': [
-                            'cn=recursive2_coasts,cn=groups,dc=example,dc=com',
-                            'uid=wsanchez,cn=users,dc=example,dc=com',
-                        ],
-                    }
-                ),
-                (
-                    "cn=recursive2_coasts,cn=groups,dc=example,dc=com",
-                    {
-                        'cn': ['recursive2_coasts'],
-                        'apple-generateduid': ['recursive2_coasts'],
-                        'uniqueMember': [
-                            'cn=recursive1_coasts,cn=groups,dc=example,dc=com',
-                            'uid=cdaboo,cn=users,dc=example,dc=com',
-                        ],
-                    }
-                ),
-                (
-                    'cn=both_coasts,cn=groups,dc=example,dc=com',
-                    {
-                        'cn': ['both_coasts'],
-                        'apple-generateduid': ['both_coasts'],
-                        'uniqueMember': [
-                            'cn=right_coast,cn=groups,dc=example,dc=com',
-                            'cn=left_coast,cn=groups,dc=example,dc=com',
-                        ],
-                    }
-                ),
-                (
-                    'cn=right_coast,cn=groups,dc=example,dc=com',
-                    {
-                        'cn': ['right_coast'],
-                        'apple-generateduid': ['right_coast'],
-                        'uniqueMember': [
-                            'uid=cdaboo,cn=users,dc=example,dc=com',
-                        ],
-                    }
-                ),
-                (
-                    'cn=left_coast,cn=groups,dc=example,dc=com',
-                    {
-                        'cn': ['left_coast'],
-                        'apple-generateduid': ['left_coast'],
-                        'uniqueMember': [
-                            'uid=wsanchez,cn=users,dc=example,dc=com',
-                            'uid=lecroy,cn=users,dc=example,dc=com',
-                            'uid=dreid,cn=users,dc=example,dc=com',
-                        ],
-                    }
-                ),
-            ])
-            self.service.ldap.addTestResults([
-                (
-                    "cn=recursive2_coasts,cn=groups,dc=example,dc=com",
-                    {
-                        'cn': ['recursive2_coasts'],
-                        'apple-generateduid': ['recursive2_coasts'],
-                        'uniqueMember': [
-                            'cn=recursive1_coasts,cn=groups,dc=example,dc=com',
-                            'uid=cdaboo,cn=users,dc=example,dc=com',
-                        ],
-                    }
-                ),
-            ])
-            self.service.ldap.addTestResults([
-                (
-                    'cn=left_coast,cn=groups,dc=example,dc=com',
-                    {
-                        'cn': ['left_coast'],
-                        'apple-generateduid': ['left_coast'],
-                        'uniqueMember': [
-                            'uid=wsanchez,cn=users,dc=example,dc=com',
-                            'uid=lecroy,cn=users,dc=example,dc=com',
-                            'uid=dreid,cn=users,dc=example,dc=com',
-                        ],
-                    }
-                ),
-            ])
-            self.service.ldap.addTestResults([
-                (
-                    'cn=right_coast,cn=groups,dc=example,dc=com',
-                    {
-                        'cn': ['right_coast'],
-                        'apple-generateduid': ['right_coast'],
-                        'uniqueMember': [
-                            'uid=cdaboo,cn=users,dc=example,dc=com',
-                        ],
-                    }
-                ),
-            ])
-
             self.assertEquals((False, 8), (yield updater.updateCache()))
 
             users = self.service.recordType_users
@@ -605,18 +621,6 @@
                 ("wsanchez", set(["both_coasts", "left_coast", "recursive1_coasts", "recursive2_coasts"])),
             ]:
 
-                # Fake LDAP results for the record lookup
-                self.service.ldap.addTestResults([
-                    (
-                        "uid=%s,cn=users,dc=example,dc=com" % (shortName,),
-                        {
-                            'uid': [shortName],
-                            'cn': [shortName],
-                            'apple-generateduid': [shortName],
-                        }
-                    ),
-                ])
-
                 record = self.service.recordWithShortName(users, shortName)
                 self.assertEquals(groups, (yield record.cachedGroups()))
 
@@ -658,3 +662,16 @@
             # No Match
             dnStr = "uid=foo,cn=US ers ,dc=EXAMple,dc=com"
             self.assertEquals(self.service.recordTypeForDN(dnStr), None)
+
+        def test_normalizeDN(self):
+            for input, expected in (
+                ("uid=foo,cn=users,dc=example,dc=com",
+                 "uid=foo,cn=users,dc=example,dc=com"),
+                ("uid=FoO,cn=uSeRs,dc=ExAmPlE,dc=CoM",
+                 "uid=foo,cn=users,dc=example,dc=com"),
+                ("uid=FoO , cn=uS eRs , dc=ExA mPlE ,   dc=CoM",
+                 "uid=foo,cn=us ers,dc=exa mple,dc=com"),
+                ("uid=FoO , cn=uS  eRs , dc=ExA    mPlE ,   dc=CoM",
+                 "uid=foo,cn=us ers,dc=exa mple,dc=com"),
+            ):
+                self.assertEquals(expected, normalizeDNstr(input))
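
The fake `search_s` above matches non-trivial filters by translating the boolean operators into separators and testing each `key=value` fragment independently (it also special-cases `(objectClass=*)` before reaching this step). A standalone Python 3 sketch of just that matching step, with illustrative names (the wrapper itself is Python 2 and uses `string.maketrans`):

```python
def match_filter(filterstr, attrs):
    # Crude LDAP filter matcher in the spirit of the test wrapper: it
    # ignores boolean structure, turning "(&(a=1)(b=2))" or
    # "(|(a=1)(b=2))" into "a=1" / "b=2" fragments and succeeding if
    # any single fragment matches the attribute dict.
    trans = str.maketrans("&(|)", "   |")
    for fragment in filterstr.translate(trans).split("|"):
        fragment = fragment.strip()
        if not fragment:
            continue
        key, _, value = fragment.partition("=")
        if value in attrs.get(key, []):
            return True
    return False
```

Note this deliberately treats `&` and `|` the same way, which is only adequate for the simple filters the tests exercise.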

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/ical.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/ical.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/ical.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -1729,6 +1729,13 @@
                 yield attendee
 
 
+    def getAllUniqueAttendees(self, onlyScheduleAgentServer=True):
+        attendeesByInstance = self.getAttendeesByInstance(True, onlyScheduleAgentServer=onlyScheduleAgentServer)
+        attendees = set()
+        for attendee, _ignore in attendeesByInstance:
+            attendees.add(attendee)
+        return attendees
+
     def getMaskUID(self):
         """
         Get the X-CALENDARSERVER-MASK-UID value. Works on either a VCALENDAR or on a component.
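
The new `getAllUniqueAttendees` de-duplicates the per-instance attendee list returned by `getAttendeesByInstance`. The equivalent logic, sketched standalone (the names here are illustrative, not the actual `Component` API):

```python
def unique_attendees(attendees_by_instance):
    # attendees_by_instance is an iterable of (attendee, recurrence-id)
    # pairs; collecting into a set drops duplicates across instances.
    return set(attendee for attendee, _rid in attendees_by_instance)
```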

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/mail.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/mail.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/mail.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -707,7 +707,9 @@
                 update TOKENS set DATESTAMP = :1 WHERE TOKEN = :2
                 """, datetime.date.today(), token
             )
-        return token
+            return str(token)
+        else:
+            return None
 
     def deleteToken(self, token):
         self._db_execute(

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/memcachepool.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/memcachepool.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/memcachepool.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -360,6 +360,12 @@
     def add(self, *args, **kwargs):
         return self.performRequest('add', *args, **kwargs)
 
+    def incr(self, *args, **kwargs):
+        return self.performRequest('increment', *args, **kwargs)
+
+    def decr(self, *args, **kwargs):
+        return self.performRequest('decrement', *args, **kwargs)
+
     def flushAll(self, *args, **kwargs):
         return self.performRequest('flushAll', *args, **kwargs)
 

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/memcacher.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/memcacher.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/memcacher.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -99,6 +99,34 @@
             except KeyError:
                 return succeed(False)
 
+        def incr(self, key, delta=1):
+            value = self._cache.get(key, None)
+            if value is not None:
+                value, expire, identifier = value
+                try:
+                    value = int(value)
+                except ValueError:
+                    value = None
+                else:
+                    value += delta
+                    self._cache[key] = (str(value), expire, identifier,)
+            return succeed(value)
+
+        def decr(self, key, delta=1):
+            value = self._cache.get(key, None)
+            if value is not None:
+                value, expire, identifier = value
+                try:
+                    value = int(value)
+                except ValueError:
+                    value = None
+                else:
+                    value -= delta
+                    if value < 0:
+                        value = 0
+                    self._cache[key] = (str(value), expire, identifier,)
+            return succeed(value)
+
         def flushAll(self):
             self._cache = {}
             return succeed(True)
@@ -132,6 +160,12 @@
         def delete(self, key):
             return succeed(True)
 
+        def incr(self, key, delta=1):
+            return succeed(None)
+
+        def decr(self, key, delta=1):
+            return succeed(None)
+
         def flushAll(self):
             return succeed(True)
 
@@ -245,6 +279,14 @@
         self.log_debug("Deleting Cache Token for %r" % (key,))
         return self._getMemcacheProtocol().delete('%s:%s' % (self._namespace, self._normalizeKey(key)))
 
+    def incr(self, key, delta=1):
+        self.log_debug("Incrementing Cache Token for %r" % (key,))
+        return self._getMemcacheProtocol().incr('%s:%s' % (self._namespace, self._normalizeKey(key)), delta)
+
+    def decr(self, key, delta=1):
+        self.log_debug("Decrementing Cache Token for %r" % (key,))
+        return self._getMemcacheProtocol().decr('%s:%s' % (self._namespace, self._normalizeKey(key)), delta)
+
     def flushAll(self):
         self.log_debug("Flushing All Cache Tokens")
         return self._getMemcacheProtocol().flushAll()
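
The in-memory `incr`/`decr` added to the test cacher mirror memcached's counter semantics: values are stored as strings, a non-numeric or missing value yields `None`, and decrement floors at zero rather than going negative. A minimal standalone sketch of those semantics over a plain dict (not the actual `Memcacher` API):

```python
def incr(cache, key, delta=1):
    # memcached-style increment: key must exist and hold a numeric
    # string; otherwise the operation fails and returns None.
    value = cache.get(key)
    if value is None:
        return None
    try:
        value = int(value)
    except ValueError:
        return None
    value += delta
    cache[key] = str(value)
    return value

def decr(cache, key, delta=1):
    # memcached floors decrement at zero instead of going negative.
    value = incr(cache, key, -delta)
    if value is not None and value < 0:
        value = 0
        cache[key] = "0"
    return value
```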

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/method/put_common.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/method/put_common.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/method/put_common.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -962,7 +962,7 @@
                     # All auto-processed updates for an Organizer leave the tag unchanged
                     change_scheduletag = False
                 elif self.processing_organizer == False:
-                    # Auto-processed updates that are the result of an organizer "refresh' due
+                    # Auto-processed updates that are the result of an organizer "refresh" due
                     # to another Attendee's REPLY should leave the tag unchanged
                     change_scheduletag = not hasattr(self.request, "doing_attendee_refresh")
 

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/method/report_multiget_common.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/method/report_multiget_common.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/method/report_multiget_common.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -38,6 +38,7 @@
 from twistedcaldav.carddavxml import carddav_namespace
 from twistedcaldav.config import config
 from twistedcaldav.method import report_common
+from txdav.common.icommondatastore import ConcurrentModification
 from twistedcaldav.method.report_common import COLLECTION_TYPE_CALENDAR,\
     COLLECTION_TYPE_ADDRESSBOOK
 from twistedcaldav.query import addressbookqueryfilter
@@ -217,18 +218,28 @@
             # Get properties for all valid readable resources
             for resource, href in ok_resources:
                 try:
-                    yield report_common.responseForHref(request, responses, davxml.HRef.fromString(href), resource, propertiesForResource, propertyreq, isowner=isowner)
+                    yield report_common.responseForHref(
+                        request, responses, davxml.HRef.fromString(href),
+                        resource, propertiesForResource, propertyreq,
+                        isowner=isowner
+                    )
                 except ValueError:
-                    log.err("Invalid calendar resource during multiget: %s" % (href,))
-                    responses.append(davxml.StatusResponse(davxml.HRef.fromString(href), davxml.Status.fromResponseCode(responsecode.FORBIDDEN)))
-                except IOError:
+                    log.err("Invalid calendar resource during multiget: %s" %
+                            (href,))
+                    responses.append(davxml.StatusResponse(
+                        davxml.HRef.fromString(href),
+                        davxml.Status.fromResponseCode(responsecode.FORBIDDEN)))
+                except ConcurrentModification:
                     # This can happen because of a race-condition between the
                     # time we determine which resources exist and the deletion
                     # of one of these resources in another request.  In this
                     # case, return a 404 for the now missing resource rather
                     # than raise an error for the entire report.
                     log.err("Missing resource during multiget: %s" % (href,))
-                    responses.append(davxml.StatusResponse(davxml.HRef.fromString(href), davxml.Status.fromResponseCode(responsecode.NOT_FOUND)))
+                    responses.append(davxml.StatusResponse(
+                        davxml.HRef.fromString(href),
+                        davxml.Status.fromResponseCode(responsecode.NOT_FOUND)
+                    ))
 
             # Indicate error for all valid non-readable resources
             for ignore_resource, href in bad_resources:
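
The multiget change above narrows the caught exception from `IOError` to `ConcurrentModification` while keeping the same overall pattern: a per-resource failure becomes a per-href status rather than failing the entire report. A hypothetical sketch of that pattern (`fetch`, the tuple shape, and the stand-in exception are illustrative, not the server's actual types):

```python
def response_for_href(href, fetch):
    # Per-resource failures map to per-href statuses: invalid calendar
    # data -> 403 Forbidden; a resource deleted between enumeration and
    # fetch (the race the diff describes) -> 404 Not Found.
    try:
        return (href, 200, fetch(href))
    except ValueError:
        return (href, 403, None)
    except LookupError:  # stands in for ConcurrentModification
        return (href, 404, None)
```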

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/scheduling/implicit.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/scheduling/implicit.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/scheduling/implicit.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -247,6 +247,7 @@
         # Setup some parameters
         self.do_smart_merge = do_smart_merge
         self.except_attendees = ()
+        self.only_refresh_attendees = None
 
         # Determine what type of scheduling this is: Organizer triggered or Attendee triggered
         if self.state == "organizer":
@@ -264,26 +265,21 @@
             returnValue(self.return_calendar if hasattr(self, "return_calendar") else self.calendar)
 
     @inlineCallbacks
-    def refreshAllAttendeesExceptSome(self, request, resource, calendar, attendees):
+    def refreshAllAttendeesExceptSome(self, request, resource, except_attendees=(), only_attendees=None):
         """
-        
-        @param request:
-        @type request:
-        @param attendee:
-        @type attendee:
-        @param calendar:
-        @type calendar:
+        Refresh the iCalendar data for all attendees except those listed in except_attendees; when only_attendees is not None, restrict the refresh to that set.
         """
 
         self.request = request
         self.resource = resource
-        self.calendar = calendar
+        self.calendar = (yield self.resource.iCalendarForUser(self.request))
         self.state = "organizer"
         self.action = "modify"
 
         self.calendar_owner = None
         self.internal_request = True
-        self.except_attendees = attendees
+        self.except_attendees = except_attendees
+        self.only_refresh_attendees = only_attendees
         self.changed_rids = None
         self.reinvites = None
 
@@ -463,6 +459,12 @@
         outbox = (yield self.request.locateResource(outboxURL))
         yield outbox.authorize(self.request, (caldavxml.ScheduleSend(),))
 
+    def makeScheduler(self):
+        """
+        Convenience method which we can override in unit tests to make testing easier.
+        """
+        return CalDAVScheduler(self.request, self.resource)
+
     @inlineCallbacks
     def doImplicitOrganizer(self):
         
@@ -835,7 +837,7 @@
             # Send scheduling message
             
             # This is a local CALDAV scheduling operation.
-            scheduler = CalDAVScheduler(self.request, self.resource)
+            scheduler = self.makeScheduler()
     
             # Do the PUT processing
             log.info("Implicit CANCEL - organizer: '%s' to attendee: '%s', UID: '%s', RIDs: '%s'" % (self.organizer, attendee, self.uid, rids))
@@ -864,6 +866,10 @@
             if attendee in self.except_attendees:
                 continue
 
+            # Only send to specified attendees
+            if self.only_refresh_attendees is not None and attendee not in self.only_refresh_attendees:
+                continue
+
             # If SCHEDULE-FORCE-SEND only change, only send message to those Attendees
             if self.reinvites and attendee in self.reinvites:
                 continue
@@ -873,7 +879,7 @@
             # Send scheduling message
             if itipmsg is not None:
                 # This is a local CALDAV scheduling operation.
-                scheduler = CalDAVScheduler(self.request, self.resource)
+                scheduler = self.makeScheduler()
         
                 # Do the PUT processing
                 log.info("Implicit REQUEST - organizer: '%s' to attendee: '%s', UID: '%s'" % (self.organizer, attendee, self.uid,))
@@ -1123,7 +1129,7 @@
         # Send scheduling message
 
         # This is a local CALDAV scheduling operation.
-        scheduler = CalDAVScheduler(self.request, self.resource)
+        scheduler = self.makeScheduler()
 
         # Do the PUT processing
         def _gotResponse(response):
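
The `only_refresh_attendees` attribute adds a whitelist alongside the existing `except_attendees` blacklist in the attendee-refresh loop (the diff also skips reinvited attendees, omitted here). The selection logic reduces to this standalone sketch with illustrative names:

```python
def attendees_to_refresh(all_attendees, except_attendees=(), only_attendees=None):
    # An attendee is refreshed unless explicitly excluded, and, when an
    # "only" whitelist is supplied, unless it appears on that list.
    result = []
    for attendee in all_attendees:
        if attendee in except_attendees:
            continue
        if only_attendees is not None and attendee not in only_attendees:
            continue
        result.append(attendee)
    return result
```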

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/scheduling/processing.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/scheduling/processing.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/scheduling/processing.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -35,6 +35,7 @@
 from twistedcaldav.scheduling.itip import iTipProcessing, iTIPRequestStatus
 from twistedcaldav.scheduling.utils import getCalendarObjectForPrincipals
 from twistedcaldav.memcachelock import MemcacheLock, MemcacheLockTimeoutError
+from twistedcaldav.memcacher import Memcacher
 from pycalendar.duration import PyCalendarDuration
 from pycalendar.datetime import PyCalendarDateTime
 from pycalendar.timezone import PyCalendarTimezone
@@ -173,7 +174,7 @@
  
             # Update the organizer's copy of the event
             log.debug("ImplicitProcessing - originator '%s' to recipient '%s' processing METHOD:REPLY, UID: '%s' - updating event" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
-            recipient_calendar_resource = (yield self.writeCalendarResource(self.recipient_calendar_collection_uri, self.recipient_calendar_collection, self.recipient_calendar_name, self.recipient_calendar))
+            self.organizer_calendar_resource = (yield self.writeCalendarResource(self.recipient_calendar_collection_uri, self.recipient_calendar_collection, self.recipient_calendar_name, self.recipient_calendar))
             
             # Build the schedule-changes XML element
             attendeeReplying, rids = processed
@@ -204,7 +205,7 @@
 
             # Only update other attendees when the partstat was changed by the reply
             if partstatChanged:
-                yield self.queueAttendeeUpdate(recipient_calendar_resource, (attendeeReplying,))
+                yield self.queueAttendeeUpdate((attendeeReplying,))
 
             result = (True, False, True, changes,)
 
@@ -215,9 +216,12 @@
         returnValue(result)
 
     @inlineCallbacks
-    def queueAttendeeUpdate(self, resource, attendees):
+    def queueAttendeeUpdate(self, exclude_attendees):
         """
         Queue up an update to attendees and use a memcache lock to ensure we don't update too frequently.
+        
+        @param exclude_attendees: list of attendees who should not be refreshed (e.g., the one that triggered the refresh)
+        @type exclude_attendees: C{list}
         """
         
         # When doing auto-processing of replies, only refresh attendees when the last auto-accept is done.
@@ -227,72 +231,159 @@
             self.request.auto_reply_suppressed = True
             returnValue(None)
         if hasattr(self.request, "auto_reply_suppressed"):
-            attendees = ()
+            exclude_attendees = ()
 
-        # Use a memcachelock to ensure others don't refresh whilst we have an enqueued call
         self.uid = self.recipient_calendar.resourceUID()
-        if config.Scheduling.Options.AttendeeRefreshInterval:
-            attendees = ()
-            lock = MemcacheLock("RefreshUIDLock", self.uid, timeout=0.0, expire_time=config.Scheduling.Options.AttendeeRefreshInterval)
+
+        # Check for batched refreshes
+        if config.Scheduling.Options.AttendeeRefreshBatch:
             
-            # Try lock, but fail immediately if already taken
+            # Need to lock whilst manipulating the batch list
+            lock = MemcacheLock("BatchRefreshUIDLock", self.uid, timeout=60.0, expire_time=60.0)
             try:
                 yield lock.acquire()
             except MemcacheLockTimeoutError:
+                # If we could not lock then just fail the refresh - not sure what else to do
                 returnValue(None)
+            
+            try:
+                # Get all attendees to refresh
+                allAttendees = sorted(list(self.recipient_calendar.getAllUniqueAttendees()))
+    
+                # Always need to refresh every attendee
+                exclude_attendees = ()
+                
+                # See if there is already a pending refresh and merge current attendees into that list,
+                # otherwise just mark all attendees as pending
+                cache = Memcacher("BatchRefreshAttendees", pickle=True)
+                pendingAttendees = yield cache.get(self.uid)
+                firstTime = False
+                if pendingAttendees:
+                    for attendee in allAttendees:
+                        if attendee not in pendingAttendees:
+                            pendingAttendees.append(attendee)
+                else:
+                    firstTime = True
+                    pendingAttendees = allAttendees
+                yield cache.set(self.uid, pendingAttendees)
+    
+                # Now start the first batch off
+                if firstTime:
+                    reactor.callLater(config.Scheduling.Options.AttendeeRefreshBatchDelaySeconds, self._doBatchRefresh)
+            finally:
+                yield lock.clean()
+        
         else:
-            lock = None
+            yield self._doRefresh(self.organizer_calendar_resource, exclude_attendees)
 
-        @inlineCallbacks
-        def _doRefresh(organizer_resource):
-            log.debug("ImplicitProcessing - refreshing UID: '%s'" % (self.uid,))
-            from twistedcaldav.scheduling.implicit import ImplicitScheduler
-            scheduler = ImplicitScheduler()
-            yield scheduler.refreshAllAttendeesExceptSome(self.request, organizer_resource, self.recipient_calendar, attendees)
+    @inlineCallbacks
+    def _doRefresh(self, organizer_resource, exclude_attendees=(), only_attendees=None):
+        """
+        Do a refresh of attendees.
 
-        @inlineCallbacks
-        def _doDelayedRefresh():
+        @param organizer_resource: the resource for the organizer's calendar data
+        @type organizer_resource: L{DAVResource}
+        @param exclude_attendees: list of attendees to not refresh
+        @type exclude_attendees: C{tuple}
+        @param only_attendees: list of attendees to refresh (C{None} - refresh all) 
+        @type only_attendees: C{tuple}
+        """
+        log.debug("ImplicitProcessing - refreshing UID: '%s', Attendees: %s" % (self.uid, ", ".join(only_attendees) if only_attendees else "all"))
+        from twistedcaldav.scheduling.implicit import ImplicitScheduler
+        scheduler = ImplicitScheduler()
+        yield scheduler.refreshAllAttendeesExceptSome(
+            self.request,
+            organizer_resource,
+            exclude_attendees,
+            only_attendees=only_attendees,
+        )
+        
+    @inlineCallbacks
+    def _doDelayedRefresh(self, attendeesToProcess):
+        """
+        Do an attendee refresh that has been delayed until after processing of the request that called it. That
+        requires that we create a new transaction to work with.
 
-            # We need to get the UID lock for implicit processing whilst we send the auto-reply
-            # as the Organizer processing will attempt to write out data to other attendees to
-            # refresh them. To prevent a race we need a lock.
-            uidlock = MemcacheLock("ImplicitUIDLock", self.uid, timeout=60.0, expire_time=5*60)
-    
+        @param attendeesToProcess: list of attendees to refresh.
+        @type attendeesToProcess: C{list}
+        """
+
+        # We need to get the UID lock for implicit processing whilst we send the auto-reply
+        # as the Organizer processing will attempt to write out data to other attendees to
+        # refresh them. To prevent a race we need a lock.
+        uidlock = MemcacheLock("ImplicitUIDLock", self.uid, timeout=60.0, expire_time=5*60)
+
+        try:
+            yield uidlock.acquire()
+        except MemcacheLockTimeoutError:
+            # Just try again to get the lock
+            reactor.callLater(2.0, self._doDelayedRefresh, attendeesToProcess)
+        else:
+
+            # inNewTransaction wipes out the remembered resource <-> URL mappings in the
+            # request object but we need to be able to map the actual reply resource to its
+            # URL when doing auto-processing, so we have to sneak that mapping back in here.
+            txn = yield self.organizer_calendar_resource.inNewTransaction(self.request)
+            organizer_resource = (yield self.request.locateResource(self.organizer_calendar_resource._url))
+
             try:
-                yield uidlock.acquire()
-            except MemcacheLockTimeoutError:
-                # Just try again to get the lock
-                reactor.callLater(2.0, _doDelayedRefresh)
+                if organizer_resource.exists():
+                    yield self._doRefresh(organizer_resource, only_attendees=attendeesToProcess)
+                else:
+                    log.debug("ImplicitProcessing - skipping refresh of missing UID: '%s'" % (self.uid,))
+            except Exception, e:
+                log.debug("ImplicitProcessing - refresh exception UID: '%s', %s" % (self.uid, str(e)))
+                yield txn.abort()
             else:
+                yield txn.commit()
+        finally:
+            yield uidlock.clean()
+
+    @inlineCallbacks
+    def _doBatchRefresh(self):
+        """
+        Do refresh of attendees in batches until the batch list is empty.
+        """
+
+        # Need to lock whilst manipulating the batch list
+        log.debug("ImplicitProcessing - batch refresh for UID: '%s'" % (self.uid,))
+        lock = MemcacheLock("BatchRefreshUIDLock", self.uid, timeout=60.0, expire_time=60.0)
+        try:
+            yield lock.acquire()
+        except MemcacheLockTimeoutError:
+            # If we could not lock then just fail the refresh - not sure what else to do
+            returnValue(None)
+
+        try:
+            # Get the batch list
+            cache = Memcacher("BatchRefreshAttendees", pickle=True)
+            pendingAttendees = yield cache.get(self.uid)
+            if pendingAttendees:
                 
-                # Release lock before sending refresh
+                # Get the next batch of attendees to process and update the cache value or remove it if
+                # no more processing is needed
+                attendeesToProcess = pendingAttendees[:config.Scheduling.Options.AttendeeRefreshBatch]
+                pendingAttendees = pendingAttendees[config.Scheduling.Options.AttendeeRefreshBatch:]
+                if pendingAttendees:
+                    yield cache.set(self.uid, pendingAttendees)
+                else:
+                    yield cache.delete(self.uid)
+                    
+                # Make sure we release this here to avoid potential deadlock when grabbing the ImplicitUIDLock in the next call
                 yield lock.release()
-    
-                # inNewTransaction wipes out the remembered resource<-> URL mappings in the
-                # request object but we need to be able to map the actual reply resource to its
-                # URL when doing auto-processing, so we have to sneak that mapping back in here.
-                txn = yield resource.inNewTransaction(self.request)
-                organizer_resource = (yield self.request.locateResource(resource._url))
-    
-                try:
-                    if organizer_resource.exists():
-                        yield _doRefresh(organizer_resource)
-                    else:
-                        log.debug("ImplicitProcessing - skipping refresh of missing UID: '%s'" % (self.uid,))
-                except Exception, e:
-                    log.debug("ImplicitProcessing - refresh exception UID: '%s', %s" % (self.uid, str(e)))
-                    yield txn.abort()
-                else:
-                    yield txn.commit()
-            finally:
-                # This correctly gets called only after commit or abort is done
-                yield uidlock.clean()
-
-        if lock:
-            reactor.callLater(config.Scheduling.Options.AttendeeRefreshInterval, _doDelayedRefresh)
-        else:
-            yield _doRefresh(resource)
-
+                
+                # Now do the batch refresh
+                yield self._doDelayedRefresh(attendeesToProcess)
+                
+                # Queue the next refresh if needed
+                if pendingAttendees:
+                    reactor.callLater(config.Scheduling.Options.AttendeeRefreshBatchIntervalSeconds, self._doBatchRefresh)
+            else:
+                yield cache.delete(self.uid)
+                yield lock.release()
+        finally:
+            yield lock.clean()
+            
     @inlineCallbacks
     def doImplicitAttendee(self):
 

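[Editor's note: the batching logic added above splits attendee refreshes into a memcache-backed pending list that is drained a batch at a time via `reactor.callLater`. The sketch below models just the bookkeeping — a plain dict stands in for `Memcacher`, and the names `queue_refresh`/`next_batch` are illustrative, not part of the patch.]

```python
BATCH_SIZE = 5  # mirrors the Scheduling.Options.AttendeeRefreshBatch default


def queue_refresh(cache, uid, attendees):
    """Merge attendees into the pending list for this UID.

    Returns True when this call created the entry, i.e. when the caller
    should schedule the first batch (mirrors the firstTime flag above).
    """
    pending = cache.get(uid)
    first_time = pending is None
    if first_time:
        pending = list(attendees)
    else:
        # Merge, preserving order and avoiding duplicates
        for attendee in attendees:
            if attendee not in pending:
                pending.append(attendee)
    cache[uid] = pending
    return first_time


def next_batch(cache, uid, batch_size=BATCH_SIZE):
    """Pop the next batch off the pending list.

    Deletes the cache entry once nothing remains, as _doBatchRefresh does.
    An empty return means no further callLater needs to be scheduled.
    """
    pending = cache.get(uid)
    if not pending:
        cache.pop(uid, None)
        return []
    batch, rest = pending[:batch_size], pending[batch_size:]
    if rest:
        cache[uid] = rest
    else:
        del cache[uid]
    return batch
```

In the real code the merge and drain steps each run under a `MemcacheLock("BatchRefreshUIDLock", uid)`, and the lock is released before `_doDelayedRefresh` to avoid deadlocking against `ImplicitUIDLock`.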
Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/scheduling/test/test_implicit.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/scheduling/test/test_implicit.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/scheduling/test/test_implicit.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -14,12 +14,35 @@
 # limitations under the License.
 ##
 
+from pycalendar.datetime import PyCalendarDateTime
+from pycalendar.timezone import PyCalendarTimezone
+from twext.web2 import responsecode
+from twisted.internet.defer import succeed, inlineCallbacks
 from twistedcaldav.ical import Component
+from twistedcaldav.scheduling.implicit import ImplicitScheduler
+from twistedcaldav.scheduling.scheduler import ScheduleResponseQueue
 import twistedcaldav.test.util
-from twistedcaldav.scheduling.implicit import ImplicitScheduler
-from pycalendar.datetime import PyCalendarDateTime
-from pycalendar.timezone import PyCalendarTimezone
 
+class FakeScheduler(object):
+    """
+    A fake CalDAVScheduler that does nothing except track who messages were sent to.
+    """
+    
+    def __init__(self, recipients):
+        self.recipients = recipients
+
+    def doSchedulingViaPUT(self, originator, recipients, calendar, internal_request=False):
+        self.recipients.extend(recipients)
+        return succeed(ScheduleResponseQueue("FAKE", responsecode.OK))
+
+class FakePrincipal(object):
+    
+    def __init__(self, cuaddr):
+        self.cuaddr = cuaddr
+        
+    def calendarUserAddresses(self):
+        return (self.cuaddr,)
+
 class Implicit (twistedcaldav.test.util.TestCase):
     """
     iCalendar support tests
@@ -28,134 +51,134 @@
     def test_removed_attendees(self):
         
         data = (
-#            (
-#                "#1.1 Simple component, no change",
-#                """BEGIN:VCALENDAR
-#VERSION:2.0
-#PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-#BEGIN:VEVENT
-#UID:12345-67890
-#DTSTART:20080601T120000Z
-#DTEND:20080601T130000Z
-#ORGANIZER;CN="User 01":mailto:user1 at example.com
-#ATTENDEE:mailto:user1 at example.com
-#ATTENDEE:mailto:user2 at example.com
-#END:VEVENT
-#END:VCALENDAR
-#""",
-#                """BEGIN:VCALENDAR
-#VERSION:2.0
-#PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-#BEGIN:VEVENT
-#UID:12345-67890
-#DTSTART:20080601T120000Z
-#DTEND:20080601T130000Z
-#ORGANIZER;CN="User 01":mailto:user1 at example.com
-#ATTENDEE:mailto:user1 at example.com
-#ATTENDEE:mailto:user2 at example.com
-#END:VEVENT
-#END:VCALENDAR
-#""",
-#                (),
-#            ),
-#            (
-#                "#1.2 Simple component, one removal",
-#                """BEGIN:VCALENDAR
-#VERSION:2.0
-#PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-#BEGIN:VEVENT
-#UID:12345-67890
-#DTSTART:20080601T120000Z
-#DTEND:20080601T130000Z
-#ORGANIZER;CN="User 01":mailto:user1 at example.com
-#ATTENDEE:mailto:user1 at example.com
-#ATTENDEE:mailto:user2 at example.com
-#END:VEVENT
-#END:VCALENDAR
-#""",
-#                """BEGIN:VCALENDAR
-#VERSION:2.0
-#PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-#BEGIN:VEVENT
-#UID:12345-67890
-#DTSTART:20080601T120000Z
-#DTEND:20080601T130000Z
-#ORGANIZER;CN="User 01":mailto:user1 at example.com
-#ATTENDEE:mailto:user1 at example.com
-#END:VEVENT
-#END:VCALENDAR
-#""",
-#                (("mailto:user2 at example.com", None),),
-#            ),
-#            (
-#                "#1.3 Simple component, two removals",
-#                """BEGIN:VCALENDAR
-#VERSION:2.0
-#PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-#BEGIN:VEVENT
-#UID:12345-67890
-#DTSTART:20080601T120000Z
-#DTEND:20080601T130000Z
-#ORGANIZER;CN="User 01":mailto:user1 at example.com
-#ATTENDEE:mailto:user1 at example.com
-#ATTENDEE:mailto:user2 at example.com
-#ATTENDEE:mailto:user3 at example.com
-#END:VEVENT
-#END:VCALENDAR
-#""",
-#                """BEGIN:VCALENDAR
-#VERSION:2.0
-#PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-#BEGIN:VEVENT
-#UID:12345-67890
-#DTSTART:20080601T120000Z
-#DTEND:20080601T130000Z
-#ORGANIZER;CN="User 01":mailto:user1 at example.com
-#ATTENDEE:mailto:user1 at example.com
-#END:VEVENT
-#END:VCALENDAR
-#""",
-#                (
-#                    ("mailto:user2 at example.com", None),
-#                    ("mailto:user3 at example.com", None),
-#                ),
-#            ),
-#            (
-#                "#2.1 Simple recurring component, two removals",
-#                """BEGIN:VCALENDAR
-#VERSION:2.0
-#PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-#BEGIN:VEVENT
-#UID:12345-67890
-#DTSTART:20080601T120000Z
-#DTEND:20080601T130000Z
-#ORGANIZER;CN="User 01":mailto:user1 at example.com
-#ATTENDEE:mailto:user1 at example.com
-#ATTENDEE:mailto:user2 at example.com
-#ATTENDEE:mailto:user3 at example.com
-#RRULE:FREQ=MONTHLY
-#END:VEVENT
-#END:VCALENDAR
-#""",
-#                """BEGIN:VCALENDAR
-#VERSION:2.0
-#PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
-#BEGIN:VEVENT
-#UID:12345-67890
-#DTSTART:20080601T120000Z
-#DTEND:20080601T130000Z
-#ORGANIZER;CN="User 01":mailto:user1 at example.com
-#ATTENDEE:mailto:user1 at example.com
-#RRULE:FREQ=MONTHLY
-#END:VEVENT
-#END:VCALENDAR
-#""",
-#                (
-#                    ("mailto:user2 at example.com", None),
-#                    ("mailto:user3 at example.com", None),
-#                ),
-#            ),
             (
+                "#1.1 Simple component, no change",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 01":mailto:user1 at example.com
+ATTENDEE:mailto:user1 at example.com
+ATTENDEE:mailto:user2 at example.com
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 01":mailto:user1 at example.com
+ATTENDEE:mailto:user1 at example.com
+ATTENDEE:mailto:user2 at example.com
+END:VEVENT
+END:VCALENDAR
+""",
+                (),
+            ),
+            (
+                "#1.2 Simple component, one removal",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 01":mailto:user1 at example.com
+ATTENDEE:mailto:user1 at example.com
+ATTENDEE:mailto:user2 at example.com
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 01":mailto:user1 at example.com
+ATTENDEE:mailto:user1 at example.com
+END:VEVENT
+END:VCALENDAR
+""",
+                (("mailto:user2 at example.com", None),),
+            ),
+            (
+                "#1.3 Simple component, two removals",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 01":mailto:user1 at example.com
+ATTENDEE:mailto:user1 at example.com
+ATTENDEE:mailto:user2 at example.com
+ATTENDEE:mailto:user3 at example.com
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 01":mailto:user1 at example.com
+ATTENDEE:mailto:user1 at example.com
+END:VEVENT
+END:VCALENDAR
+""",
+                (
+                    ("mailto:user2 at example.com", None),
+                    ("mailto:user3 at example.com", None),
+                ),
+            ),
+            (
+                "#2.1 Simple recurring component, two removals",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 01":mailto:user1 at example.com
+ATTENDEE:mailto:user1 at example.com
+ATTENDEE:mailto:user2 at example.com
+ATTENDEE:mailto:user3 at example.com
+RRULE:FREQ=MONTHLY
+END:VEVENT
+END:VCALENDAR
+""",
+                """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 01":mailto:user1 at example.com
+ATTENDEE:mailto:user1 at example.com
+RRULE:FREQ=MONTHLY
+END:VEVENT
+END:VCALENDAR
+""",
+                (
+                    ("mailto:user2 at example.com", None),
+                    ("mailto:user3 at example.com", None),
+                ),
+            ),
+            (
                 "#2.2 Simple recurring component, add exdate",
                 """BEGIN:VCALENDAR
 VERSION:2.0
@@ -755,10 +778,62 @@
             scheduler.calendar = Component.fromString(calendar2)
             scheduler.extractCalendarData()
             scheduler.findRemovedAttendees()
-#            if not description.startswith("#4.3"):
-#                continue
-#            print description
-#            print scheduler.cancelledAttendees
-#            print set(result)
             self.assertEqual(scheduler.cancelledAttendees, set(result), msg=description)
 
+
+    @inlineCallbacks   
+    def test_process_request_excludes_includes(self):
+        """
+        Test that processRequests correctly excludes or includes the specified attendees.
+        """
+
+        data = (
+            ((), None, 3, ("mailto:user2 at example.com", "mailto:user3 at example.com", "mailto:user4 at example.com",),),
+            (("mailto:user2 at example.com",), None, 2, ("mailto:user3 at example.com", "mailto:user4 at example.com",),),
+            ((), ("mailto:user2 at example.com", "mailto:user4 at example.com",) , 2, ("mailto:user2 at example.com", "mailto:user4 at example.com",),),
+        )
+
+        calendar = """BEGIN:VCALENDAR
+VERSION:2.0
+PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
+BEGIN:VEVENT
+UID:12345-67890
+DTSTART:20080601T120000Z
+DTEND:20080601T130000Z
+ORGANIZER;CN="User 01":mailto:user1 at example.com
+ATTENDEE:mailto:user1 at example.com
+ATTENDEE:mailto:user2 at example.com
+ATTENDEE:mailto:user3 at example.com
+ATTENDEE:mailto:user4 at example.com
+END:VEVENT
+END:VCALENDAR
+"""
+
+        for excludes, includes, result_count, result_set in data:
+            scheduler = ImplicitScheduler()
+            scheduler.resource = None
+            scheduler.request = None
+            scheduler.calendar = Component.fromString(calendar)
+            scheduler.state = "organizer"
+            scheduler.action = "modify"
+            scheduler.calendar_owner = None
+            scheduler.internal_request = True
+            scheduler.except_attendees = excludes
+            scheduler.only_refresh_attendees = includes
+            scheduler.changed_rids = None
+            scheduler.reinvites = None
+    
+            # Get some useful information from the calendar
+            yield scheduler.extractCalendarData()
+            scheduler.organizerPrincipal = FakePrincipal(scheduler.organizer)
+    
+            recipients = []
+            
+            def makeFakeScheduler():
+                return FakeScheduler(recipients)
+            scheduler.makeScheduler = makeFakeScheduler
+            
+            count = (yield scheduler.processRequests())
+            self.assertEqual(count, result_count)
+            self.assertEqual(len(recipients), result_count)
+            self.assertEqual(set(recipients), set(result_set))

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/stdconfig.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/stdconfig.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/stdconfig.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -608,12 +608,14 @@
         },
 
         "Options" : {
-            "AllowGroupAsOrganizer"      : False, # Allow groups to be Organizers
-            "AllowLocationAsOrganizer"   : False, # Allow locations to be Organizers
-            "AllowResourceAsOrganizer"   : False, # Allow resources to be Organizers
-            "AllowUserAutoAccept"        : False, # Allow auto-accept for users
-            "LimitFreeBusyAttendees"     : 30,    # Maximum number of attendees to request freebusy for
-            "AttendeeRefreshInterval"    : 0,     # Time after an iTIP REPLY at which attendee refresh will trigger 
+            "AllowGroupAsOrganizer"               : False, # Allow groups to be Organizers
+            "AllowLocationAsOrganizer"            : False, # Allow locations to be Organizers
+            "AllowResourceAsOrganizer"            : False, # Allow resources to be Organizers
+            "AllowUserAutoAccept"                 : False, # Allow auto-accept for users
+            "LimitFreeBusyAttendees"              : 30,    # Maximum number of attendees to request freebusy for
+            "AttendeeRefreshBatch"                :  5,    # Number of attendees to do batched refreshes: 0 - no batching
+            "AttendeeRefreshBatchDelaySeconds"    :  5,    # Time after an iTIP REPLY for first batched attendee refresh
+            "AttendeeRefreshBatchIntervalSeconds" :  5,    # Time between attendee batch refreshes 
         }
     },
 
@@ -648,7 +650,7 @@
                 "ProviderPort" : 2195,
                 "FeedbackHost" : "feedback.push.apple.com",
                 "FeedbackPort" : 2196,
-                "FeedbackUpdateSeconds" : 300, # 5 minutes
+                "FeedbackUpdateSeconds" : 28800, # 8 hours
                 "Environment" : "PRODUCTION",
                 "CalDAV" : {
                     "CertificatePath" : "",

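[Editor's note: the three new options replace the single `AttendeeRefreshInterval`. Under the defaults, a refresh of N attendees fires its first batch `AttendeeRefreshBatchDelaySeconds` after the reply, then one batch every `AttendeeRefreshBatchIntervalSeconds`. A rough model of the resulting schedule, ignoring reactor and lock latency (function name is illustrative):]

```python
def batch_schedule(n_attendees, batch=5, delay=5, interval=5):
    """Return (seconds_after_reply, batch_size) pairs for refreshing
    n_attendees with the given batch size, initial delay, and interval."""
    schedule = []
    t = delay
    remaining = n_attendees
    while remaining > 0:
        schedule.append((t, min(batch, remaining)))
        remaining -= batch
        t += interval
    return schedule
```

For example, 12 attendees with the defaults are refreshed in batches of 5, 5, and 2 at t=5s, 10s, and 15s. Setting `AttendeeRefreshBatch` to 0 disables batching entirely and falls back to the immediate `_doRefresh` path.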
Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/test/test_mail.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/test/test_mail.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/test/test_mail.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -159,6 +159,7 @@
         self.handler.lowercase()
         retrieved = self.handler.db.getToken(organizer.lower(),
             attendee.lower(), icaluid)
+        self.assertIsInstance(retrieved, str)
         self.assertEquals(retrieved, token)
 
         # Insert a token with (new-format) urn:uuid:

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/upgrade.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/upgrade.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/twistedcaldav/upgrade.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -90,7 +90,7 @@
 #
 
 @inlineCallbacks
-def upgrade_to_1(config):
+def upgrade_to_1(config, directory):
 
     errorOccurred = False
 
@@ -363,7 +363,6 @@
 
 
 
-    directory = getDirectory()
     cuaCache = {}
 
     docRoot = config.DocumentRoot
@@ -559,7 +558,7 @@
 
 
 @inlineCallbacks
-def upgrade_to_2(config):
+def upgrade_to_2(config, directory):
     
     errorOccurred = False
 
@@ -575,45 +574,6 @@
         if os.path.exists(oldDbPath) and not os.path.exists(newDbPath):
             os.rename(oldDbPath, newDbPath)
 
-    def migrateFromOD():
-        #
-        # Migrates locations and resources from OD
-        #
-        triggerFile = "trigger_resource_migration"
-        triggerPath = os.path.join(config.ServerRoot, triggerFile)
-        if os.path.exists(triggerPath):
-            os.remove(triggerPath)
-    
-            log.info("Migrating locations and resources")
-    
-            directory = getDirectory()
-            userService = directory.serviceForRecordType("users")
-            resourceService = directory.serviceForRecordType("resources")
-            if (
-                not isinstance(userService, OpenDirectoryService) or
-                not isinstance(resourceService, XMLDirectoryService)
-            ):
-                # Configuration requires no migration
-                return succeed(None)
-    
-            # Fetch the autoSchedule assignments from resourceinfo.sqlite and pass
-            # those to migrateResources
-            autoSchedules = {}
-            dbPath = os.path.join(config.DataRoot, ResourceInfoDatabase.dbFilename)
-            if os.path.exists(dbPath):
-                resourceInfoDatabase = ResourceInfoDatabase(config.DataRoot)
-                results = resourceInfoDatabase._db_execute(
-                    "select GUID, AUTOSCHEDULE from RESOURCEINFO"
-                )
-                for guid, autoSchedule in results:
-                    autoSchedules[guid] = autoSchedule
-    
-            # Create internal copies of resources and locations based on what is
-            # found in OD, overriding the autoSchedule default with existing
-            # assignments from resourceinfo.sqlite
-            return migrateResources(userService, resourceService,
-                autoSchedules=autoSchedules)
-
     def flattenHome(calHome):
 
         log.debug("Flattening calendar home: %s" % (calHome,))
@@ -658,9 +618,9 @@
 
         except Exception, e:
             log.error("Failed to upgrade calendar home %s: %s" % (calHome, e))
-            return False
+            return succeed(False)
         
-        return True
+        return succeed(True)
 
     def flattenHomes():
         """
@@ -690,20 +650,20 @@
                                     continue
                                 if not flattenHome(calHome):
                                     errorOccurred = True
-        
+
         return errorOccurred
-        
+
     renameProxyDB()
+
+    # Move auto-schedule from resourceinfo sqlite to augments:
+    yield migrateAutoSchedule(config, directory)
+
     errorOccurred = flattenHomes()
-    try:
-        yield migrateFromOD()
-    except:
-        errorOccurred = True
-        
+
     if errorOccurred:
         raise UpgradeError("Data upgrade failed, see error.log for details")
-    
 
+
 # The on-disk version number (which defaults to zero if .calendarserver_version
 # doesn't exist), is compared with each of the numbers in the upgradeMethods
 # array.  If it is less than the number, the associated method is called.
@@ -716,6 +676,15 @@
 @inlineCallbacks
 def upgradeData(config):
 
+    directory = getDirectory()
+
+    try:
+        # Migrate locations/resources now because upgrade_to_1 depends on them
+        # being in resources.xml
+        (yield migrateFromOD(config, directory))
+    except Exception, e:
+        raise UpgradeError("Unable to migrate locations and resources from OD: %s" % (e,))
+
     docRoot = config.DocumentRoot
 
     versionFilePath = os.path.join(docRoot, ".calendarserver_version")
@@ -737,7 +706,7 @@
     for version, method in upgradeMethods:
         if onDiskVersion < version:
             log.warn("Upgrading to version %d" % (version,))
-            (yield method(config))
+            (yield method(config, directory))
             log.warn("Upgraded to version %d" % (version,))
             with open(versionFilePath, "w") as verFile:
                 verFile.write(str(version))
@@ -873,6 +842,54 @@
         return data, False
 
 
+# Deferred
+def migrateFromOD(config, directory):
+    #
+    # Migrates locations and resources from OD
+    #
+    triggerFile = "trigger_resource_migration"
+    triggerPath = os.path.join(config.ServerRoot, triggerFile)
+    if os.path.exists(triggerPath):
+        os.remove(triggerPath)
+
+        log.warn("Migrating locations and resources")
+
+        userService = directory.serviceForRecordType("users")
+        resourceService = directory.serviceForRecordType("resources")
+        if (
+            not isinstance(userService, OpenDirectoryService) or
+            not isinstance(resourceService, XMLDirectoryService)
+        ):
+            # Configuration requires no migration
+            return succeed(None)
+
+        # Create internal copies of resources and locations based on what is
+        # found in OD
+        return migrateResources(userService, resourceService)
+
+
+ at inlineCallbacks
+def migrateAutoSchedule(config, directory):
+    # Fetch the autoSchedule assignments from resourceinfo.sqlite and store
+    # the values in augments
+    augmentService = directory.augmentService
+    augmentRecords = []
+    dbPath = os.path.join(config.DataRoot, ResourceInfoDatabase.dbFilename)
+    if os.path.exists(dbPath):
+        resourceInfoDatabase = ResourceInfoDatabase(config.DataRoot)
+        results = resourceInfoDatabase._db_execute(
+            "select GUID, AUTOSCHEDULE from RESOURCEINFO"
+        )
+        for guid, autoSchedule in results:
+            record = directory.recordWithGUID(guid)
+            if record is not None:
+                augmentRecord = (yield augmentService.getAugmentRecord(guid, record.recordType))
+                augmentRecord.autoSchedule = autoSchedule
+                augmentRecords.append(augmentRecord)
+
+    yield augmentService.addAugmentRecords(augmentRecords)
+
+
 class UpgradeFileSystemFormatService(Service, object):
     """
     Upgrade filesystem from previous versions.
@@ -894,8 +911,17 @@
 
         @return: a Deferred which fires when the upgrade is complete.
         """
+
+        # Don't try to use memcached during upgrade; it's not necessarily
+        # running yet.
+        memcacheEnabled = self.config.Memcached.Pools.Default.ClientEnabled
+        self.config.Memcached.Pools.Default.ClientEnabled = False
+
         yield upgradeData(self.config)
 
+        # Restore memcached client setting
+        self.config.Memcached.Pools.Default.ClientEnabled = memcacheEnabled
+
         # see http://twistedmatrix.com/trac/ticket/4649
         reactor.callLater(0, self.wrappedService.setServiceParent, self.parent)
 
@@ -963,7 +989,9 @@
                 os.chown(dbPath, uid, gid)
 
         # Process old inbox items
+        self.store.setMigrating(True)
         yield self.processInboxItems()
+        self.store.setMigrating(False)
 
 
     @inlineCallbacks
@@ -1065,7 +1093,6 @@
         log.debug("Processing inbox item %s" % (inboxItem,))
 
         txn = request._newStoreTransaction
-        txn._notifierFactory = None # Do not send push notifications
 
         ownerPrincipal = principal
         cua = "urn:uuid:%s" % (uuid,)
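For readers skimming the diff: the upgrade driver near the top of this file (the `for version, method in upgradeMethods` loop) runs each registered upgrade whose version exceeds the on-disk version, persisting the new version after every successful step. A minimal synchronous sketch of that pattern follows; the real methods are Deferred-based and now take `(config, directory)`, and the method names below are illustrative stand-ins, not the actual upgrade functions:

```python
import os

# Hypothetical stand-ins for real upgrade methods such as migrateFromOD
# or migrateAutoSchedule; here they just record that they ran.
def upgrade_to_1(state):
    state.append(1)

def upgrade_to_2(state):
    state.append(2)

UPGRADE_METHODS = [(1, upgrade_to_1), (2, upgrade_to_2)]

def run_upgrades(version_file_path, state):
    """Run every upgrade whose version exceeds the on-disk version,
    writing the new version to disk after each successful step so a
    crash mid-upgrade resumes where it left off."""
    try:
        with open(version_file_path) as f:
            on_disk_version = int(f.read().strip())
    except (IOError, ValueError):
        # Missing or unreadable version file: treat as version 0.
        on_disk_version = 0
    for version, method in UPGRADE_METHODS:
        if on_disk_version < version:
            method(state)
            with open(version_file_path, "w") as f:
                f.write(str(version))
    return state
```

Because the version file is rewritten after each step, re-running the driver is idempotent: already-applied upgrades are skipped.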

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/file.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/file.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/file.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -63,8 +63,8 @@
     CommonObjectResource, CommonStubResource)
 from txdav.caldav.icalendarstore import QuotaExceeded
 
-from txdav.common.icommondatastore import (NoSuchObjectResourceError,
-    InternalDataStoreError)
+from txdav.common.icommondatastore import ConcurrentModification
+from txdav.common.icommondatastore import InternalDataStoreError
 from txdav.base.datastore.file import writeOperation, hidden, FileMetaDataMixin
 from txdav.base.propertystore.base import PropertyName
 
@@ -456,7 +456,7 @@
             fh = self._path.open()
         except IOError, e:
             if e[0] == ENOENT:
-                raise NoSuchObjectResourceError(self)
+                raise ConcurrentModification()
             else:
                 raise
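The change above swaps a generic NoSuchObjectResourceError for ConcurrentModification when the backing file has vanished: the object was found when it was looked up, so a missing file at read time means another transaction deleted it in between. A simplified sketch of that error-translation pattern, with a stand-in exception class for the real one in txdav.common.icommondatastore:

```python
from errno import ENOENT

class ConcurrentModification(Exception):
    """Stand-in for txdav.common.icommondatastore.ConcurrentModification."""

def read_resource(path):
    """Read a resource's backing file. If the file disappeared between
    lookup and read, report that as a concurrent modification rather
    than a generic missing-resource error; re-raise anything else."""
    try:
        with open(path) as fh:
            return fh.read()
    except IOError as e:
        if e.errno == ENOENT:
            raise ConcurrentModification()
        raise
```

(The diff itself uses the Python 2 `except IOError, e` / `e[0]` spelling; the sketch uses the modern equivalent.)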
 


Property changes on: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/index_file.py
___________________________________________________________________
Modified: svn:mergeinfo
   - /CalendarServer/branches/config-separation/txdav/caldav/datastore/index_file.py:4379-4443
/CalendarServer/branches/egg-info-351/txdav/caldav/datastore/index_file.py:4589-4625
/CalendarServer/branches/generic-sqlstore/txdav/caldav/datastore/index_file.py:6167-6191
/CalendarServer/branches/new-store-no-caldavfile-2/txdav/caldav/datastore/index_file.py:5936-5981
/CalendarServer/branches/new-store-no-caldavfile/txdav/caldav/datastore/index_file.py:5911-5935
/CalendarServer/branches/new-store/txdav/caldav/datastore/index_file.py:5594-5934
/CalendarServer/branches/users/cdaboo/batchupload-6699/txdav/caldav/datastore/index_file.py:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692/txdav/caldav/datastore/index_file.py:5693-5702
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627/txdav/caldav/datastore/index_file.py:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace/txdav/caldav/datastore/index_file.py:8137-8141
/CalendarServer/branches/users/cdaboo/more-sharing-5591/txdav/caldav/datastore/index_file.py:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464/txdav/caldav/datastore/index_file.py:4465-4957
/CalendarServer/branches/users/cdaboo/pods/txdav/caldav/datastore/index_file.py:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar/txdav/caldav/datastore/index_file.py:7085-7206
/CalendarServer/branches/users/cdaboo/pycard/txdav/caldav/datastore/index_file.py:7227-7237
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070/txdav/caldav/datastore/index_file.py:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187/txdav/caldav/datastore/index_file.py:5188-5440
/CalendarServer/branches/users/cdaboo/timezones/txdav/caldav/datastore/index_file.py:7443-7699
/CalendarServer/branches/users/glyph/conn-limit/txdav/caldav/datastore/index_file.py:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge/txdav/caldav/datastore/index_file.py:4971-5080
/CalendarServer/branches/users/glyph/dalify/txdav/caldav/datastore/index_file.py:6932-7023
/CalendarServer/branches/users/glyph/deploybuild/txdav/caldav/datastore/index_file.py:7563-7572
/CalendarServer/branches/users/glyph/dont-start-postgres/txdav/caldav/datastore/index_file.py:6592-6614
/CalendarServer/branches/users/glyph/linux-tests/txdav/caldav/datastore/index_file.py:6893-6900
/CalendarServer/branches/users/glyph/misc-portability-fixes/txdav/caldav/datastore/index_file.py:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6/txdav/caldav/datastore/index_file.py:6322-6334
/CalendarServer/branches/users/glyph/more-deferreds-7/txdav/caldav/datastore/index_file.py:6369
/CalendarServer/branches/users/glyph/new-export/txdav/caldav/datastore/index_file.py:7444-7485
/CalendarServer/branches/users/glyph/oracle-nulls/txdav/caldav/datastore/index_file.py:7340-7351
/CalendarServer/branches/users/glyph/oracle/txdav/caldav/datastore/index_file.py:7106-7155
/CalendarServer/branches/users/glyph/sendfdport/txdav/caldav/datastore/index_file.py:5388-5424
/CalendarServer/branches/users/glyph/sharedpool/txdav/caldav/datastore/index_file.py:6490-6550
/CalendarServer/branches/users/glyph/sql-store/txdav/caldav/datastore/index_file.py:5929-6073
/CalendarServer/branches/users/glyph/subtransactions/txdav/caldav/datastore/index_file.py:7248-7258
/CalendarServer/branches/users/glyph/use-system-twisted/txdav/caldav/datastore/index_file.py:5084-5149
/CalendarServer/branches/users/sagen/applepush/txdav/caldav/datastore/index_file.py:8126-8184
/CalendarServer/branches/users/sagen/inboxitems/txdav/caldav/datastore/index_file.py:7380-7381
/CalendarServer/branches/users/sagen/locations-resources-2/txdav/caldav/datastore/index_file.py:5052-5061
/CalendarServer/branches/users/sagen/locations-resources/txdav/caldav/datastore/index_file.py:5032-5051
/CalendarServer/branches/users/sagen/purge_old_events/txdav/caldav/datastore/index_file.py:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038/txdav/caldav/datastore/index_file.py:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066/txdav/caldav/datastore/index_file.py:4068-4075
/CalendarServer/branches/users/sagen/resources-2/txdav/caldav/datastore/index_file.py:5084-5093
/CalendarServer/branches/users/wsanchez/transations/txdav/caldav/datastore/index_file.py:5515-5593
/CalendarServer/trunk/twistedcaldav/index.py:6322-6394
/CalendarServer/trunk/txdav/caldav/datastore/index_file.py:8130-8260
   + /CalendarServer/branches/config-separation/txdav/caldav/datastore/index_file.py:4379-4443
/CalendarServer/branches/egg-info-351/txdav/caldav/datastore/index_file.py:4589-4625
/CalendarServer/branches/generic-sqlstore/txdav/caldav/datastore/index_file.py:6167-6191
/CalendarServer/branches/new-store-no-caldavfile-2/txdav/caldav/datastore/index_file.py:5936-5981
/CalendarServer/branches/new-store-no-caldavfile/txdav/caldav/datastore/index_file.py:5911-5935
/CalendarServer/branches/new-store/txdav/caldav/datastore/index_file.py:5594-5934
/CalendarServer/branches/users/cdaboo/batchupload-6699/txdav/caldav/datastore/index_file.py:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692/txdav/caldav/datastore/index_file.py:5693-5702
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627/txdav/caldav/datastore/index_file.py:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace/txdav/caldav/datastore/index_file.py:8137-8141
/CalendarServer/branches/users/cdaboo/more-sharing-5591/txdav/caldav/datastore/index_file.py:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464/txdav/caldav/datastore/index_file.py:4465-4957
/CalendarServer/branches/users/cdaboo/pods/txdav/caldav/datastore/index_file.py:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar/txdav/caldav/datastore/index_file.py:7085-7206
/CalendarServer/branches/users/cdaboo/pycard/txdav/caldav/datastore/index_file.py:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes/txdav/caldav/datastore/index_file.py:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070/txdav/caldav/datastore/index_file.py:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187/txdav/caldav/datastore/index_file.py:5188-5440
/CalendarServer/branches/users/cdaboo/timezones/txdav/caldav/datastore/index_file.py:7443-7699
/CalendarServer/branches/users/glyph/conn-limit/txdav/caldav/datastore/index_file.py:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge/txdav/caldav/datastore/index_file.py:4971-5080
/CalendarServer/branches/users/glyph/dalify/txdav/caldav/datastore/index_file.py:6932-7023
/CalendarServer/branches/users/glyph/deploybuild/txdav/caldav/datastore/index_file.py:7563-7572
/CalendarServer/branches/users/glyph/dont-start-postgres/txdav/caldav/datastore/index_file.py:6592-6614
/CalendarServer/branches/users/glyph/linux-tests/txdav/caldav/datastore/index_file.py:6893-6900
/CalendarServer/branches/users/glyph/misc-portability-fixes/txdav/caldav/datastore/index_file.py:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6/txdav/caldav/datastore/index_file.py:6322-6334
/CalendarServer/branches/users/glyph/more-deferreds-7/txdav/caldav/datastore/index_file.py:6369
/CalendarServer/branches/users/glyph/new-export/txdav/caldav/datastore/index_file.py:7444-7485
/CalendarServer/branches/users/glyph/oracle-nulls/txdav/caldav/datastore/index_file.py:7340-7351
/CalendarServer/branches/users/glyph/oracle/txdav/caldav/datastore/index_file.py:7106-7155
/CalendarServer/branches/users/glyph/sendfdport/txdav/caldav/datastore/index_file.py:5388-5424
/CalendarServer/branches/users/glyph/sharedpool/txdav/caldav/datastore/index_file.py:6490-6550
/CalendarServer/branches/users/glyph/sql-store/txdav/caldav/datastore/index_file.py:5929-6073
/CalendarServer/branches/users/glyph/subtransactions/txdav/caldav/datastore/index_file.py:7248-7258
/CalendarServer/branches/users/glyph/use-system-twisted/txdav/caldav/datastore/index_file.py:5084-5149
/CalendarServer/branches/users/sagen/applepush/txdav/caldav/datastore/index_file.py:8126-8184
/CalendarServer/branches/users/sagen/inboxitems/txdav/caldav/datastore/index_file.py:7380-7381
/CalendarServer/branches/users/sagen/locations-resources-2/txdav/caldav/datastore/index_file.py:5052-5061
/CalendarServer/branches/users/sagen/locations-resources/txdav/caldav/datastore/index_file.py:5032-5051
/CalendarServer/branches/users/sagen/purge_old_events/txdav/caldav/datastore/index_file.py:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038/txdav/caldav/datastore/index_file.py:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066/txdav/caldav/datastore/index_file.py:4068-4075
/CalendarServer/branches/users/sagen/resources-2/txdav/caldav/datastore/index_file.py:5084-5093
/CalendarServer/branches/users/wsanchez/transations/txdav/caldav/datastore/index_file.py:5515-5593
/CalendarServer/trunk/twistedcaldav/index.py:6322-6394
/CalendarServer/trunk/txdav/caldav/datastore/index_file.py:8130-8344

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/sql.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/sql.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -328,6 +328,7 @@
         return self._home
 
 
+    # FIXME: resource type is DAV.  This doesn't belong in the data store.  -wsv
     def resourceType(self):
         return ResourceType.calendar #@UndefinedVariable
 
@@ -392,6 +393,8 @@
             ),
         )
 
+    # FIXME: this is DAV-ish.  Data store calendar objects don't have
+    # mime types.  -wsv
     def contentType(self):
         """
         The content type of Calendar objects is text/calendar.
@@ -905,9 +908,10 @@
     @inlineCallbacks
     def component(self):
         """
-        Read calendar data and validate/fix it. Do not raise a store error here if there are unfixable
-        errors as that could prevent the overall request to fail. Instead we will hand bad data off to
-        the caller - that is not ideal but in theory we should have checked everything on the way in and
+        Read calendar data and validate/fix it. Do not raise a store error
+        here for unfixable problems, as that could cause the overall request
+        to fail. Instead we hand bad data off to the caller; that is not
+        ideal, but in theory we should have checked everything on the way in and
         only allowed in good data.
         """
         text = yield self._text()
@@ -925,10 +929,12 @@
         fixed, unfixed = component.validCalendarData(doFix=True, doRaise=False)
 
         if unfixed:
-            self.log_error("Calendar data id=%s had unfixable problems:\n  %s" % (self._resourceID, "\n  ".join(unfixed),))
-        
+            self.log_error("Calendar data id=%s had unfixable problems:\n  %s" %
+                           (self._resourceID, "\n  ".join(unfixed),))
+
         if fixed:
-            self.log_error("Calendar data id=%s had fixable problems:\n  %s" % (self._resourceID, "\n  ".join(fixed),))
+            self.log_error("Calendar data id=%s had fixable problems:\n  %s" %
+                           (self._resourceID, "\n  ".join(fixed),))
 
         returnValue(component)
 
@@ -1284,6 +1290,7 @@
     def created(self):
         return self._created
 
+
     def modified(self):
         return self._modified
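The reworded `component()` docstring in this file describes a deliberate policy: repair what is fixable, log what is not, and return the data either way rather than failing the read. A toy sketch of that validate-with-fix shape, assuming a hypothetical rule list rather than the real `validCalendarData(doFix=True, doRaise=False)` machinery:

```python
def load_component(text, validators):
    """Validate calendar text against (name, check, fix) rules.
    Fixable problems are repaired and recorded; unfixable ones are
    recorded but the (still bad) data is returned anyway, mirroring
    the policy in component(): never fail the request over stored data."""
    fixed, unfixed = [], []
    for name, check, fix in validators:
        if check(text):
            continue                # rule already satisfied
        if fix is not None:
            text = fix(text)        # repairable: fix and note it
            fixed.append(name)
        else:
            unfixed.append(name)    # unrepairable: note it, keep going
    return text, fixed, unfixed
```

In the real code the `fixed` and `unfixed` lists are logged via `log_error`, as shown in the hunk above.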
 

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/test/common.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/test/common.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/test/common.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -55,6 +55,7 @@
 from txdav.caldav.icalendarstore import QuotaExceeded
 from txdav.common.datastore.test.util import deriveQuota
 from txdav.common.datastore.test.util import withSpecialQuota
+from txdav.common.icommondatastore import ConcurrentModification
 from twistedcaldav.ical import Component
 from twistedcaldav.config import config
 
@@ -300,34 +301,33 @@
 
 
     @inlineCallbacks
-    def homeUnderTest(self):
+    def homeUnderTest(self, txn=None):
         """
         Get the calendar home detailed by C{requirements['home1']}.
         """
-        returnValue(
-            (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
-        )
+        if txn is None:
+            txn = self.transactionUnderTest()
+        returnValue((yield txn.calendarHomeWithUID("home1")))
 
 
     @inlineCallbacks
-    def calendarUnderTest(self):
+    def calendarUnderTest(self, txn=None):
         """
         Get the calendar detailed by C{requirements['home1']['calendar_1']}.
         """
         returnValue((yield
-            (yield self.homeUnderTest()).calendarWithName("calendar_1"))
+            (yield self.homeUnderTest(txn)).calendarWithName("calendar_1"))
         )
 
 
     @inlineCallbacks
-    def calendarObjectUnderTest(self, name="1.ics"):
+    def calendarObjectUnderTest(self, name="1.ics", txn=None):
         """
         Get the calendar detailed by
         C{requirements['home1']['calendar_1'][name]}.
         """
-        returnValue(
-            (yield (yield self.calendarUnderTest())
-                .calendarObjectWithName(name)))
+        returnValue((yield (yield self.calendarUnderTest(txn))
+                     .calendarObjectWithName(name)))
 
 
     def test_calendarStoreProvides(self):
@@ -837,6 +837,29 @@
 
 
     @inlineCallbacks
+    def test_calendarObjectRemoveConcurrent(self):
+        """
+        If a transaction, C{A}, is examining an L{ICalendarObject} C{O} while
+        another transaction, C{B}, deletes C{O}, L{O.component()} should raise
+        L{ConcurrentModification}.  (This assumes that we are at the default
+        isolation level, C{READ COMMITTED}.  This test might fail if
+        something changes that.)
+        """
+        calendarObject = yield self.calendarObjectUnderTest()
+        ctxn = self.concurrentTransaction()
+        calendar1prime = yield self.calendarUnderTest(ctxn)
+        yield calendar1prime.removeCalendarObjectWithName("1.ics")
+        yield ctxn.commit()
+        try:
+            retrieval = yield calendarObject.component()
+        except ConcurrentModification:
+            pass
+        else:
+            self.fail("ConcurrentModification not raised, %r returned." %
+                      (retrieval,))
+
+
+    @inlineCallbacks
     def test_ownerCalendarHome(self):
         """
         L{ICalendar.ownerCalendarHome} should match the home UID.


Property changes on: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/test/test_index_file.py
___________________________________________________________________
Modified: svn:mergeinfo
   - /CalendarServer/branches/config-separation/txdav/caldav/datastore/test/test_index_file.py:4379-4443
/CalendarServer/branches/egg-info-351/txdav/caldav/datastore/test/test_index_file.py:4589-4625
/CalendarServer/branches/generic-sqlstore/txdav/caldav/datastore/test/test_index_file.py:6167-6191
/CalendarServer/branches/new-store-no-caldavfile-2/txdav/caldav/datastore/test/test_index_file.py:5936-5981
/CalendarServer/branches/new-store-no-caldavfile/txdav/caldav/datastore/test/test_index_file.py:5911-5935
/CalendarServer/branches/new-store/txdav/caldav/datastore/test/test_index_file.py:5594-5934
/CalendarServer/branches/users/cdaboo/batchupload-6699/txdav/caldav/datastore/test/test_index_file.py:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692/txdav/caldav/datastore/test/test_index_file.py:5693-5702
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627/txdav/caldav/datastore/test/test_index_file.py:3628-3644
/CalendarServer/branches/users/cdaboo/more-sharing-5591/txdav/caldav/datastore/test/test_index_file.py:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464/txdav/caldav/datastore/test/test_index_file.py:4465-4957
/CalendarServer/branches/users/cdaboo/pods/txdav/caldav/datastore/test/test_index_file.py:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar/txdav/caldav/datastore/test/test_index_file.py:7085-7206
/CalendarServer/branches/users/cdaboo/pycard/txdav/caldav/datastore/test/test_index_file.py:7227-7237
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070/txdav/caldav/datastore/test/test_index_file.py:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187/txdav/caldav/datastore/test/test_index_file.py:5188-5440
/CalendarServer/branches/users/cdaboo/timezones/txdav/caldav/datastore/test/test_index_file.py:7443-7699
/CalendarServer/branches/users/glyph/conn-limit/txdav/caldav/datastore/test/test_index_file.py:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge/txdav/caldav/datastore/test/test_index_file.py:4971-5080
/CalendarServer/branches/users/glyph/dalify/txdav/caldav/datastore/test/test_index_file.py:6932-7023
/CalendarServer/branches/users/glyph/deploybuild/txdav/caldav/datastore/test/test_index_file.py:7563-7572
/CalendarServer/branches/users/glyph/dont-start-postgres/txdav/caldav/datastore/test/test_index_file.py:6592-6614
/CalendarServer/branches/users/glyph/linux-tests/txdav/caldav/datastore/test/test_index_file.py:6893-6900
/CalendarServer/branches/users/glyph/misc-portability-fixes/txdav/caldav/datastore/test/test_index_file.py:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6/txdav/caldav/datastore/test/test_index_file.py:6322-6334
/CalendarServer/branches/users/glyph/more-deferreds-7/txdav/caldav/datastore/test/test_index_file.py:6369
/CalendarServer/branches/users/glyph/new-export/txdav/caldav/datastore/test/test_index_file.py:7444-7485
/CalendarServer/branches/users/glyph/oracle-nulls/txdav/caldav/datastore/test/test_index_file.py:7340-7351
/CalendarServer/branches/users/glyph/oracle/txdav/caldav/datastore/test/test_index_file.py:7106-7155
/CalendarServer/branches/users/glyph/sendfdport/txdav/caldav/datastore/test/test_index_file.py:5388-5424
/CalendarServer/branches/users/glyph/sharedpool/txdav/caldav/datastore/test/test_index_file.py:6490-6550
/CalendarServer/branches/users/glyph/sql-store/txdav/caldav/datastore/test/test_index_file.py:5929-6073
/CalendarServer/branches/users/glyph/subtransactions/txdav/caldav/datastore/test/test_index_file.py:7248-7258
/CalendarServer/branches/users/glyph/use-system-twisted/txdav/caldav/datastore/test/test_index_file.py:5084-5149
/CalendarServer/branches/users/sagen/applepush/txdav/caldav/datastore/test/test_index_file.py:8126-8184
/CalendarServer/branches/users/sagen/inboxitems/txdav/caldav/datastore/test/test_index_file.py:7380-7381
/CalendarServer/branches/users/sagen/locations-resources-2/txdav/caldav/datastore/test/test_index_file.py:5052-5061
/CalendarServer/branches/users/sagen/locations-resources/txdav/caldav/datastore/test/test_index_file.py:5032-5051
/CalendarServer/branches/users/sagen/purge_old_events/txdav/caldav/datastore/test/test_index_file.py:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038/txdav/caldav/datastore/test/test_index_file.py:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066/txdav/caldav/datastore/test/test_index_file.py:4068-4075
/CalendarServer/branches/users/sagen/resources-2/txdav/caldav/datastore/test/test_index_file.py:5084-5093
/CalendarServer/branches/users/wsanchez/transations/txdav/caldav/datastore/test/test_index_file.py:5515-5593
/CalendarServer/trunk/twistedcaldav/test/test_index.py:6322-6394
/CalendarServer/trunk/txdav/caldav/datastore/test/test_index_file.py:8130-8260
   + /CalendarServer/branches/config-separation/txdav/caldav/datastore/test/test_index_file.py:4379-4443
/CalendarServer/branches/egg-info-351/txdav/caldav/datastore/test/test_index_file.py:4589-4625
/CalendarServer/branches/generic-sqlstore/txdav/caldav/datastore/test/test_index_file.py:6167-6191
/CalendarServer/branches/new-store-no-caldavfile-2/txdav/caldav/datastore/test/test_index_file.py:5936-5981
/CalendarServer/branches/new-store-no-caldavfile/txdav/caldav/datastore/test/test_index_file.py:5911-5935
/CalendarServer/branches/new-store/txdav/caldav/datastore/test/test_index_file.py:5594-5934
/CalendarServer/branches/users/cdaboo/batchupload-6699/txdav/caldav/datastore/test/test_index_file.py:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692/txdav/caldav/datastore/test/test_index_file.py:5693-5702
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627/txdav/caldav/datastore/test/test_index_file.py:3628-3644
/CalendarServer/branches/users/cdaboo/more-sharing-5591/txdav/caldav/datastore/test/test_index_file.py:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464/txdav/caldav/datastore/test/test_index_file.py:4465-4957
/CalendarServer/branches/users/cdaboo/pods/txdav/caldav/datastore/test/test_index_file.py:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar/txdav/caldav/datastore/test/test_index_file.py:7085-7206
/CalendarServer/branches/users/cdaboo/pycard/txdav/caldav/datastore/test/test_index_file.py:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes/txdav/caldav/datastore/test/test_index_file.py:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070/txdav/caldav/datastore/test/test_index_file.py:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187/txdav/caldav/datastore/test/test_index_file.py:5188-5440
/CalendarServer/branches/users/cdaboo/timezones/txdav/caldav/datastore/test/test_index_file.py:7443-7699
/CalendarServer/branches/users/glyph/conn-limit/txdav/caldav/datastore/test/test_index_file.py:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge/txdav/caldav/datastore/test/test_index_file.py:4971-5080
/CalendarServer/branches/users/glyph/dalify/txdav/caldav/datastore/test/test_index_file.py:6932-7023
/CalendarServer/branches/users/glyph/deploybuild/txdav/caldav/datastore/test/test_index_file.py:7563-7572
/CalendarServer/branches/users/glyph/dont-start-postgres/txdav/caldav/datastore/test/test_index_file.py:6592-6614
/CalendarServer/branches/users/glyph/linux-tests/txdav/caldav/datastore/test/test_index_file.py:6893-6900
/CalendarServer/branches/users/glyph/misc-portability-fixes/txdav/caldav/datastore/test/test_index_file.py:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6/txdav/caldav/datastore/test/test_index_file.py:6322-6334
/CalendarServer/branches/users/glyph/more-deferreds-7/txdav/caldav/datastore/test/test_index_file.py:6369
/CalendarServer/branches/users/glyph/new-export/txdav/caldav/datastore/test/test_index_file.py:7444-7485
/CalendarServer/branches/users/glyph/oracle-nulls/txdav/caldav/datastore/test/test_index_file.py:7340-7351
/CalendarServer/branches/users/glyph/oracle/txdav/caldav/datastore/test/test_index_file.py:7106-7155
/CalendarServer/branches/users/glyph/sendfdport/txdav/caldav/datastore/test/test_index_file.py:5388-5424
/CalendarServer/branches/users/glyph/sharedpool/txdav/caldav/datastore/test/test_index_file.py:6490-6550
/CalendarServer/branches/users/glyph/sql-store/txdav/caldav/datastore/test/test_index_file.py:5929-6073
/CalendarServer/branches/users/glyph/subtransactions/txdav/caldav/datastore/test/test_index_file.py:7248-7258
/CalendarServer/branches/users/glyph/use-system-twisted/txdav/caldav/datastore/test/test_index_file.py:5084-5149
/CalendarServer/branches/users/sagen/applepush/txdav/caldav/datastore/test/test_index_file.py:8126-8184
/CalendarServer/branches/users/sagen/inboxitems/txdav/caldav/datastore/test/test_index_file.py:7380-7381
/CalendarServer/branches/users/sagen/locations-resources-2/txdav/caldav/datastore/test/test_index_file.py:5052-5061
/CalendarServer/branches/users/sagen/locations-resources/txdav/caldav/datastore/test/test_index_file.py:5032-5051
/CalendarServer/branches/users/sagen/purge_old_events/txdav/caldav/datastore/test/test_index_file.py:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038/txdav/caldav/datastore/test/test_index_file.py:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066/txdav/caldav/datastore/test/test_index_file.py:4068-4075
/CalendarServer/branches/users/sagen/resources-2/txdav/caldav/datastore/test/test_index_file.py:5084-5093
/CalendarServer/branches/users/wsanchez/transations/txdav/caldav/datastore/test/test_index_file.py:5515-5593
/CalendarServer/trunk/twistedcaldav/test/test_index.py:6322-6394
/CalendarServer/trunk/txdav/caldav/datastore/test/test_index_file.py:8130-8344

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/test/test_util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/test/test_util.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/test/test_util.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -90,7 +90,7 @@
 
 
     @inlineCallbacks
-    def test_badXdash(self):
+    def test_emptyXdash(self):
         resource = DropboxIDTests.FakeCalendarResource("""BEGIN:VCALENDAR
 VERSION:2.0
 BEGIN:VEVENT
@@ -103,7 +103,7 @@
 END:VCALENDAR
 """)
 
-        self.assertEquals( (yield dropboxIDFromCalendarObject(resource)), "")
+        self.assertEquals( (yield dropboxIDFromCalendarObject(resource)), "12345-67890.dropbox")
 
 
     @inlineCallbacks

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/util.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/datastore/util.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -89,7 +89,7 @@
     # Try "X-APPLE-DROPBOX" first
     dropboxProperty = (yield calendarObject.component(
         )).getFirstPropertyInAnyComponent("X-APPLE-DROPBOX")
-    if dropboxProperty is not None:
+    if dropboxProperty is not None and dropboxProperty.value():
         componentDropboxID = dropboxProperty.value().rstrip("/").split("/")[-1]
         returnValue(componentDropboxID)
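The one-line guard above matters because `dropboxProperty.value()` can be an empty string: previously an empty X-APPLE-DROPBOX produced an empty dropbox ID, whereas now the code falls through to the derived default (as the test_util.py change in this revision shows). A self-contained sketch of the guard, with a hypothetical `default_id` parameter standing in for the fallback derivation:

```python
def dropbox_id_from_property(dropbox_value, default_id):
    """Return the last path segment of an X-APPLE-DROPBOX value,
    falling back to a derived default when the property is absent
    OR empty -- the empty-string case is what the new guard fixes."""
    if dropbox_value:  # rejects both None and ""
        return dropbox_value.rstrip("/").split("/")[-1]
    return default_id
```

A bare `is not None` check would pass the empty string into `rstrip("/").split("/")[-1]` and return `""`, which is the bug the test rename (test_badXdash to test_emptyXdash) documents.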
 

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/icalendarstore.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/icalendarstore.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/caldav/icalendarstore.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -368,6 +368,10 @@
         """
         Retrieve the calendar component for this calendar object.
 
+        @raise ConcurrentModification: if this L{ICalendarObject} has been
+            deleted and committed by another transaction between its creation
+            and the first call to this method.
+
         @return: a C{VCALENDAR} L{VComponent}.
         """
 


Property changes on: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/carddav/datastore/index_file.py
___________________________________________________________________
Modified: svn:mergeinfo
   - /CalendarServer/branches/config-separation/txdav/carddav/datastore/index_file.py:4379-4443
/CalendarServer/branches/egg-info-351/txdav/carddav/datastore/index_file.py:4589-4625
/CalendarServer/branches/generic-sqlstore/txdav/carddav/datastore/index_file.py:6167-6191
/CalendarServer/branches/new-store-no-caldavfile-2/txdav/carddav/datastore/index_file.py:5936-5981
/CalendarServer/branches/new-store-no-caldavfile/txdav/carddav/datastore/index_file.py:5911-5935
/CalendarServer/branches/new-store/txdav/carddav/datastore/index_file.py:5594-5934
/CalendarServer/branches/users/cdaboo/batchupload-6699/txdav/carddav/datastore/index_file.py:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692/txdav/carddav/datastore/index_file.py:5693-5702
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627/txdav/carddav/datastore/index_file.py:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace/txdav/carddav/datastore/index_file.py:8137-8141
/CalendarServer/branches/users/cdaboo/more-sharing-5591/txdav/carddav/datastore/index_file.py:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464/txdav/carddav/datastore/index_file.py:4465-4957
/CalendarServer/branches/users/cdaboo/pods/txdav/carddav/datastore/index_file.py:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar/txdav/carddav/datastore/index_file.py:7085-7206
/CalendarServer/branches/users/cdaboo/pycard/txdav/carddav/datastore/index_file.py:7227-7237
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070/txdav/carddav/datastore/index_file.py:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187/txdav/carddav/datastore/index_file.py:5188-5440
/CalendarServer/branches/users/cdaboo/timezones/txdav/carddav/datastore/index_file.py:7443-7699
/CalendarServer/branches/users/glyph/conn-limit/txdav/carddav/datastore/index_file.py:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge/txdav/carddav/datastore/index_file.py:4971-5080
/CalendarServer/branches/users/glyph/dalify/txdav/carddav/datastore/index_file.py:6932-7023
/CalendarServer/branches/users/glyph/deploybuild/txdav/carddav/datastore/index_file.py:7563-7572
/CalendarServer/branches/users/glyph/dont-start-postgres/txdav/carddav/datastore/index_file.py:6592-6614
/CalendarServer/branches/users/glyph/linux-tests/txdav/carddav/datastore/index_file.py:6893-6900
/CalendarServer/branches/users/glyph/misc-portability-fixes/txdav/carddav/datastore/index_file.py:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6/txdav/carddav/datastore/index_file.py:6322-6334
/CalendarServer/branches/users/glyph/more-deferreds-7/txdav/carddav/datastore/index_file.py:6369
/CalendarServer/branches/users/glyph/new-export/txdav/carddav/datastore/index_file.py:7444-7485
/CalendarServer/branches/users/glyph/oracle-nulls/txdav/carddav/datastore/index_file.py:7340-7351
/CalendarServer/branches/users/glyph/oracle/txdav/carddav/datastore/index_file.py:7106-7155
/CalendarServer/branches/users/glyph/sendfdport/txdav/carddav/datastore/index_file.py:5388-5424
/CalendarServer/branches/users/glyph/sharedpool/txdav/carddav/datastore/index_file.py:6490-6550
/CalendarServer/branches/users/glyph/sql-store/txdav/carddav/datastore/index_file.py:5929-6073
/CalendarServer/branches/users/glyph/subtransactions/txdav/carddav/datastore/index_file.py:7248-7258
/CalendarServer/branches/users/glyph/use-system-twisted/txdav/carddav/datastore/index_file.py:5084-5149
/CalendarServer/branches/users/sagen/applepush/txdav/carddav/datastore/index_file.py:8126-8184
/CalendarServer/branches/users/sagen/inboxitems/txdav/carddav/datastore/index_file.py:7380-7381
/CalendarServer/branches/users/sagen/locations-resources-2/txdav/carddav/datastore/index_file.py:5052-5061
/CalendarServer/branches/users/sagen/locations-resources/txdav/carddav/datastore/index_file.py:5032-5051
/CalendarServer/branches/users/sagen/purge_old_events/txdav/carddav/datastore/index_file.py:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038/txdav/carddav/datastore/index_file.py:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066/txdav/carddav/datastore/index_file.py:4068-4075
/CalendarServer/branches/users/sagen/resources-2/txdav/carddav/datastore/index_file.py:5084-5093
/CalendarServer/branches/users/wsanchez/transations/txdav/carddav/datastore/index_file.py:5515-5593
/CalendarServer/trunk/twistedcaldav/vcardindex.py:6322-6394
/CalendarServer/trunk/txdav/carddav/datastore/index_file.py:8130-8260
   + /CalendarServer/branches/config-separation/txdav/carddav/datastore/index_file.py:4379-4443
/CalendarServer/branches/egg-info-351/txdav/carddav/datastore/index_file.py:4589-4625
/CalendarServer/branches/generic-sqlstore/txdav/carddav/datastore/index_file.py:6167-6191
/CalendarServer/branches/new-store-no-caldavfile-2/txdav/carddav/datastore/index_file.py:5936-5981
/CalendarServer/branches/new-store-no-caldavfile/txdav/carddav/datastore/index_file.py:5911-5935
/CalendarServer/branches/new-store/txdav/carddav/datastore/index_file.py:5594-5934
/CalendarServer/branches/users/cdaboo/batchupload-6699/txdav/carddav/datastore/index_file.py:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692/txdav/carddav/datastore/index_file.py:5693-5702
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627/txdav/carddav/datastore/index_file.py:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace/txdav/carddav/datastore/index_file.py:8137-8141
/CalendarServer/branches/users/cdaboo/more-sharing-5591/txdav/carddav/datastore/index_file.py:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464/txdav/carddav/datastore/index_file.py:4465-4957
/CalendarServer/branches/users/cdaboo/pods/txdav/carddav/datastore/index_file.py:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar/txdav/carddav/datastore/index_file.py:7085-7206
/CalendarServer/branches/users/cdaboo/pycard/txdav/carddav/datastore/index_file.py:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes/txdav/carddav/datastore/index_file.py:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070/txdav/carddav/datastore/index_file.py:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187/txdav/carddav/datastore/index_file.py:5188-5440
/CalendarServer/branches/users/cdaboo/timezones/txdav/carddav/datastore/index_file.py:7443-7699
/CalendarServer/branches/users/glyph/conn-limit/txdav/carddav/datastore/index_file.py:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge/txdav/carddav/datastore/index_file.py:4971-5080
/CalendarServer/branches/users/glyph/dalify/txdav/carddav/datastore/index_file.py:6932-7023
/CalendarServer/branches/users/glyph/deploybuild/txdav/carddav/datastore/index_file.py:7563-7572
/CalendarServer/branches/users/glyph/dont-start-postgres/txdav/carddav/datastore/index_file.py:6592-6614
/CalendarServer/branches/users/glyph/linux-tests/txdav/carddav/datastore/index_file.py:6893-6900
/CalendarServer/branches/users/glyph/misc-portability-fixes/txdav/carddav/datastore/index_file.py:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6/txdav/carddav/datastore/index_file.py:6322-6334
/CalendarServer/branches/users/glyph/more-deferreds-7/txdav/carddav/datastore/index_file.py:6369
/CalendarServer/branches/users/glyph/new-export/txdav/carddav/datastore/index_file.py:7444-7485
/CalendarServer/branches/users/glyph/oracle-nulls/txdav/carddav/datastore/index_file.py:7340-7351
/CalendarServer/branches/users/glyph/oracle/txdav/carddav/datastore/index_file.py:7106-7155
/CalendarServer/branches/users/glyph/sendfdport/txdav/carddav/datastore/index_file.py:5388-5424
/CalendarServer/branches/users/glyph/sharedpool/txdav/carddav/datastore/index_file.py:6490-6550
/CalendarServer/branches/users/glyph/sql-store/txdav/carddav/datastore/index_file.py:5929-6073
/CalendarServer/branches/users/glyph/subtransactions/txdav/carddav/datastore/index_file.py:7248-7258
/CalendarServer/branches/users/glyph/use-system-twisted/txdav/carddav/datastore/index_file.py:5084-5149
/CalendarServer/branches/users/sagen/applepush/txdav/carddav/datastore/index_file.py:8126-8184
/CalendarServer/branches/users/sagen/inboxitems/txdav/carddav/datastore/index_file.py:7380-7381
/CalendarServer/branches/users/sagen/locations-resources-2/txdav/carddav/datastore/index_file.py:5052-5061
/CalendarServer/branches/users/sagen/locations-resources/txdav/carddav/datastore/index_file.py:5032-5051
/CalendarServer/branches/users/sagen/purge_old_events/txdav/carddav/datastore/index_file.py:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038/txdav/carddav/datastore/index_file.py:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066/txdav/carddav/datastore/index_file.py:4068-4075
/CalendarServer/branches/users/sagen/resources-2/txdav/carddav/datastore/index_file.py:5084-5093
/CalendarServer/branches/users/wsanchez/transations/txdav/carddav/datastore/index_file.py:5515-5593
/CalendarServer/trunk/twistedcaldav/vcardindex.py:6322-6394
/CalendarServer/trunk/txdav/carddav/datastore/index_file.py:8130-8344


Property changes on: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/carddav/datastore/test/test_index_file.py
___________________________________________________________________
Modified: svn:mergeinfo
   - /CalendarServer/branches/config-separation/txdav/carddav/datastore/test/test_index_file.py:4379-4443
/CalendarServer/branches/egg-info-351/txdav/carddav/datastore/test/test_index_file.py:4589-4625
/CalendarServer/branches/generic-sqlstore/txdav/carddav/datastore/test/test_index_file.py:6167-6191
/CalendarServer/branches/new-store-no-caldavfile-2/txdav/carddav/datastore/test/test_index_file.py:5936-5981
/CalendarServer/branches/new-store-no-caldavfile/txdav/carddav/datastore/test/test_index_file.py:5911-5935
/CalendarServer/branches/new-store/txdav/carddav/datastore/test/test_index_file.py:5594-5934
/CalendarServer/branches/users/cdaboo/batchupload-6699/txdav/carddav/datastore/test/test_index_file.py:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692/txdav/carddav/datastore/test/test_index_file.py:5693-5702
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627/txdav/carddav/datastore/test/test_index_file.py:3628-3644
/CalendarServer/branches/users/cdaboo/more-sharing-5591/txdav/carddav/datastore/test/test_index_file.py:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464/txdav/carddav/datastore/test/test_index_file.py:4465-4957
/CalendarServer/branches/users/cdaboo/pods/txdav/carddav/datastore/test/test_index_file.py:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar/txdav/carddav/datastore/test/test_index_file.py:7085-7206
/CalendarServer/branches/users/cdaboo/pycard/txdav/carddav/datastore/test/test_index_file.py:7227-7237
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070/txdav/carddav/datastore/test/test_index_file.py:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187/txdav/carddav/datastore/test/test_index_file.py:5188-5440
/CalendarServer/branches/users/cdaboo/timezones/txdav/carddav/datastore/test/test_index_file.py:7443-7699
/CalendarServer/branches/users/glyph/conn-limit/txdav/carddav/datastore/test/test_index_file.py:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge/txdav/carddav/datastore/test/test_index_file.py:4971-5080
/CalendarServer/branches/users/glyph/dalify/txdav/carddav/datastore/test/test_index_file.py:6932-7023
/CalendarServer/branches/users/glyph/deploybuild/txdav/carddav/datastore/test/test_index_file.py:7563-7572
/CalendarServer/branches/users/glyph/dont-start-postgres/txdav/carddav/datastore/test/test_index_file.py:6592-6614
/CalendarServer/branches/users/glyph/linux-tests/txdav/carddav/datastore/test/test_index_file.py:6893-6900
/CalendarServer/branches/users/glyph/misc-portability-fixes/txdav/carddav/datastore/test/test_index_file.py:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6/txdav/carddav/datastore/test/test_index_file.py:6322-6334
/CalendarServer/branches/users/glyph/more-deferreds-7/txdav/carddav/datastore/test/test_index_file.py:6369
/CalendarServer/branches/users/glyph/new-export/txdav/carddav/datastore/test/test_index_file.py:7444-7485
/CalendarServer/branches/users/glyph/oracle-nulls/txdav/carddav/datastore/test/test_index_file.py:7340-7351
/CalendarServer/branches/users/glyph/oracle/txdav/carddav/datastore/test/test_index_file.py:7106-7155
/CalendarServer/branches/users/glyph/sendfdport/txdav/carddav/datastore/test/test_index_file.py:5388-5424
/CalendarServer/branches/users/glyph/sharedpool/txdav/carddav/datastore/test/test_index_file.py:6490-6550
/CalendarServer/branches/users/glyph/sql-store/txdav/carddav/datastore/test/test_index_file.py:5929-6073
/CalendarServer/branches/users/glyph/subtransactions/txdav/carddav/datastore/test/test_index_file.py:7248-7258
/CalendarServer/branches/users/glyph/use-system-twisted/txdav/carddav/datastore/test/test_index_file.py:5084-5149
/CalendarServer/branches/users/sagen/applepush/txdav/carddav/datastore/test/test_index_file.py:8126-8184
/CalendarServer/branches/users/sagen/inboxitems/txdav/carddav/datastore/test/test_index_file.py:7380-7381
/CalendarServer/branches/users/sagen/locations-resources-2/txdav/carddav/datastore/test/test_index_file.py:5052-5061
/CalendarServer/branches/users/sagen/locations-resources/txdav/carddav/datastore/test/test_index_file.py:5032-5051
/CalendarServer/branches/users/sagen/purge_old_events/txdav/carddav/datastore/test/test_index_file.py:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038/txdav/carddav/datastore/test/test_index_file.py:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066/txdav/carddav/datastore/test/test_index_file.py:4068-4075
/CalendarServer/branches/users/sagen/resources-2/txdav/carddav/datastore/test/test_index_file.py:5084-5093
/CalendarServer/branches/users/wsanchez/transations/txdav/carddav/datastore/test/test_index_file.py:5515-5593
/CalendarServer/trunk/twistedcaldav/test/test_vcardindex.py:6322-6394
/CalendarServer/trunk/txdav/carddav/datastore/test/test_index_file.py:8130-8260
   + /CalendarServer/branches/config-separation/txdav/carddav/datastore/test/test_index_file.py:4379-4443
/CalendarServer/branches/egg-info-351/txdav/carddav/datastore/test/test_index_file.py:4589-4625
/CalendarServer/branches/generic-sqlstore/txdav/carddav/datastore/test/test_index_file.py:6167-6191
/CalendarServer/branches/new-store-no-caldavfile-2/txdav/carddav/datastore/test/test_index_file.py:5936-5981
/CalendarServer/branches/new-store-no-caldavfile/txdav/carddav/datastore/test/test_index_file.py:5911-5935
/CalendarServer/branches/new-store/txdav/carddav/datastore/test/test_index_file.py:5594-5934
/CalendarServer/branches/users/cdaboo/batchupload-6699/txdav/carddav/datastore/test/test_index_file.py:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692/txdav/carddav/datastore/test/test_index_file.py:5693-5702
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627/txdav/carddav/datastore/test/test_index_file.py:3628-3644
/CalendarServer/branches/users/cdaboo/more-sharing-5591/txdav/carddav/datastore/test/test_index_file.py:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464/txdav/carddav/datastore/test/test_index_file.py:4465-4957
/CalendarServer/branches/users/cdaboo/pods/txdav/carddav/datastore/test/test_index_file.py:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar/txdav/carddav/datastore/test/test_index_file.py:7085-7206
/CalendarServer/branches/users/cdaboo/pycard/txdav/carddav/datastore/test/test_index_file.py:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes/txdav/carddav/datastore/test/test_index_file.py:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070/txdav/carddav/datastore/test/test_index_file.py:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187/txdav/carddav/datastore/test/test_index_file.py:5188-5440
/CalendarServer/branches/users/cdaboo/timezones/txdav/carddav/datastore/test/test_index_file.py:7443-7699
/CalendarServer/branches/users/glyph/conn-limit/txdav/carddav/datastore/test/test_index_file.py:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge/txdav/carddav/datastore/test/test_index_file.py:4971-5080
/CalendarServer/branches/users/glyph/dalify/txdav/carddav/datastore/test/test_index_file.py:6932-7023
/CalendarServer/branches/users/glyph/deploybuild/txdav/carddav/datastore/test/test_index_file.py:7563-7572
/CalendarServer/branches/users/glyph/dont-start-postgres/txdav/carddav/datastore/test/test_index_file.py:6592-6614
/CalendarServer/branches/users/glyph/linux-tests/txdav/carddav/datastore/test/test_index_file.py:6893-6900
/CalendarServer/branches/users/glyph/misc-portability-fixes/txdav/carddav/datastore/test/test_index_file.py:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6/txdav/carddav/datastore/test/test_index_file.py:6322-6334
/CalendarServer/branches/users/glyph/more-deferreds-7/txdav/carddav/datastore/test/test_index_file.py:6369
/CalendarServer/branches/users/glyph/new-export/txdav/carddav/datastore/test/test_index_file.py:7444-7485
/CalendarServer/branches/users/glyph/oracle-nulls/txdav/carddav/datastore/test/test_index_file.py:7340-7351
/CalendarServer/branches/users/glyph/oracle/txdav/carddav/datastore/test/test_index_file.py:7106-7155
/CalendarServer/branches/users/glyph/sendfdport/txdav/carddav/datastore/test/test_index_file.py:5388-5424
/CalendarServer/branches/users/glyph/sharedpool/txdav/carddav/datastore/test/test_index_file.py:6490-6550
/CalendarServer/branches/users/glyph/sql-store/txdav/carddav/datastore/test/test_index_file.py:5929-6073
/CalendarServer/branches/users/glyph/subtransactions/txdav/carddav/datastore/test/test_index_file.py:7248-7258
/CalendarServer/branches/users/glyph/use-system-twisted/txdav/carddav/datastore/test/test_index_file.py:5084-5149
/CalendarServer/branches/users/sagen/applepush/txdav/carddav/datastore/test/test_index_file.py:8126-8184
/CalendarServer/branches/users/sagen/inboxitems/txdav/carddav/datastore/test/test_index_file.py:7380-7381
/CalendarServer/branches/users/sagen/locations-resources-2/txdav/carddav/datastore/test/test_index_file.py:5052-5061
/CalendarServer/branches/users/sagen/locations-resources/txdav/carddav/datastore/test/test_index_file.py:5032-5051
/CalendarServer/branches/users/sagen/purge_old_events/txdav/carddav/datastore/test/test_index_file.py:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038/txdav/carddav/datastore/test/test_index_file.py:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066/txdav/carddav/datastore/test/test_index_file.py:4068-4075
/CalendarServer/branches/users/sagen/resources-2/txdav/carddav/datastore/test/test_index_file.py:5084-5093
/CalendarServer/branches/users/wsanchez/transations/txdav/carddav/datastore/test/test_index_file.py:5515-5593
/CalendarServer/trunk/twistedcaldav/test/test_vcardindex.py:6322-6394
/CalendarServer/trunk/txdav/carddav/datastore/test/test_index_file.py:8130-8344

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/carddav/iaddressbookstore.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/carddav/iaddressbookstore.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/carddav/iaddressbookstore.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -254,6 +254,10 @@
         """
         Retrieve the addressbook component for this addressbook object.
 
+        @raise ConcurrentModification: if this L{IAddressBookObject} has been
+            deleted and committed by another transaction between its creation
+            and the first call to this method.
+
         @return: a C{VCARD} L{VComponent}.
         """
 

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/file.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/file.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/file.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -111,9 +111,10 @@
         self._transactionClass = CommonStoreTransaction
         self._propertyStoreClass = propertyStoreClass
         self.quota = quota
+        self._migrating = False
 
 
-    def newTransaction(self, name='no name', migrating=False):
+    def newTransaction(self, name='no name'):
         """
         Create a new transaction.
 
@@ -125,10 +126,17 @@
             self.enableCalendars,
             self.enableAddressBooks,
             self._notifierFactory,
-            migrating,
+            self._migrating,
         )
 
 
+    def setMigrating(self, state):
+        """
+        Set the "migrating" state
+        """
+        self._migrating = state
+
+
     def _homesOfType(self, storeType):
         """
         Common implementation of L{ICalendarStore.eachCalendarHome} and

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/sql.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/sql.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -75,6 +75,7 @@
 from txdav.base.propertystore.none import PropertyStore as NonePropertyStore
 from txdav.base.propertystore.sql import PropertyStore
 
+from txdav.common.icommondatastore import ConcurrentModification
 from twistedcaldav.customxml import NotificationType
 from twistedcaldav.dateops import datetimeMktime, parseSQLTimestamp,\
     pyCalendarTodatetime
@@ -141,6 +142,7 @@
         self.enableAddressBooks = enableAddressBooks
         self.label = label
         self.quota = quota
+        self._migrating = False
 
 
     def eachCalendarHome(self):
@@ -158,7 +160,7 @@
 
 
 
-    def newTransaction(self, label="unlabeled", migrating=False):
+    def newTransaction(self, label="unlabeled"):
         """
         @see L{IDataStore.newTransaction}
         """
@@ -167,11 +169,18 @@
             self.sqlTxnFactory(),
             self.enableCalendars,
             self.enableAddressBooks,
-            None if migrating else self.notifierFactory,
+            None if self._migrating else self.notifierFactory,
             label,
-            migrating,
+            self._migrating,
         )
 
+    def setMigrating(self, state):
+        """
+        Set the "migrating" state
+        """
+        self._migrating = state
+
+
 class TransactionStatsCollector(object):
     
     def __init__(self):
@@ -2693,12 +2702,16 @@
     @inlineCallbacks
     def _text(self):
         if self._objectText is None:
-            text = (
+            texts = (
                 yield self._textByIDQuery.on(self._txn,
                                              resourceID=self._resourceID)
-            )[0][0]
-            self._objectText = text
-            returnValue(text)
+            )
+            if texts:
+                text = texts[0][0]
+                self._objectText = text
+                returnValue(text)
+            else:
+                raise ConcurrentModification()
         else:
             returnValue(self._objectText)
 
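The hunk above replaces an unconditional `rows[0][0]` (which raised a bare C{IndexError} when the row had vanished) with an explicit emptiness check. The hypothetical helper below distills that logic; `rows` is shaped like the `_textByIDQuery` result, `[(text,)]` or `[]`:

```python
# Stand-in for txdav.common.icommondatastore.ConcurrentModification.
class ConcurrentModification(Exception):
    pass

def textFromRows(rows):
    # rows: [(text,)] when the resource row still exists, [] otherwise.
    if rows:
        return rows[0][0]
    # An empty result now signals a concurrent delete explicitly,
    # instead of surfacing as an IndexError.
    raise ConcurrentModification()

assert textFromRows([("BEGIN:VCALENDAR\r\n",)]) == "BEGIN:VCALENDAR\r\n"
try:
    textFromRows([])
except ConcurrentModification:
    pass
```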

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/test/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/test/util.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/test/util.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -45,6 +45,7 @@
 from twext.enterprise.adbapi2 import ConnectionPool
 from twisted.internet.defer import returnValue
 from twistedcaldav.notify import Notifier, NodeCreationException
+from twext.enterprise.ienterprise import AlreadyFinishedError
 from twistedcaldav.vcard import Component as ABComponent
 
 md5key = PropertyName.fromElement(TwistedGETContentMD5)
@@ -408,7 +409,6 @@
     lastTransaction = None
     savedStore = None
     assertProvides = assertProvides
-    lastCommitSetUp = False
 
     def transactionUnderTest(self):
         """
@@ -416,17 +416,29 @@
         C{lastTransaction}.  Also makes sure to use the same store, saving the
         value from C{storeUnderTest}.
         """
-        if not self.lastCommitSetUp:
-            self.lastCommitSetUp = True
-            self.addCleanup(self.commitLast)
-        if self.lastTransaction is not None:
-            return self.lastTransaction
+        if self.lastTransaction is None:
+            self.lastTransaction = self.concurrentTransaction()
+        return self.lastTransaction
+
+
+    def concurrentTransaction(self):
+        """
+        Create a transaction from C{storeUnderTest} and save it for later
+        clean-up.
+        """
         if self.savedStore is None:
             self.savedStore = self.storeUnderTest()
         self.counter += 1
-        txn = self.lastTransaction = self.savedStore.newTransaction(
+        txn = self.savedStore.newTransaction(
             self.id() + " #" + str(self.counter)
         )
+        @inlineCallbacks
+        def maybeCommitThis():
+            try:
+                yield txn.commit()
+            except AlreadyFinishedError:
+                pass
+        self.addCleanup(maybeCommitThis)
         return txn
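Distilled version of the per-transaction cleanup idiom added above: every transaction registers a cleanup that commits it, and if the test already finished the transaction, the commit raises C{AlreadyFinishedError}, which the cleanup swallows. `Txn` and the `cleanups` list are simplified stand-ins for the Twisted transaction and `TestCase.addCleanup` machinery:

```python
class AlreadyFinishedError(Exception):
    pass

class Txn(object):
    def __init__(self):
        self.finished = False
    def commit(self):
        if self.finished:
            raise AlreadyFinishedError()
        self.finished = True

cleanups = []  # stand-in for TestCase.addCleanup registrations

def concurrentTransaction():
    txn = Txn()
    def maybeCommitThis():
        try:
            txn.commit()
        except AlreadyFinishedError:
            pass  # the test already committed or aborted it
    cleanups.append(maybeCommitThis)
    return txn

a = concurrentTransaction()   # committed explicitly by the "test"
b = concurrentTransaction()   # left open; cleanup commits it
a.commit()
for cleanup in cleanups:
    cleanup()                 # no AlreadyFinishedError escapes
assert a.finished and b.finished
```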
 
 

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/upgrade/migrate.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/upgrade/migrate.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/datastore/upgrade/migrate.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -128,6 +128,9 @@
         @return: a Deferred which fires when the migration is complete.
         """
         self.log_warn("Beginning filesystem -> database upgrade.")
+
+        self.sqlStore.setMigrating(True)
+
         for homeType, migrateFunc, eachFunc, destFunc, _ignore_topPathName in [
             ("calendar", migrateCalendarHome,
                 self.fileStore.eachCalendarHome,
@@ -141,7 +144,7 @@
             for fileTxn, fileHome in eachFunc():
                 uid = fileHome.uid()
                 self.log_warn("Migrating %s UID %r" % (homeType, uid))
-                sqlTxn = self.sqlStore.newTransaction(migrating=True)
+                sqlTxn = self.sqlStore.newTransaction()
                 homeGetter = destFunc(sqlTxn)
                 if (yield homeGetter(uid, create=False)) is not None:
                     self.log_warn(
@@ -177,9 +180,11 @@
             for fp in sqlAttachmentsPath.walk():
                 os.chown(fp.path, uid, gid)
 
+        self.sqlStore.setMigrating(False)
         self.log_warn(
             "Filesystem upgrade complete, launching database service."
         )
+
         # see http://twistedmatrix.com/trac/ticket/4649
         reactor.callLater(0, self.wrappedService.setServiceParent, self.parent)
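The migration now flips store-level state once around the whole upgrade instead of passing `migrating=True` to every `newTransaction()` call. A hedged sketch of that bracketing; `FakeSQLStore` and `migrateHomes` are hypothetical stand-ins, and the try/finally (which the changeset itself does not use) ensures the flag is cleared even if a home fails to migrate:

```python
class FakeSQLStore(object):
    def __init__(self):
        self._migrating = False
    def setMigrating(self, state):
        self._migrating = state

def migrateHomes(sqlStore):
    # While this runs, notifications and etag changes are suppressed.
    assert sqlStore._migrating

def doUpgrade(sqlStore):
    sqlStore.setMigrating(True)
    try:
        migrateHomes(sqlStore)
    finally:
        # Clear the flag even on failure (defensive variant of the
        # unconditional setMigrating(False) in the hunk above).
        sqlStore.setMigrating(False)

store = FakeSQLStore()
doUpgrade(store)
assert store._migrating is False
```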
 

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/icommondatastore.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/icommondatastore.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/common/icommondatastore.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -35,6 +35,7 @@
     "NotFoundError",
     "NoSuchHomeChildError",
     "NoSuchObjectResourceError",
+    "ConcurrentModification",
     "InvalidObjectResourceError",
     "InternalDataStoreError",
 ]
@@ -98,6 +99,18 @@
     The requested object resource does not exist.
     """
 
+class ConcurrentModification(NotFoundError):
+    """
+    Despite being loaded in the current transaction, the object whose data is
+    being requested has been deleted or modified in another transaction, and
+    therefore that data can no longer be retrieved.
+
+    (Note: in the future we should be able to avoid these types of errors with
+    more usage of locking, but until the impact of that on performance is
+    determined, callers of C{component()} need to be aware that this can
+    happen.)
+    """
+
 class InvalidObjectResourceError(CommonStoreError):
     """
     Invalid object resource data.

Modified: CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/idav.py
===================================================================
--- CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/idav.py	2011-11-28 21:07:10 UTC (rev 8345)
+++ CalendarServer/branches/users/cdaboo/component-set-fixes/txdav/idav.py	2011-11-28 21:07:52 UTC (rev 8346)
@@ -116,7 +116,16 @@
         @rtype: L{ITransaction}
         """
 
+    def setMigrating(state):
+        """
+        Set the "migrating" state to either True or False.  This state is
+        used to suppress push notifications and etag changes.
 
+        @param state: the boolean value to set the migrating state to
+        @type state: C{boolean}
+        """
+
+
 class IDataStoreObject(Interface):
     """
     An L{IDataStoreObject} is an object stored in an L{IDataStore}.
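Condensed model of the new C{IDataStore.setMigrating} contract documented above: the "migrating" flag lives on the store and is read by `newTransaction()`, rather than being a per-call argument. `Store` and `Transaction` here are simplified stand-ins for the file- and SQL-store classes changed earlier in this diff:

```python
class Transaction(object):
    def __init__(self, label, migrating):
        self.label = label
        # When True, push notifications and etag changes are suppressed.
        self.migrating = migrating

class Store(object):
    def __init__(self):
        self._migrating = False

    def setMigrating(self, state):
        """Set the "migrating" state to either True or False."""
        self._migrating = state

    def newTransaction(self, label="unlabeled"):
        # Callers no longer pass migrating=...; the store decides.
        return Transaction(label, self._migrating)

store = Store()
assert store.newTransaction().migrating is False
store.setMigrating(True)
assert store.newTransaction("upgrade").migrating is True
store.setMigrating(False)
```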