[CalendarServer-changes] [10098] CalendarServer/branches/users/cdaboo/managed-attachments

source_changes@macosforge.org
Wed Nov 28 09:35:29 PST 2012


Revision: 10098
          http://trac.calendarserver.org//changeset/10098
Author:   cdaboo@apple.com
Date:     2012-11-28 09:35:29 -0800 (Wed, 28 Nov 2012)
Log Message:
-----------
Merge from trunk.

Modified Paths:
--------------
    CalendarServer/branches/users/cdaboo/managed-attachments/bin/calendarserver_command_gateway
    CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tap/util.py
    CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/backup_pg.py
    CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/gateway.py
    CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/principals.py
    CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/cmd.py
    CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/terminal.py
    CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/test/test_vfs.py
    CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/vfs.py
    CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/test/gateway/caldavd.plist
    CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/test/gateway/users-groups.xml
    CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/test/test_gateway.py
    CalendarServer/branches/users/cdaboo/managed-attachments/conf/caldavd-apple.plist
    CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/calendarmigrator.py
    CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/calendarpromotion.py
    CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/test/test_migrator.py
    CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/test/test_promotion.py
    CalendarServer/branches/users/cdaboo/managed-attachments/doc/Admin/MultiServerDeployment.rst
    CalendarServer/branches/users/cdaboo/managed-attachments/doc/Extensions/caldav-proxy.txt
    CalendarServer/branches/users/cdaboo/managed-attachments/doc/Extensions/caldav-proxy.xml
    CalendarServer/branches/users/cdaboo/managed-attachments/doc/calendarserver_manage_principals.8
    CalendarServer/branches/users/cdaboo/managed-attachments/support/Makefile.Apple
    CalendarServer/branches/users/cdaboo/managed-attachments/support/build.sh
    CalendarServer/branches/users/cdaboo/managed-attachments/twext/enterprise/dal/test/test_parseschema.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/aggregate.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/augment.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/directory.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/idirectory.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/ldapdirectory.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/principal.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/augments.xml
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_directory.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_ldapdirectory.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_principal.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_xmlfile.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/xmlaugmentsparser.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/scheduling/processing.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/stdconfig.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/test/test_xmlutil.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/test/util.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/upgrade.py
    CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/xmlutil.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/base/datastore/subpostgres.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/base/datastore/test/test_subpostgres.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/datastore/test/common.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/datastore/test/test_file.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/datastore/test/test_sql.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/icalendarstore.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/datastore/test/common.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/datastore/test/test_file.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/datastore/test/test_sql.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/iaddressbookstore.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/file.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/sql.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/sql_schema/current.sql
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/test/util.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/upgrade/migrate.py
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/upgrade/test/test_migrate.py

Added Paths:
-----------
    CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/calendarcommonextra.py
    CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/test/test_commonextra.py
    CalendarServer/branches/users/cdaboo/managed-attachments/lib-patches/pycrypto/
    CalendarServer/branches/users/cdaboo/managed-attachments/lib-patches/pycrypto/__init__.py.patch
    CalendarServer/branches/users/cdaboo/managed-attachments/txdav/base/datastore/test/importFile.sql

Removed Paths:
-------------
    CalendarServer/branches/users/cdaboo/managed-attachments/contrib/create_caldavd_db.sh
    CalendarServer/branches/users/cdaboo/managed-attachments/lib-patches/pycrypto/__init__.py.patch

Property Changed:
----------------
    CalendarServer/branches/users/cdaboo/managed-attachments/


Property changes on: CalendarServer/branches/users/cdaboo/managed-attachments
___________________________________________________________________
Modified: svn:mergeinfo
   - /CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:9985-9989
   + /CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api:10048-10073
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:9985-10097

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/bin/calendarserver_command_gateway
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/bin/calendarserver_command_gateway	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/bin/calendarserver_command_gateway	2012-11-28 17:35:29 UTC (rev 10098)
@@ -16,8 +16,13 @@
 # limitations under the License.
 ##
 
+import os
 import sys
 
+# In OS X Server context, add to PATH to find Postgres utilities (initdb, pg_ctl)
+if "Server.app" in sys.argv[0]:
+    os.environ["PATH"] += ":" + os.path.join(os.path.dirname(os.path.dirname(sys.argv[0])), "bin")
+
 #PYTHONPATH
 
 if __name__ == "__main__":

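The hunk above appends Server.app's sibling bin directory to PATH so the spawned Postgres utilities can be located. A minimal standalone sketch of that adjustment (the helper name is hypothetical, not part of this commit):

    import os
    import sys

    def extendPathForServerApp(argv0):
        # When launched from inside Server.app, derive the "bin" directory
        # beside this tool's parent directory (it holds initdb and pg_ctl)
        # and append it to PATH for any subprocesses we spawn.
        if "Server.app" in argv0:
            binDir = os.path.join(os.path.dirname(os.path.dirname(argv0)), "bin")
            os.environ["PATH"] += ":" + binDir

    # E.g. an argv0 of ".../ServerRoot/usr/sbin/tool" appends ".../usr/bin".
    extendPathForServerApp(sys.argv[0])
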
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tap/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tap/util.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tap/util.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -131,7 +131,8 @@
         maxConnections=config.Postgres.MaxConnections,
         options=config.Postgres.Options,
         uid=uid, gid=gid,
-        spawnedDBUser=config.SpawnedDBUser
+        spawnedDBUser=config.SpawnedDBUser,
+        importFileName=config.DBImportFile
     )
 
 

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/backup_pg.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/backup_pg.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/backup_pg.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -112,7 +112,7 @@
             print e.output
         raise BackupError(
             "%s failed:\n%s (exit code = %d)" %
-            (PGDUMP, e.output, e.returncode)
+            (PSQL, e.output, e.returncode)
         )
 
 

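The one-line fix above corrects an error message that named PGDUMP for a failure that actually came from PSQL. A sketch of the surrounding pattern, with BackupError and the PSQL path assumed here for illustration:

    import subprocess

    PSQL = "/usr/bin/psql"  # assumed path, for illustration only

    class BackupError(Exception):
        pass

    def runPsql(args):
        try:
            return subprocess.check_output([PSQL] + args, stderr=subprocess.STDOUT)
        except subprocess.CalledProcessError, e:
            # Report the command that actually failed, not its neighbor.
            raise BackupError(
                "%s failed:\n%s (exit code = %d)" % (PSQL, e.output, e.returncode)
            )
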
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/gateway.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/gateway.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/gateway.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -17,28 +17,25 @@
 ##
 
 from getopt import getopt, GetoptError
-from grp import getgrnam
-from pwd import getpwnam
 import os
 import sys
 import xml
 
 from twext.python.plistlib import readPlistFromString, writePlistToString
 
-from twisted.internet import reactor
 from twisted.internet.defer import inlineCallbacks
-from twisted.python.util import switchUID
-from twistedcaldav.config import config, ConfigurationError
 from twistedcaldav.directory.directory import DirectoryError
 from txdav.xml import element as davxml
 
-from calendarserver.tools.util import loadConfig, getDirectory, setupMemcached, checkDirectory
 from calendarserver.tools.principals import (
     principalForPrincipalID, proxySubprincipal, addProxy, removeProxy,
     getProxies, setProxies, ProxyError, ProxyWarning, updateRecord
 )
+from calendarserver.tools.purge import WorkerService, purgeOldEvents, DEFAULT_BATCH_SIZE, DEFAULT_RETAIN_DAYS
+from calendarserver.tools.cmdline import utilityMain
 
 from twext.python.log import StandardIOObserver
+from pycalendar.datetime import PyCalendarDateTime
 
 
 def usage(e=None):
@@ -60,6 +57,25 @@
         sys.exit(0)
 
 
+class RunnerService(WorkerService):
+    """
+    A wrapper around Runner which uses utilityMain to get the store
+    """
+
+    commands = None
+
+    @inlineCallbacks
+    def doWork(self):
+        """
+        Create/run a Runner to execute the commands
+        """
+        rootResource = self.rootResource()
+        directory = rootResource.getDirectory()
+        runner = Runner(rootResource, directory, self._store, self.commands)
+        if runner.validate():
+            yield runner.run()
+
+
 def main():
 
     try:
@@ -92,40 +108,7 @@
         else:
             raise NotImplementedError(opt)
 
-    try:
-        loadConfig(configFileName)
 
-        # Create the DataRoot directory before shedding privileges
-        if config.DataRoot.startswith(config.ServerRoot + os.sep):
-            checkDirectory(
-                config.DataRoot,
-                "Data root",
-                access=os.W_OK,
-                create=(0750, config.UserName, config.GroupName),
-            )
-
-        # Shed privileges
-        if config.UserName and config.GroupName and os.getuid() == 0:
-            uid = getpwnam(config.UserName).pw_uid
-            gid = getgrnam(config.GroupName).gr_gid
-            switchUID(uid, uid, gid)
-
-        os.umask(config.umask)
-
-        # Configure memcached client settings prior to setting up resource
-        # hierarchy (in getDirectory)
-        setupMemcached(config)
-
-        try:
-            config.directory = getDirectory()
-        except DirectoryError, e:
-            respondWithError(str(e))
-            return
-
-    except ConfigurationError, e:
-        respondWithError(str(e))
-        return
-
     #
     # Read commands from stdin
     #
@@ -143,17 +126,10 @@
     else:
         commands = [plist]
 
-    runner = Runner(config.directory, commands)
-    if not runner.validate():
-        return
+    RunnerService.commands = commands
+    utilityMain(configFileName, RunnerService)
 
-    #
-    # Start the reactor
-    #
-    reactor.callLater(0, runner.run)
-    reactor.run()
 
-
 attrMap = {
     'GeneratedUID' : { 'attr' : 'guid', },
     'RealName' : { 'attr' : 'fullName', },
@@ -171,12 +147,15 @@
     'Country' : { 'extras' : True, 'attr' : 'country', },
     'Phone' : { 'extras' : True, 'attr' : 'phone', },
     'AutoSchedule' : { 'attr' : 'autoSchedule', },
+    'AutoAcceptGroup' : { 'attr' : 'autoAcceptGroup', },
 }
 
 class Runner(object):
 
-    def __init__(self, directory, commands):
+    def __init__(self, root, directory, store, commands):
+        self.root = root
         self.dir = directory
+        self.store = store
         self.commands = commands
 
     def validate(self):
@@ -207,9 +186,6 @@
             respondWithError("Command failed: '%s'" % (str(e),))
             raise
 
-        finally:
-            reactor.stop()
-
     # Locations
 
     def command_getLocationList(self, command):
@@ -217,7 +193,6 @@
 
     @inlineCallbacks
     def command_createLocation(self, command):
-
         kwargs = {}
         for key, info in attrMap.iteritems():
             if command.has_key(key):
@@ -232,7 +207,7 @@
         readProxies = command.get("ReadProxies", None)
         writeProxies = command.get("WriteProxies", None)
         principal = principalForPrincipalID(record.guid, directory=self.dir)
-        (yield setProxies(principal, readProxies, writeProxies))
+        (yield setProxies(principal, readProxies, writeProxies, directory=self.dir))
 
         respondWithRecordsOfType(self.dir, command, "locations")
 
@@ -249,7 +224,9 @@
             respondWithError("Principal not found: %s" % (guid,))
             return
         recordDict['AutoSchedule'] = principal.getAutoSchedule()
-        recordDict['ReadProxies'], recordDict['WriteProxies'] = (yield getProxies(principal))
+        recordDict['AutoAcceptGroup'] = principal.getAutoAcceptGroup()
+        recordDict['ReadProxies'], recordDict['WriteProxies'] = (yield getProxies(principal,
+            directory=self.dir))
         respond(command, recordDict)
 
     command_getResourceAttributes = command_getLocationAttributes
@@ -262,6 +239,7 @@
         principal = principalForPrincipalID(command['GeneratedUID'],
             directory=self.dir)
         (yield principal.setAutoSchedule(command.get('AutoSchedule', False)))
+        (yield principal.setAutoAcceptGroup(command.get('AutoAcceptGroup', "")))
 
         kwargs = {}
         for key, info in attrMap.iteritems():
@@ -276,7 +254,7 @@
         readProxies = command.get("ReadProxies", None)
         writeProxies = command.get("WriteProxies", None)
         principal = principalForPrincipalID(record.guid, directory=self.dir)
-        (yield setProxies(principal, readProxies, writeProxies))
+        (yield setProxies(principal, readProxies, writeProxies, directory=self.dir))
 
         yield self.command_getLocationAttributes(command)
 
@@ -313,7 +291,7 @@
         readProxies = command.get("ReadProxies", None)
         writeProxies = command.get("WriteProxies", None)
         principal = principalForPrincipalID(record.guid, directory=self.dir)
-        (yield setProxies(principal, readProxies, writeProxies))
+        (yield setProxies(principal, readProxies, writeProxies, directory=self.dir))
 
         respondWithRecordsOfType(self.dir, command, "resources")
 
@@ -325,6 +303,7 @@
         principal = principalForPrincipalID(command['GeneratedUID'],
             directory=self.dir)
         (yield principal.setAutoSchedule(command.get('AutoSchedule', False)))
+        (yield principal.setAutoAcceptGroup(command.get('AutoAcceptGroup', "")))
 
         kwargs = {}
         for key, info in attrMap.iteritems():
@@ -339,7 +318,7 @@
         readProxies = command.get("ReadProxies", None)
         writeProxies = command.get("WriteProxies", None)
         principal = principalForPrincipalID(record.guid, directory=self.dir)
-        (yield setProxies(principal, readProxies, writeProxies))
+        (yield setProxies(principal, readProxies, writeProxies, directory=self.dir))
 
         yield self.command_getResourceAttributes(command)
 
@@ -452,6 +431,23 @@
         (yield respondWithProxies(self.dir, command, principal, "read"))
 
 
+    @inlineCallbacks
+    def command_purgeOldEvents(self, command):
+        """
+        Convert RetainDays from the command dictionary into a date, then purge
+        events older than that date.
+
+        @param command: the dictionary parsed from the plist read from stdin
+        @type command: C{dict}
+        """
+        retainDays = command.get("RetainDays", DEFAULT_RETAIN_DAYS)
+        cutoff = PyCalendarDateTime.getToday()
+        cutoff.setDateOnly(False)
+        cutoff.offsetDay(-retainDays)
+        eventCount = (yield purgeOldEvents(self.store, self.dir, self.root, cutoff, DEFAULT_BATCH_SIZE))
+        respond(command, {'EventsRemoved' : eventCount, "RetainDays" : retainDays})
+
+
 @inlineCallbacks
 def respondWithProxies(directory, command, principal, proxyType):
     proxies = []
@@ -460,7 +456,7 @@
         membersProperty = (yield subPrincipal.readProperty(davxml.GroupMemberSet, None))
         if membersProperty.children:
             for member in membersProperty.children:
-                proxyPrincipal = principalForPrincipalID(str(member))
+                proxyPrincipal = principalForPrincipalID(str(member), directory=directory)
                 proxies.append(proxyPrincipal.record.guid)
 
     respond(command, {

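The new command_purgeOldEvents above turns RetainDays into a cutoff date and purges everything older. The conversion can be exercised on its own; a sketch using the same PyCalendarDateTime calls (the DEFAULT_RETAIN_DAYS value of 365 is assumed, consistent with the tests later in this commit):

    from pycalendar.datetime import PyCalendarDateTime

    DEFAULT_RETAIN_DAYS = 365  # assumed default; matches test_purgeOldEvents

    def cutoffForCommand(command):
        # A missing RetainDays key falls back to the default retention window.
        retainDays = command.get("RetainDays", DEFAULT_RETAIN_DAYS)
        cutoff = PyCalendarDateTime.getToday()
        cutoff.setDateOnly(False)      # promote the date to a date-time
        cutoff.offsetDay(-retainDays)  # step back retainDays days
        return retainDays, cutoff
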
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/principals.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/principals.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/principals.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -88,6 +88,8 @@
     print "  --get-auto-schedule: read auto-schedule state"
     print "  --set-auto-schedule-mode={default|none|accept-always|decline-always|accept-if-free|decline-if-busy|automatic}: set auto-schedule mode"
     print "  --get-auto-schedule-mode: read auto-schedule mode"
+    print "  --set-auto-accept-group=principal: set auto-accept-group"
+    print "  --get-auto-accept-group: read auto-accept-group"
     print "  --add {locations|resources} 'full name' [record name] [GUID]: add a principal"
     print "  --remove: remove a principal"
 
@@ -118,6 +120,8 @@
                 "get-auto-schedule",
                 "set-auto-schedule-mode=",
                 "get-auto-schedule-mode",
+                "set-auto-accept-group=",
+                "get-auto-accept-group",
                 "verbose",
             ],
         )
@@ -223,6 +227,18 @@
         elif opt in ("", "--get-auto-schedule-mode"):
             principalActions.append((action_getAutoScheduleMode,))
 
+        elif opt in ("", "--set-auto-accept-group"):
+            try:
+                principalForPrincipalID(arg, checkOnly=True)
+            except ValueError, e:
+                abort(e)
+
+            principalActions.append((action_setAutoAcceptGroup, arg))
+
+        elif opt in ("", "--get-auto-accept-group"):
+            principalActions.append((action_getAutoAcceptGroup,))
+
+
         else:
             raise NotImplementedError(opt)
 
@@ -768,7 +784,50 @@
         autoScheduleMode,
     )
 
+@inlineCallbacks
+def action_setAutoAcceptGroup(principal, autoAcceptGroup):
+    if principal.record.recordType == "groups":
+        print "Setting auto-accept-group for %s is not allowed." % (principal,)
 
+    elif principal.record.recordType == "users" and not config.Scheduling.Options.AutoSchedule.AllowUsers:
+        print "Setting auto-accept-group for %s is not allowed." % (principal,)
+
+    else:
+        groupPrincipal = principalForPrincipalID(autoAcceptGroup)
+        if groupPrincipal is None or groupPrincipal.record.recordType != "groups":
+            print "Invalid principal ID: %s" % (autoAcceptGroup,)
+        else:
+            print "Setting auto-accept-group to %s for %s" % (
+                prettyPrincipal(groupPrincipal),
+                prettyPrincipal(principal),
+            )
+
+            (yield updateRecord(False, config.directory,
+                principal.record.recordType,
+                guid=principal.record.guid,
+                shortNames=principal.record.shortNames,
+                fullName=principal.record.fullName,
+                autoAcceptGroup=groupPrincipal.record.guid,
+                **principal.record.extras
+            ))
+
+def action_getAutoAcceptGroup(principal):
+    autoAcceptGroup = principal.getAutoAcceptGroup()
+    if autoAcceptGroup:
+        record = config.directory.recordWithGUID(autoAcceptGroup)
+        if record is not None:
+            groupPrincipal = config.directory.principalCollection.principalForUID(record.uid)
+            if groupPrincipal is not None:
+                print "Auto-accept-group for %s is %s" % (
+                    prettyPrincipal(principal),
+                    prettyPrincipal(groupPrincipal),
+                )
+                return
+        print "Invalid auto-accept-group assigned: %s" % (autoAcceptGroup,)
+    else:
+        print "No auto-accept-group assigned to %s" % (prettyPrincipal(principal),)
+
+
 def abort(msg, status=1):
     sys.stdout.write("%s\n" % (msg,))
     try:
@@ -856,18 +915,33 @@
     matching the guid in kwargs.
     """
 
+    assignAutoSchedule = False
     if kwargs.has_key("autoSchedule"):
+        assignAutoSchedule = True
         autoSchedule = kwargs["autoSchedule"]
         del kwargs["autoSchedule"]
-    else:
+    elif create:
+        assignAutoSchedule = True
         autoSchedule = recordType in ("locations", "resources")
 
+    assignAutoScheduleMode = False
     if kwargs.has_key("autoScheduleMode"):
+        assignAutoScheduleMode = True
         autoScheduleMode = kwargs["autoScheduleMode"]
         del kwargs["autoScheduleMode"]
-    else:
+    elif create:
+        assignAutoScheduleMode = True
         autoScheduleMode = None
 
+    assignAutoAcceptGroup = False
+    if kwargs.has_key("autoAcceptGroup"):
+        assignAutoAcceptGroup = True
+        autoAcceptGroup = kwargs["autoAcceptGroup"]
+        del kwargs["autoAcceptGroup"]
+    elif create:
+        assignAutoAcceptGroup = True
+        autoAcceptGroup = None
+
     for key, value in kwargs.items():
         if isinstance(value, unicode):
             kwargs[key] = value.encode("utf-8")
@@ -890,8 +964,13 @@
 
     augmentService = directory.serviceForRecordType(recordType).augmentService
     augmentRecord = (yield augmentService.getAugmentRecord(kwargs['guid'], recordType))
-    augmentRecord.autoSchedule = autoSchedule
-    augmentRecord.autoScheduleMode = autoScheduleMode
+
+    if assignAutoSchedule:
+        augmentRecord.autoSchedule = autoSchedule
+    if assignAutoScheduleMode:
+        augmentRecord.autoScheduleMode = autoScheduleMode
+    if assignAutoAcceptGroup:
+        augmentRecord.autoAcceptGroup = autoAcceptGroup
     (yield augmentService.addAugmentRecords([augmentRecord]))
     try:
         directory.updateRecord(recordType, **kwargs)

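The updateRecord() changes above stop unconditionally overwriting augment fields: each of autoSchedule, autoScheduleMode and autoAcceptGroup is now assigned only when the caller explicitly passed it, or when creating a record (falling back to a default). A compact sketch of that pattern, using a hypothetical helper:

    def popAssignable(kwargs, key, create, default):
        # Returns (shouldAssign, value).  Assign when the key was passed
        # explicitly, or on create using the default; otherwise leave the
        # existing augment value untouched.
        if key in kwargs:
            return True, kwargs.pop(key)
        elif create:
            return True, default
        return False, None

    # assign, group = popAssignable(kwargs, "autoAcceptGroup", create, None)
    # if assign:
    #     augmentRecord.autoAcceptGroup = group
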
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/cmd.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/cmd.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/cmd.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -67,6 +67,12 @@
 
 
 class CommandsBase(object):
+    """
+    Base class for commands.
+
+    @ivar protocol: a protocol for parsing the incoming command line.
+    @type protocol: L{calendarserver.tools.shell.terminal.ShellProtocol}
+    """
     def __init__(self, protocol):
         self.protocol = protocol
 

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/terminal.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/terminal.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/terminal.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -89,7 +89,28 @@
         super(ShellOptions, self).__init__()
 
 
+
 class ShellService(Service, object):
+    """
+    A L{ShellService} collects all the information that a shell needs to run;
+    when run, it invokes the shell on stdin/stdout.
+
+    @ivar store: the calendar / addressbook store.
+    @type store: L{txdav.idav.IDataStore}
+
+    @ivar directory: the directory service, to look up principals' names
+    @type directory: L{twistedcaldav.directory.idirectory.IDirectoryService}
+
+    @ivar options: the command-line options used to create this shell service
+    @type options: L{ShellOptions}
+
+    @ivar reactor: the reactor under which this service is running
+    @type reactor: L{IReactorTCP}, L{IReactorTime}, L{IReactorThreads} etc
+
+    @ivar config: the configuration associated with this shell service.
+    @type config: L{twistedcaldav.config.Config}
+    """
+
     def __init__(self, store, directory, options, reactor, config):
         super(ShellService, self).__init__()
         self.store      = store
@@ -100,6 +121,7 @@
         self.terminalFD = None
         self.protocol   = None
 
+
     def startService(self):
         """
         Start the service.
@@ -114,6 +136,7 @@
         self.protocol = ServerProtocol(lambda: ShellProtocol(self))
         StandardIO(self.protocol)
 
+
     def stopService(self):
         """
         Stop the service.
@@ -123,9 +146,13 @@
         os.write(self.terminalFD, "\r\x1bc\r")
 
 
+
 class ShellProtocol(ReceiveLineProtocol):
     """
     Data store shell protocol.
+
+    @ivar service: a service representing the running shell
+    @type service: L{ShellService}
     """
 
     # FIXME:

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/test/test_vfs.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/test/test_vfs.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/test/test_vfs.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -15,14 +15,18 @@
 # limitations under the License.
 ##
 
-import twisted.trial.unittest 
-from twisted.internet.defer import succeed
+from twisted.trial.unittest import TestCase
+from twisted.internet.defer import succeed, inlineCallbacks
 
 from calendarserver.tools.shell.vfs import ListEntry
 from calendarserver.tools.shell.vfs import File, Folder
+from calendarserver.tools.shell.vfs import UIDsFolder
+from calendarserver.tools.shell.terminal import ShellService
+from twistedcaldav.directory.test.test_xmlfile import XMLFileBase
+from txdav.common.datastore.test.util import buildStore
 
 
-class TestListEntry(twisted.trial.unittest.TestCase):
+class TestListEntry(TestCase):
     def test_toString(self):
         self.assertEquals(ListEntry(None, File  , "thingo"           ).toString(), "thingo" )
         self.assertEquals(ListEntry(None, File  , "thingo", Foo="foo").toString(), "thingo" )
@@ -100,3 +104,58 @@
             def list(self): return succeed(())
             list.fieldNames = ()
         self.assertEquals(fields(MyFile), ("thingo",))
+
+
+
+class DirectoryStubber(XMLFileBase):
+    """
+    Object which creates a stub L{IDirectoryService}.
+    """
+    def __init__(self, testCase):
+        self.testCase = testCase
+
+    def mktemp(self):
+        return self.testCase.mktemp()
+
+
+
+class UIDsFolderTests(TestCase):
+    """
+    L{UIDsFolder} contains all principals and is keyed by UID.
+    """
+
+    @inlineCallbacks
+    def setUp(self):
+        """
+        Create a L{UIDsFolder}.
+        """
+        self.svc = ShellService(store=(yield buildStore(self, None)),
+                                directory=DirectoryStubber(self).service(),
+                                options=None, reactor=None, config=None)
+        self.folder = UIDsFolder(self.svc, ())
+
+
+    @inlineCallbacks
+    def test_list(self):
+        """
+        L{UIDsFolder.list} returns a L{Deferred} firing an iterable of
+        L{ListEntry} objects, reflecting the directory information for all
+        calendars and addressbooks created in the store.
+        """
+        txn = self.svc.store.newTransaction()
+        wsanchez = "6423F94A-6B76-4A3A-815B-D52CFD77935D"
+        dreid = "5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1"
+        yield txn.calendarHomeWithUID(wsanchez, create=True)
+        yield txn.addressbookHomeWithUID(dreid, create=True)
+        yield txn.commit()
+        listing = list((yield self.folder.list()))
+        self.assertEquals(
+            [x.fields for x in listing],
+            [{"Record Type": "users", "Short Name": "wsanchez",
+              "Full Name": "Wilfredo Sanchez", "Name": wsanchez},
+              {"Record Type": "users", "Short Name": "dreid",
+              "Full Name": "David Reid", "Name": dreid}]
+        )
+
+
+

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/vfs.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/vfs.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/shell/vfs.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -1,3 +1,4 @@
+# -*- test-case-name: calendarserver.tools.shell.test.test_vfs -*-
 ##
 # Copyright (c) 2011-2012 Apple Inc. All rights reserved.
 #
@@ -56,6 +57,7 @@
     """
     Information about a C{File} as returned by C{File.list()}.
     """
+
     def __init__(self, parent, Class, Name, **fields):
         self.parent    = parent # The class implementing list()
         self.fileClass = Class
@@ -64,9 +66,11 @@
 
         fields["Name"] = Name
 
+
     def __str__(self):
         return self.toString()
 
+
     def __repr__(self):
         fields = self.fields.copy()
         del fields["Name"]
@@ -83,15 +87,18 @@
             fields,
         )
 
+
     def isFolder(self):
         return issubclass(self.fileClass, Folder)
 
+
     def toString(self):
         if self.isFolder():
             return "%s/" % (self.fileName,)
         else:
             return self.fileName
 
+
     @property
     def fieldNames(self):
         if not hasattr(self, "_fieldNames"):
@@ -101,10 +108,12 @@
                 else:
                     self._fieldNames = ("Name",) + tuple(self.parent.list.fieldNames)
             else:
-                self._fieldNames = ["Name"] + sorted(n for n in self.fields if n != "Name")
+                self._fieldNames = ["Name"] + sorted(n for n in self.fields
+                                                     if n != "Name")
 
         return self._fieldNames
 
+
     def toFields(self):
         try:
             return tuple(self.fields[fieldName] for fieldName in self.fieldNames)
@@ -115,6 +124,7 @@
             )
 
 
+
 class File(object):
     """
     Object in virtual data hierarchy.
@@ -217,7 +227,8 @@
     """
     Root of virtual data hierarchy.
 
-    Hierarchy:
+    Hierarchy::
+
       /                    RootFolder
         uids/              UIDsFolder
           <uid>/           PrincipalHomeFolder
@@ -262,9 +273,8 @@
         # FIXME: Merge in directory UIDs also?
         # FIXME: Add directory info (eg. name) to list entry
 
-        def addResult(uid):
-            if uid in results:
-                return
+        def addResult(ignoredTxn, home):
+            uid = home.uid()
 
             record = self.service.directory.recordWithUID(uid)
             if record:
@@ -277,22 +287,12 @@
                 info = {}
 
             results[uid] = ListEntry(self, PrincipalHomeFolder, uid, **info)
-
-        txn = self.service.store.newTransaction()
-        try:
-            for home in (yield txn.calendarHomes()):
-                addResult(home.uid())
-            for home in (yield txn.addressbookHomes()):
-                addResult(home.uid())
-        finally:
-            (yield txn.abort())
-
+        yield self.service.store.withEachCalendarHomeDo(addResult)
+        yield self.service.store.withEachAddressbookHomeDo(addResult)
         returnValue(results.itervalues())
 
-        list.fieldNames = ("Record Name", "Short Name", "Full Name")
 
 
-
 class RecordFolder(Folder):
     def _recordForName(self, name):
         recordTypeAttr = "recordType_" + self.recordType

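The vfs.py hunk above replaces hand-rolled transaction management with the store's withEachCalendarHomeDo / withEachAddressbookHomeDo iteration, whose callback receives the transaction and each home in turn. A hedged sketch of consuming those APIs outside the shell:

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def collectHomeUIDs(store):
        uids = set()

        def addResult(ignoredTxn, home):
            # Invoked once per home; the transaction argument is unused here.
            uids.add(home.uid())

        yield store.withEachCalendarHomeDo(addResult)
        yield store.withEachAddressbookHomeDo(addResult)
        returnValue(uids)
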
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/test/gateway/caldavd.plist
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/test/gateway/caldavd.plist	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/test/gateway/caldavd.plist	2012-11-28 17:35:29 UTC (rev 10098)
@@ -91,11 +91,11 @@
 
     <!-- Log root -->
     <key>LogRoot</key>
-    <string>/var/log/caldavd</string>
+    <string>Logs</string>
 
     <!-- Run root -->
     <key>RunRoot</key>
-    <string>/var/run</string>
+    <string>Logs/state</string>
 
     <!-- Child aliases -->
     <key>Aliases</key>
@@ -279,7 +279,7 @@
      -->
 
 	<key>ProxyLoadFromFile</key>
-    <string>conf/auth/proxies-test.xml</string>
+    <string></string>
 
     <!--
         Special principals

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/test/gateway/users-groups.xml
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/test/gateway/users-groups.xml	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/test/gateway/users-groups.xml	2012-11-28 17:35:29 UTC (rev 10098)
@@ -37,4 +37,13 @@
       <member type="users">user02</member>
     </members>
   </group>
+  <group>
+    <uid>testgroup2</uid>
+    <guid>f5a6142c-4189-4e9e-90b0-9cd0268b314b</guid>
+    <password>test</password>
+    <name>Group 02</name>
+    <members>
+      <member type="users">user01</member>
+    </members>
+  </group>
 </accounts>

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/test/test_gateway.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/test/test_gateway.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/calendarserver/tools/test/test_gateway.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -121,6 +121,7 @@
         self.assertEquals(results["result"]["RealName"], "Created Location 01 %s" % unichr(208))
         self.assertEquals(results["result"]["Comment"], "Test Comment")
         self.assertEquals(results["result"]["AutoSchedule"], True)
+        self.assertEquals(results["result"]["AutoAcceptGroup"], "E5A6142C-4189-4E9E-90B0-9CD0268B314B")
         self.assertEquals(set(results["result"]["ReadProxies"]), set(['user03', 'user04']))
         self.assertEquals(set(results["result"]["WriteProxies"]), set(['user05', 'user06']))
 
@@ -202,9 +203,11 @@
         self.assertEquals(record.extras["country"], "Updated USA")
         self.assertEquals(record.extras["phone"], "(408) 555-1213")
         self.assertEquals(record.autoSchedule, True)
+        self.assertEquals(record.autoAcceptGroup, "F5A6142C-4189-4E9E-90B0-9CD0268B314B")
 
         results = yield self.runCommand(command_getLocationAttributes)
         self.assertEquals(results["result"]["AutoSchedule"], True)
+        self.assertEquals(results["result"]["AutoAcceptGroup"], "F5A6142C-4189-4E9E-90B0-9CD0268B314B")
         self.assertEquals(set(results["result"]["ReadProxies"]), set(['user03']))
         self.assertEquals(set(results["result"]["WriteProxies"]), set(['user05', 'user06', 'user07']))
 
@@ -274,6 +277,13 @@
         results = yield self.runCommand(command_removeWriteProxy)
         self.assertEquals(len(results["result"]["Proxies"]), 0)
 
+    @inlineCallbacks
+    def test_purgeOldEvents(self):
+        results = yield self.runCommand(command_purgeOldEvents)
+        self.assertEquals(results["result"]["EventsRemoved"], 0)
+        self.assertEquals(results["result"]["RetainDays"], 42)
+        results = yield self.runCommand(command_purgeOldEventsNoDays)
+        self.assertEquals(results["result"]["RetainDays"], 365)
 
 
 command_addReadProxy = """<?xml version="1.0" encoding="UTF-8"?>
@@ -312,6 +322,8 @@
         <string>createLocation</string>
         <key>AutoSchedule</key>
         <true/>
+        <key>AutoAcceptGroup</key>
+        <string>E5A6142C-4189-4E9E-90B0-9CD0268B314B</string>
         <key>GeneratedUID</key>
         <string>836B1B66-2E9A-4F46-8B1C-3DD6772C20B2</string>
         <key>RealName</key>
@@ -495,6 +507,8 @@
         <string>setLocationAttributes</string>
         <key>AutoSchedule</key>
         <true/>
+        <key>AutoAcceptGroup</key>
+        <string>F5A6142C-4189-4E9E-90B0-9CD0268B314B</string>
         <key>GeneratedUID</key>
         <string>836B1B66-2E9A-4F46-8B1C-3DD6772C20B2</string>
         <key>RealName</key>
@@ -582,3 +596,25 @@
 </dict>
 </plist>
 """
+
+command_purgeOldEvents = """<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+        <key>command</key>
+        <string>purgeOldEvents</string>
+        <key>RetainDays</key>
+        <integer>42</integer>
+</dict>
+</plist>
+"""
+
+command_purgeOldEventsNoDays = """<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+        <key>command</key>
+        <string>purgeOldEvents</string>
+</dict>
+</plist>
+"""

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/conf/caldavd-apple.plist
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/conf/caldavd-apple.plist	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/conf/caldavd-apple.plist	2012-11-28 17:35:29 UTC (rev 10098)
@@ -96,9 +96,11 @@
 
     <!-- Database connection -->
     <key>DBType</key>
-    <string>postgres</string>
+    <string></string>
     <key>DSN</key>
-    <string>/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::</string>
+    <string></string>
+    <key>DBImportFile</key>
+    <string>/Library/Server/Calendar and Contacts/DataDump.sql</string>
 
     <!-- Data root -->
     <key>DataRoot</key>

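The plist change above empties DBType and DSN (so the server spawns its own Postgres rather than using the shared instance) and points DBImportFile at the SQL dump written during migration; per the migrator comments later in this commit, that dump is loaded when the server next starts. A sketch of inspecting those keys, using the config path from calendarcommonextra.py below:

    from plistlib import readPlist

    CALDAVD_PLIST = "/Library/Server/Calendar and Contacts/Config/caldavd.plist"

    plist = readPlist(CALDAVD_PLIST)
    # Empty DBType/DSN select the self-managed Postgres; DBImportFile names
    # the dump to load into the fresh database (assumed startup behavior).
    print plist.get("DBType"), plist.get("DSN"), plist.get("DBImportFile")
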
Deleted: CalendarServer/branches/users/cdaboo/managed-attachments/contrib/create_caldavd_db.sh
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/contrib/create_caldavd_db.sh	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/contrib/create_caldavd_db.sh	2012-11-28 17:35:29 UTC (rev 10098)
@@ -1,5 +0,0 @@
-#!/usr/bin/env bash
-
-/Applications/Server.app/Contents/ServerRoot/usr/sbin/calendarserver_bootstrap_database
-
-exit 0

Copied: CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/calendarcommonextra.py (from rev 10097, CalendarServer/trunk/contrib/migration/calendarcommonextra.py)
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/calendarcommonextra.py	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/calendarcommonextra.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -0,0 +1,185 @@
+#!/usr/bin/env python
+#
+# CommonExtra script for calendar server.
+#
+# Copyright (c) 2012 Apple Inc.  All Rights Reserved.
+#
+# IMPORTANT NOTE:  This file is licensed only for use on Apple-labeled
+# computers and is subject to the terms and conditions of the Apple
+# Software License Agreement accompanying the package this file is a
+# part of.  You may not port this file to another platform without
+# Apple's written consent.
+
+import datetime
+import subprocess
+from plistlib import readPlist, writePlist
+
+LOG = "/Library/Logs/Migration/calendarmigrator.log"
+SERVER_APP_ROOT = "/Applications/Server.app/Contents/ServerRoot"
+CALENDAR_SERVER_ROOT = "/Library/Server/Calendar and Contacts"
+CALDAVD_PLIST = "%s/Config/caldavd.plist" % (CALENDAR_SERVER_ROOT,)
+SERVER_ADMIN = "%s/usr/sbin/serveradmin" % (SERVER_APP_ROOT,)
+CERT_ADMIN = "/Applications/Server.app/Contents/ServerRoot/usr/sbin/certadmin"
+PGDUMP = "%s/usr/bin/pg_dump" % (SERVER_APP_ROOT,)
+DROPDB = "%s/usr/bin/dropdb" % (SERVER_APP_ROOT,)
+POSTGRES_SERVICE_NAME = "postgres_server"
+PGSOCKETDIR = "/Library/Server/PostgreSQL For Server Services/Socket"
+USERNAME      = "caldav"
+DATABASENAME  = "caldav"
+DATADUMPFILENAME = "%s/DataDump.sql" % (CALENDAR_SERVER_ROOT,)
+
+def log(msg):
+    try:
+        timestamp = datetime.datetime.now().strftime("%b %d %H:%M:%S")
+        msg = "calendarcommonextra: %s %s" % (timestamp, msg)
+        print msg # so it appears in Setup.log
+        with open(LOG, 'a') as output:
+            output.write("%s\n" % (msg,)) # so it appears in our log
+    except IOError:
+        # Could not write to log
+        pass
+
+
+def startPostgres():
+    """
+    Start postgres via serveradmin
+
+    This will block until postgres is up and running
+    """
+    log("Starting %s via %s" % (POSTGRES_SERVICE_NAME, SERVER_ADMIN))
+    ret = subprocess.call([SERVER_ADMIN, "start", POSTGRES_SERVICE_NAME])
+    log("serveradmin exited with %d" % (ret,))
+
+def stopPostgres():
+    """
+    Stop postgres via serveradmin
+    """
+    log("Stopping %s via %s" % (POSTGRES_SERVICE_NAME, SERVER_ADMIN))
+    ret = subprocess.call([SERVER_ADMIN, "stop", POSTGRES_SERVICE_NAME])
+    log("serveradmin exited with %d" % (ret,))
+
+
+def dumpOldDatabase(dumpFile):
+    """
+    Use pg_dump to dump data to dumpFile
+    """
+
+    cmdArgs = [
+        PGDUMP,
+        "-h", PGSOCKETDIR,
+        "--username=%s" % (USERNAME,),
+        "--inserts",
+        "--no-privileges",
+        "--file=%s" % (dumpFile,),
+        DATABASENAME
+    ]
+    try:
+        log("Dumping data to %s" % (dumpFile,))
+        log("Executing: %s" % (" ".join(cmdArgs)))
+        out = subprocess.check_output(cmdArgs, stderr=subprocess.STDOUT)
+        log(out)
+        return True
+    except subprocess.CalledProcessError, e:
+        log(e.output)
+        return False
+
+
+def dropOldDatabase():
+    """
+    Use dropdb to delete the caldav database from the shared postgres server
+    """
+
+    cmdArgs = [
+        DROPDB,
+        "-h", PGSOCKETDIR,
+        "--username=%s" % (USERNAME,),
+        DATABASENAME
+    ]
+    try:
+        log("\nDropping %s database" % (DATABASENAME,))
+        log("Executing: %s" % (" ".join(cmdArgs)))
+        out = subprocess.check_output(cmdArgs, stderr=subprocess.STDOUT)
+        log(out)
+        return True
+    except subprocess.CalledProcessError, e:
+        log(e.output)
+        return False
+
+
+def getDefaultCert():
+    """
+    Ask certadmin for default cert
+    @returns: path to default certificate, or empty string if no default
+    @rtype: C{str}
+    """
+    child = subprocess.Popen(
+        args=[CERT_ADMIN, "--default-certificate-path"],
+        stdout=subprocess.PIPE,
+        stderr=subprocess.PIPE,
+    )
+    output, error = child.communicate()
+    if child.returncode:
+        log("Error looking up default certificate (%d): %s" % (child.returncode, error))
+        return ""
+    else:
+        certPath = output.strip()
+        log("Default certificate is: %s" % (certPath,))
+        return certPath
+
+def updateSettings(settings, otherCert):
+    """
+    Replace SSL settings based on otherCert path
+    """
+    basePath = otherCert[:-len("cert.pem")]
+    log("Base path is %s" % (basePath,))
+
+    log("Setting SSLCertificate to %s" % (otherCert,))
+    settings["SSLCertificate"] = otherCert
+
+    otherChain = basePath + "chain.pem"
+    log("Setting SSLAuthorityChain to %s" % (otherChain,))
+    settings["SSLAuthorityChain"] = otherChain
+
+    otherKey = basePath + "key.pem"
+    log("Setting SSLPrivateKey to %s" % (otherKey,))
+    settings["SSLPrivateKey"] = otherKey
+
+    settings["EnableSSL"] = True
+    settings["RedirectHTTPToHTTPS"] = True
+
+def setCert(plistPath, otherCert):
+    """
+    Replace SSL settings in plist at plistPath based on otherCert path
+    """
+    log("Reading plist %s" % (plistPath,))
+    plist = readPlist(plistPath)
+    log("Read in plist %s" % (plistPath,))
+
+    updateSettings(plist, otherCert)
+
+    log("Writing plist %s" % (plistPath,))
+    writePlist(plist, plistPath)
+
+def isSSLEnabled(plistPath):
+    """
+    Examine plist for EnableSSL
+    """
+    log("Reading plist %s" % (plistPath,))
+    plist = readPlist(plistPath)
+    return plist.get("EnableSSL", False)
+
+def main():
+    startPostgres()
+    if dumpOldDatabase(DATADUMPFILENAME):
+        dropOldDatabase()
+    stopPostgres()
+
+    if not isSSLEnabled(CALDAVD_PLIST):
+        defaultCertPath = getDefaultCert()
+        log("Default cert path: %s" % (defaultCertPath,))
+        if defaultCertPath:
+            setCert(CALDAVD_PLIST, defaultCertPath)
+
+
+if __name__ == "__main__":
+    main()

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/calendarmigrator.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/calendarmigrator.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/calendarmigrator.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -169,8 +169,13 @@
             # Trigger migration of locations and resources from OD
             triggerResourceMigration(newServerRoot)
 
-            setRunState(options, enableCalDAV, enableCardDAV)
+            # TODO: instead of starting now, leave breadcrumbs for
+            # the commonextra to start the service, so that data can
+            # be dumped from the old Postgres to a file which will
+            # be executed by calendar server when it next starts up.
 
+            # setRunState(options, enableCalDAV, enableCardDAV)
+
     else:
         log("ERROR: --sourceRoot and --sourceVersion must be specified")
         sys.exit(1)
@@ -479,12 +484,28 @@
     # If SSL is enabled, redirect HTTP to HTTPS.
     combined["RedirectHTTPToHTTPS"] = enableSSL
 
-    # New DSN value for server-specific Postgres
-    combined["DSN"] = "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::"
+    # New DBType value indicating we launch our own Postgres
+    combined["DBType"] = ""
 
+    # No DSN value since we launch our own Postgres
+    combined["DSN"] = ""
+
+    # Path to SQL file to import previous data from
+    combined["DBImportFile"] = "/Library/Server/Calendar and Contacts/DataDump.sql"
+
     # ConfigRoot is now always "Config"
     combined["ConfigRoot"] = "Config"
 
+    # Remove RunRoot and PIDFile keys so they use the new defaults
+    try:
+        del combined["RunRoot"]
+    except:
+        pass
+    try:
+        del combined["PIDFile"]
+    except:
+        pass
+
     return adminChanges
 
 

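The RunRoot and PIDFile removal above wraps del in try/except so that a
missing key is not an error. For reference, a sketch of the same cleanup
using dict.pop with a default, which is equivalent for a plain dict such as
"combined" (sample values are illustrative):

    # pop() with a default returns None instead of raising KeyError when
    # the key is absent, so no try/except is needed.
    combined = {"RunRoot": "/var/run/caldavd", "PIDFile": "caldavd.pid"}
    combined.pop("RunRoot", None)
    combined.pop("PIDFile", None)
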
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/calendarpromotion.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/calendarpromotion.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/calendarpromotion.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -27,8 +27,9 @@
 
 def updatePlist(plistData):
     """
-    Update the passed-in plist data with new values for disabling the XMPPNotifier, and
-    to set the DSN to use the server-specific Postgres.
+    Update the passed-in plist data with new values for disabling the XMPPNotifier,
+    to set DBType to the empty string (indicating we'll be starting our own Postgres server),
+    and to specify the new location for ConfigRoot ("Config" directory beneath ServerRoot).
 
     @param plistData: the plist data to update in place
     @type plistData: C{dict}
@@ -38,9 +39,22 @@
             plistData["Notifications"]["Services"]["XMPPNotifier"]["Enabled"] = False
     except KeyError:
         pass
-    plistData["DSN"] = "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::"
+    plistData["DBType"] = ""
+    plistData["DSN"] = ""
+    plistData["ConfigRoot"] = "Config"
+    plistData["DBImportFile"] = "/Library/Server/Calendar and Contacts/DataDump.sql"
+    # Remove RunRoot and PIDFile keys so they use the new defaults
+    try:
+        del plistData["RunRoot"]
+    except:
+        pass
+    try:
+        del plistData["PIDFile"]
+    except:
+        pass
 
 
+
 def main():
 
     try:

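Both the promotion script and the new common-extra script follow the same
read-modify-write cycle over caldavd.plist, using the readPlist/writePlist
calls provided by the standard library's plistlib. A hedged sketch of that
cycle (the plist path is illustrative; the values mirror the new defaults
set above):

    from plistlib import readPlist, writePlist

    path = "/Library/Server/Calendar and Contacts/Config/caldavd.plist"
    plist = readPlist(path)
    plist["DBType"] = ""  # empty DBType: the server launches its own Postgres
    plist["DSN"] = ""     # no DSN needed for the built-in Postgres
    plist["ConfigRoot"] = "Config"
    plist["DBImportFile"] = "/Library/Server/Calendar and Contacts/DataDump.sql"
    plist.pop("RunRoot", None)  # fall back to the new default locations
    plist.pop("PIDFile", None)
    writePlist(plist, path)
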
Copied: CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/test/test_commonextra.py (from rev 10097, CalendarServer/trunk/contrib/migration/test/test_commonextra.py)
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/test/test_commonextra.py	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/test/test_commonextra.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -0,0 +1,44 @@
+##
+# Copyright (c) 2012 Apple Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+##
+
+import twistedcaldav.test.util
+from contrib.migration.calendarcommonextra import updateSettings
+
+class CommonExtraTests(twistedcaldav.test.util.TestCase):
+    """
+    Calendar Server CommonExtra Tests
+    """
+
+    def test_updateSettings(self):
+        """
+        Verify SSL values are updated
+        """
+
+        # suppress prints
+        from contrib.migration import calendarcommonextra
+        self.patch(calendarcommonextra, "log", lambda x : x)
+
+        orig = {
+        }
+        expected = {
+            'EnableSSL': True,
+            'RedirectHTTPToHTTPS': True,
+            'SSLAuthorityChain': '/test/pchain.pem',
+            'SSLCertificate': '/test/path.cert',
+            'SSLPrivateKey': '/test/pkey.pem',
+        }
+        updateSettings(orig, "/test/path.cert")
+        self.assertEquals(orig, expected)

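The self.patch() call in this test is Twisted trial's TestCase.patch(obj,
attribute, value): it swaps the module-level log function out and
automatically restores the original when the test finishes. A minimal
sketch of the same silencing against trial directly (assuming
twistedcaldav.test.util.TestCase is a trial TestCase, as its use here
suggests):

    from twisted.trial import unittest
    from contrib.migration import calendarcommonextra

    class QuietTest(unittest.TestCase):
        def test_silenced(self):
            # Patched for the duration of this test only.
            self.patch(calendarcommonextra, "log", lambda msg: None)
            calendarcommonextra.log("not printed")
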
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/test/test_migrator.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/test/test_migrator.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/test/test_migrator.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -90,7 +90,9 @@
             "BindHTTPPorts": [8008, 8800],
             "BindSSLPorts": [8443, 8843],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : True,
             "HTTPPort": 8008,
             "RedirectHTTPToHTTPS": True,
@@ -129,7 +131,9 @@
             "BindHTTPPorts": [8008, 8800],
             "BindSSLPorts": [8443, 8843],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : False,
             "HTTPPort": 8008,
             "RedirectHTTPToHTTPS": False,
@@ -168,7 +172,9 @@
             "BindHTTPPorts": [8008, 8800],
             "BindSSLPorts": [8443, 8843],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : True,
             "HTTPPort": 8008,
             "RedirectHTTPToHTTPS": True,
@@ -207,7 +213,9 @@
             "BindHTTPPorts": [8008, 8800],
             "BindSSLPorts": [8443, 8843],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : True,
             "HTTPPort": 8008,
             "RedirectHTTPToHTTPS": True,
@@ -246,7 +254,9 @@
             "BindHTTPPorts": [1111, 2222, 4444, 5555, 7777, 8888],
             "BindSSLPorts": [3333, 6666, 9999, 11111],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : True,
             "HTTPPort": 8888,
             "RedirectHTTPToHTTPS": True,
@@ -282,7 +292,9 @@
             "BindHTTPPorts": [8008, 8800],
             "BindSSLPorts": [8443, 8843],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : False,
             "HTTPPort": 8008,
             "RedirectHTTPToHTTPS": False,
@@ -313,7 +325,9 @@
             "BindHTTPPorts": [8008, 8800],
             "BindSSLPorts": [8443, 8843],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : True,
             "HTTPPort": 8008,
             "RedirectHTTPToHTTPS": True,
@@ -335,7 +349,9 @@
             "BindHTTPPorts": [8008, 8800],
             "BindSSLPorts": [8443, 8843],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : False,
             "HTTPPort": 8008,
             "RedirectHTTPToHTTPS": False,
@@ -383,7 +399,9 @@
             "BindHTTPPorts": [8008, 8800],
             "BindSSLPorts": [8443, 8843],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : False,
             "HTTPPort": 8008,
             "RedirectHTTPToHTTPS": False,
@@ -423,7 +441,9 @@
             "BindHTTPPorts": [8008, 8800],
             "BindSSLPorts": [8443, 8843],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : False,
             "HTTPPort": 8008,
             "RedirectHTTPToHTTPS": False,
@@ -476,7 +496,9 @@
             "BindHTTPPorts": [8008, 8800],
             "BindSSLPorts": [8443, 8843],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : False,
             "HTTPPort": 8008,
             "RedirectHTTPToHTTPS": False,
@@ -518,7 +540,9 @@
             "BindHTTPPorts": [8008, 8800],
             "BindSSLPorts": [8443, 8843],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : False,
             "HTTPPort": 8008,
             "RedirectHTTPToHTTPS": False,
@@ -560,7 +584,9 @@
             "BindHTTPPorts": [8008, 8800],
             "BindSSLPorts": [8443, 8843],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : False,
             "HTTPPort": 8008,
             "RedirectHTTPToHTTPS": False,
@@ -596,7 +622,9 @@
             "BindHTTPPorts": [8008, 8800],
             "BindSSLPorts": [8443, 8843],
             "ConfigRoot" : "Config",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DSN" : "",
+            "DBType" : "",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
             "EnableSSL" : False,
             "HTTPPort": 8008,
             "RedirectHTTPToHTTPS": False,

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/test/test_promotion.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/test/test_promotion.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/contrib/migration/test/test_promotion.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -29,10 +29,15 @@
 
         orig = {
             "ignored" : "ignored",
+            "RunRoot" : "xyzzy",
+            "PIDFile" : "plugh",
         }
         expected = {
             "ignored" : "ignored",
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
+            "DBType" : "",
+            "DSN" : "",
+            "ConfigRoot" : "Config",
         }
         updatePlist(orig)
         self.assertEquals(orig, expected)
@@ -44,7 +49,9 @@
                         "Enabled" : True
                     }
                 }
-            }
+            },
+            "ConfigRoot" : "/etc/caldavd",
+
         }
         expected = {
             "Notifications" : {
@@ -54,7 +61,10 @@
                     }
                 }
             },
-            "DSN" : "/Library/Server/PostgreSQL For Server Services/Socket:caldav:caldav:::",
+            "DBImportFile" : "/Library/Server/Calendar and Contacts/DataDump.sql",
+            "DBType" : "",
+            "DSN" : "",
+            "ConfigRoot" : "Config",
         }
         updatePlist(orig)
         self.assertEquals(orig, expected)

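The second test above pins down an important detail: an existing absolute
ConfigRoot ("/etc/caldavd") is overwritten with the new relative "Config"
value, while unrelated keys pass through untouched. A compressed sketch of
that behavior (assuming the same import the tests use):

    from contrib.migration.calendarpromotion import updatePlist

    d = {"ignored": "ignored", "ConfigRoot": "/etc/caldavd"}
    updatePlist(d)
    assert d["ConfigRoot"] == "Config"  # old absolute path replaced
    assert d["ignored"] == "ignored"    # unrelated keys untouched
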
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/doc/Admin/MultiServerDeployment.rst
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/doc/Admin/MultiServerDeployment.rst	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/doc/Admin/MultiServerDeployment.rst	2012-11-28 17:35:29 UTC (rev 10098)
@@ -18,7 +18,7 @@
 
 * `Shared Storage for Attachments`_: AttachmentsRoot should point to storage shared across all servers, e.g. an NFS mount. Used for file attachments to calendar events.
 
-* `General Advise`_: *No one wants advice - only corroboration.*  --John Steinbeck
+* `General Advice`_: *No one wants advice - only corroboration.*  --John Steinbeck
 
 ---------------------
 Database Connectivity
@@ -170,7 +170,7 @@
 Set the caldavd.plist key AttachmentsRoot to a filesystem directory that is shared and writable by all Calendar Server machines, for example an NFS export. This will be used to store file attachments that users may attach to calendar events.
 
 -------------------
-General Advise
+General Advice
 -------------------
 
 * Ensure caldavd.plist is identical on all Calendar Server hosts. This is not strictly required, but recommended to keep things as predictable as possible. Since you already have shared storage for AttachmentsRoot, use that to host the 'conf' directory for all servers as well; this way you don't need to push config changes out to the servers.

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/doc/Extensions/caldav-proxy.txt
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/doc/Extensions/caldav-proxy.txt	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/doc/Extensions/caldav-proxy.txt	2012-11-28 17:35:29 UTC (rev 10098)
@@ -2,12 +2,11 @@
 
 
 Calendar Server Extension                                       C. Daboo
-                                                          Apple Computer
-                                                             May 3, 2007
+                                                              Apple Inc.
+                                                       November 13, 2012
 
 
               Calendar User Proxy Functionality in CalDAV
-                           caldav-cu-proxy-02
 
 Abstract
 
@@ -25,14 +24,18 @@
      3.2.  Client . . . . . . . . . . . . . . . . . . . . . . . . . .  3
    4.  Open Issues  . . . . . . . . . . . . . . . . . . . . . . . . .  4
    5.  New features in CalDAV . . . . . . . . . . . . . . . . . . . .  4
-     5.1.  Proxy Principal Resource . . . . . . . . . . . . . . . . .  4
-     5.2.  Privilege Provisioning . . . . . . . . . . . . . . . . . .  8
-   6.  Security Considerations  . . . . . . . . . . . . . . . . . . .  9
-   7.  IANA Considerations  . . . . . . . . . . . . . . . . . . . . .  9
-   8.  Normative References . . . . . . . . . . . . . . . . . . . . .  9
-   Appendix A.  Acknowledgments . . . . . . . . . . . . . . . . . . .  9
-   Appendix B.  Change History  . . . . . . . . . . . . . . . . . . . 10
-   Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 10
+     5.1.  Feature Discovery  . . . . . . . . . . . . . . . . . . . .  4
+     5.2.  Proxy Principal Resource . . . . . . . . . . . . . . . . .  4
+     5.3.  New Principal Properties . . . . . . . . . . . . . . . . .  8
+       5.3.1.  CS:calendar-proxy-read-for Property  . . . . . . . . .  8
+       5.3.2.  CS:calendar-proxy-write-for Property . . . . . . . . .  8
+     5.4.  Privilege Provisioning . . . . . . . . . . . . . . . . . .  9
+   6.  Security Considerations  . . . . . . . . . . . . . . . . . . . 10
+   7.  IANA Considerations  . . . . . . . . . . . . . . . . . . . . . 10
+   8.  Normative References . . . . . . . . . . . . . . . . . . . . . 10
+   Appendix A.  Acknowledgments . . . . . . . . . . . . . . . . . . . 11
+   Appendix B.  Change History  . . . . . . . . . . . . . . . . . . . 11
+   Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 12
 
 
 
@@ -49,12 +52,9 @@
 
 
 
-
-
-
 Daboo                                                           [Page 1]
 
-                              CalDAV Proxy                      May 2007
+                              CalDAV Proxy                 November 2012
 
 
 1.  Introduction
@@ -110,7 +110,7 @@
 
 Daboo                                                           [Page 2]
 
-                              CalDAV Proxy                      May 2007
+                              CalDAV Proxy                 November 2012
 
 
    this namespace are referenced in this document outside of the context
@@ -146,9 +146,13 @@
        "proxy group" inheritable read-write access.
 
    c.  Add an ACE to each of the calendar Inbox and Outbox collections
-       giving the CALDAV:schedule privilege
-       [I-D.desruisseaux-caldav-sched] to the read-write "proxy group".
+       giving the CALDAV:schedule privilege [RFC6638] to the read-write
+       "proxy group".
 
+   On each user principal resource, the server maintains two WebDAV
+   properties containing lists of other user principals for which the
+   target principal is a read-only or read-write proxy.
+
 3.2.  Client
 
    A client can see who the proxies are for the current principal by
@@ -157,24 +161,22 @@
 
    The client can edit the list of proxies for the current principal by
    editing the DAV:group-member-set property on the relevant "proxy
-   group" principal resource.
 
-   The client can find out who the current principal is a proxy for by
-   running a DAV:principal-match REPORT on the principal collection.
 
 
-
 Daboo                                                           [Page 3]
 
-                              CalDAV Proxy                      May 2007
+                              CalDAV Proxy                 November 2012
 
 
-   Alternatively, the client can find out who the current principal is a
-   proxy for by examining the DAV:group-membership property on the
-   current principal resource looking for membership in other users'
-   "proxy groups".
+   group" principal resource.
 
+   The client can find out who the current principal is a proxy for by
+   examining the CS:calendar-proxy-read-for and CS:calendar-proxy-write-
+   for properties, possibly using the DAV:expand-property REPORT to get
+   other useful properties about the principals being proxied for.
 
+
 4.  Open Issues
 
    1.  Do we want to separate read-write access to calendars vs the
@@ -194,8 +196,14 @@
 
 5.  New features in CalDAV
 
-5.1.  Proxy Principal Resource
+5.1.  Feature Discovery
 
+   A server that supports the features described in this document MUST
+   include "calendar-proxy" as a field in the DAV response header from
+   an OPTIONS request on any resource that supports these features.
+
+5.2.  Proxy Principal Resource
+
    Each "regular" principal resource that needs to allow calendar user
    proxy support MUST be a collection resource. i.e. in addition to
    including the DAV:principal XML element in the DAV:resourcetype
@@ -209,6 +217,14 @@
    resources that are groups contain the list of principals for calendar
    users who can act as a read-only or read-write proxy respectively.
 
+
+
+
+Daboo                                                           [Page 4]
+
+                              CalDAV Proxy                 November 2012
+
+
    The server MUST include the CS:calendar-proxy-read or CS:calendar-
    proxy-write XML elements in the DAV:resourcetype property of the
    child resources, respectively.  This allows clients to discover the
@@ -216,15 +232,6 @@
    current user's principal resource and requesting the DAV:resourcetype
    property be returned.  The element type declarations are:
 
-
-
-
-
-Daboo                                                           [Page 4]
-
-                              CalDAV Proxy                      May 2007
-
-
    <!ELEMENT calendar-proxy-read EMPTY>
 
    <!ELEMENT calendar-proxy-write EMPTY>
@@ -265,24 +272,25 @@
    The DAV:group-membership property on the resource /principals/users/
    red/ would be:
 
-   <DAV:group-membership>
-     <DAV:href>/principals/users/cyrus/calendar-proxy-write</DAV:href>
-   </DAV:group-membership>
 
-   If the principal "red" was also a read-only proxy for the principal
-   "wilfredo", then the DA:group-membership property on the resource
-   /principals/users/red/ would be:
 
 
 
-
 Daboo                                                           [Page 5]
 
-                              CalDAV Proxy                      May 2007
+                              CalDAV Proxy                 November 2012
 
 
    <DAV:group-membership>
      <DAV:href>/principals/users/cyrus/calendar-proxy-write</DAV:href>
+   </DAV:group-membership>
+
+   If the principal "red" was also a read-only proxy for the principal
+   "wilfredo", then the DA:group-membership property on the resource
+   /principals/users/red/ would be:
+
+   <DAV:group-membership>
+     <DAV:href>/principals/users/cyrus/calendar-proxy-write</DAV:href>
      <DAV:href>/principals/users/wilfredo/calendar-proxy-read</DAV:href>
    </DAV:group-membership>
 
@@ -324,17 +332,9 @@
 
 
 
-
-
-
-
-
-
-
-
 Daboo                                                           [Page 6]
 
-                              CalDAV Proxy                      May 2007
+                              CalDAV Proxy                 November 2012
 
 
    >> Response <<
@@ -390,11 +390,84 @@
 
 Daboo                                                           [Page 7]
 
-                              CalDAV Proxy                      May 2007
+                              CalDAV Proxy                 November 2012
 
 
-5.2.  Privilege Provisioning
+5.3.  New Principal Properties
 
+   Each "regular" principal that is a proxy for other principals MUST
+   have the CS:calendar-proxy-read-for and CS:calendar-proxy-write-for
+   WebDAV properties available on its principal resource, to allow
+   clients to quickly find the "proxy for" information.
+
+5.3.1.  CS:calendar-proxy-read-for Property
+
+   Name:  calendar-proxy-read-for
+
+   Namespace:  http://calendarserver.org/ns/
+
+   Purpose:  Lists the principals for whom the current principal is a
+      read-only proxy.
+
+   Protected:  This property MUST be protected.
+
+   PROPFIND behavior:  This property SHOULD NOT be returned by a
+      PROPFIND allprop request (as defined in Section 14.2 of
+      [RFC4918]).
+
+   Description:  This property allows a client to quickly determine the
+      principals for whom the current principal is a read-only proxy.
+      The server MUST account for any group memberships of the current
+      principal that are either direct or indirect members of a proxy
+      group. e.g., if principal "A" assigns a group "G" as a read-only
+      proxy, and principal "B" is a member of group "G", then principal
+      "B" will see principal "A" listed in the CS:calendar-proxy-read-
+      for property on their principal resource.
+
+   Definition:
+
+     <!ELEMENT calendar-proxy-read-for (DAV:href*)>
+
+5.3.2.  CS:calendar-proxy-write-for Property
+
+   Name:  calendar-proxy-write-for
+
+   Namespace:  http://calendarserver.org/ns/
+
+   Purpose:  Lists the principals for whom the current principal is a
+      read-write proxy.
+
+   Protected:  This property MUST be protected.
+
+
+
+
+
+
+Daboo                                                           [Page 8]
+
+                              CalDAV Proxy                 November 2012
+
+
+   PROPFIND behavior:  This property SHOULD NOT be returned by a
+      PROPFIND allprop request (as defined in Section 14.2 of
+      [RFC4918]).
+
+   Description:  This property allows a client to quickly determine the
+      principals for whom the current principal is a read-write proxy.
+      The server MUST account for any group memberships of the
+      current principal that are either direct or indirect members of a
+      proxy group. e.g., if principal "A" assigns a group "G" as a read-
+      write proxy, and principal "B" is a member of group "G", then
+      principal "B" will see principal "A" listed in the CS:calendar-
+      proxy-write-for property on their principal resource.
+
+   Definition:
+
+     <!ELEMENT calendar-proxy-write-for (DAV:href*)>
+
+5.4.  Privilege Provisioning
+
    In order for a calendar user proxy to be able to access the calendars
    of the user they are proxying for, the server MUST ensure that the
    privileges on the relevant calendars are set up accordingly:
@@ -407,14 +480,31 @@
 
    Additionally, the CalDAV scheduling Inbox and Outbox calendar
    collections for the user allowing proxy access MUST have the CALDAV:
-   schedule privilege [I-D.desruisseaux-caldav-sched] granted for read-
-   write calendar user proxy principals.
+   schedule privilege [RFC6638] granted for read-write calendar user
+   proxy principals.
 
    Note that with a suitable repository layout, a server may be able to
    grant the appropriate privileges on a parent collection and ensure
    that all the contained collections and resources inherit that.  For
    example, given the following repository layout:
 
+
+
+
+
+
+
+
+
+
+
+
+
+Daboo                                                           [Page 9]
+
+                              CalDAV Proxy                 November 2012
+
+
            + /
              + calendars/
                + users/
@@ -440,15 +530,6 @@
    on the resource /calendars/users/cyrus/ and all children of that
    resource:
 
-
-
-
-
-Daboo                                                           [Page 8]
-
-                              CalDAV Proxy                      May 2007
-
-
    <DAV:ace>
      <DAV:principal>
        <DAV:href>/principals/users/cyrus/calendar-proxy-write</DAV:href>
@@ -471,12 +552,15 @@
 
 8.  Normative References
 
-   [I-D.desruisseaux-caldav-sched]
-              Desruisseaux, B., "Scheduling Extensions to CalDAV",
-              draft-desruisseaux-caldav-sched-03 (work in progress),
-              January 2007.
+   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
 
-   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
+
+
+Daboo                                                          [Page 10]
+
+                              CalDAV Proxy                 November 2012
+
+
               Requirement Levels", BCP 14, RFC 2119, March 1997.
 
    [RFC2518]  Goland, Y., Whitehead, E., Faizi, A., Carter, S., and D.
@@ -484,40 +568,55 @@
               WEBDAV", RFC 2518, February 1999.
 
    [RFC3744]  Clemm, G., Reschke, J., Sedlar, E., and J. Whitehead, "Web
-              Distributed Authoring and Versioning (WebDAV) Access
-              Control Protocol", RFC 3744, May 2004.
+              Distributed Authoring and Versioning (WebDAV)
+              Access Control Protocol", RFC 3744, May 2004.
 
    [RFC4791]  Daboo, C., Desruisseaux, B., and L. Dusseault,
               "Calendaring Extensions to WebDAV (CalDAV)", RFC 4791,
               March 2007.
 
+   [RFC4918]  Dusseault, L., "HTTP Extensions for Web Distributed
+              Authoring and Versioning (WebDAV)", RFC 4918, June 2007.
 
+   [RFC6638]  Daboo, C. and B. Desruisseaux, "Scheduling Extensions to
+              CalDAV", RFC 6638, June 2012.
+
+
 Appendix A.  Acknowledgments
 
    This specification is the result of discussions between the Apple
    calendar server and client teams.
 
 
+Appendix B.  Change History
 
+   Changes in -03:
 
-Daboo                                                           [Page 9]
-
-                              CalDAV Proxy                      May 2007
+   1.  Added OPTIONS DAV header token.
 
+   2.  Added CS:calendar-proxy-read-for and CS:calendar-proxy-write-for
+       properties for faster discovery of proxy relationships.
 
-Appendix B.  Change History
+   Changes in -02:
 
-   Changes from -00:
-
    1.  Updated to RFC 4791 reference.
 
-   Changes from -00:
+   Changes in -01:
 
    1.  Added more details on actual CalDAV protocol changes.
 
    2.  Changed namespace from http://apple.com/ns/calendarserver/ to
        http://calendarserver.org/ns/.
 
+
+
+
+
+Daboo                                                          [Page 11]
+
+                              CalDAV Proxy                 November 2012
+
+
    3.  Made "proxy group" principals child resources of their "owner"
        principals.
 
@@ -527,7 +626,7 @@
 Author's Address
 
    Cyrus Daboo
-   Apple Computer, Inc.
+   Apple Inc.
    1 Infinite Loop
    Cupertino, CA  95014
    USA
@@ -556,5 +655,18 @@
 
 
 
-Daboo                                                          [Page 10]
+
+
+
+
+
+
+
+
+
+
+
+
+
+Daboo                                                          [Page 12]
 

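The new Feature Discovery section means a client can probe for proxy
support before relying on any of these properties. A hedged Python 2
sketch (host, port, and principal path are illustrative):

    import httplib

    conn = httplib.HTTPConnection("calendar.example.com", 8008)
    conn.request("OPTIONS", "/principals/users/cyrus/")
    response = conn.getresponse()
    # Per section 5.1, supporting servers list "calendar-proxy" in DAV:
    dav = response.getheader("DAV", "")
    print "calendar-proxy" in [field.strip() for field in dav.split(",")]
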
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/doc/Extensions/caldav-proxy.xml
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/doc/Extensions/caldav-proxy.xml	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/doc/Extensions/caldav-proxy.xml	2012-11-28 17:35:29 UTC (rev 10098)
@@ -1,11 +1,12 @@
 <?xml version="1.0" encoding="UTF-8"?>
+<?xml-stylesheet type="text/xsl" href="../rfc2629.xslt"?>
 <!DOCTYPE rfc SYSTEM 'rfc2629.dtd' [
 <!ENTITY rfc2119 PUBLIC '' 'bibxml/reference.RFC.2119.xml'>
 <!ENTITY rfc2518 PUBLIC '' 'bibxml/reference.RFC.2518.xml'>
 <!ENTITY rfc3744 PUBLIC '' 'bibxml/reference.RFC.3744.xml'>
 <!ENTITY rfc4791 PUBLIC '' 'bibxml/reference.RFC.4791.xml'>
-<!ENTITY I-D.dusseault-caldav PUBLIC '' 'bibxml3/reference.I-D.dusseault-caldav.xml'>
-<!ENTITY I-D.desruisseaux-caldav-sched PUBLIC '' 'bibxml3/reference.I-D.desruisseaux-caldav-sched.xml'>
+<!ENTITY rfc4918 PUBLIC '' 'bibxml/reference.RFC.4918.xml'>
+<!ENTITY rfc6638 PUBLIC '' 'bibxml/reference.RFC.6638.xml'>
 ]> 
 <?rfc toc="yes"?>
 <?rfc tocdepth="4"?>
@@ -17,12 +18,12 @@
 <?rfc compact="yes"?>
 <?rfc subcompact="no"?>
 <?rfc private="Calendar Server Extension"?>
-<rfc ipr="none" docName='caldav-cu-proxy-02'>
+<rfc ipr="none" docName='caldav-cu-proxy-03'>
     <front>
         <title abbrev="CalDAV Proxy">Calendar User Proxy Functionality in CalDAV</title> 
         <author initials="C." surname="Daboo" fullname="Cyrus Daboo">
-            <organization abbrev="Apple Computer">
-                Apple Computer, Inc.
+            <organization abbrev="Apple Inc.">
+                Apple Inc.
             </organization>
             <address>
                 <postal>
@@ -36,7 +37,7 @@
                 <uri>http://www.apple.com/</uri>
             </address>
         </author>
-        <date year='2007'/>
+        <date/>
         <abstract>
             <t>
                 This specification defines an extension to CalDAV that makes it easy for clients to setup and manage calendar user proxies, using the WebDAV Access Control List extension as a basis.
@@ -94,10 +95,13 @@
                             Add an ACE to the calendar home collection giving the read-write "proxy group" inheritable read-write access.
                         </t>
                         <t>
-                            Add an ACE to each of the calendar Inbox and Outbox collections giving the <xref target='I-D.desruisseaux-caldav-sched'>CALDAV:schedule privilege</xref> to the read-write "proxy group".
+                            Add an ACE to each of the calendar Inbox and Outbox collections giving the <xref target='RFC6638'>CALDAV:schedule privilege</xref> to the read-write "proxy group".
                         </t>
                     </list>
                 </t>
+                <t>
+                	On each user principal resource, the server maintains two WebDAV properties containing lists of other user principals for which the target principal is a read-only or read-write proxy.
+                </t>
             </section>
             <section title='Client'>
                 <t>
@@ -107,11 +111,8 @@
                     The client can edit the list of proxies for the current principal by editing the DAV:group-member-set property on the relevant "proxy group" principal resource.
                 </t>
                 <t>
-                    The client can find out who the current principal is a proxy for by running a DAV:principal-match REPORT on the principal collection.
+                    The client can find out who the current principal is a proxy for by examining the CS:calendar-proxy-read-for and CS:calendar-proxy-write-for properties, possibly using the DAV:expand-property REPORT to get other useful properties about the principals being proxied for.
                 </t>
-                <t>
-                    Alternatively, the client can find out who the current principal is a proxy for by examining the DAV:group-membership property on the current principal resource looking for membership in other users' "proxy groups".
-                </t>
             </section>
         </section>
 
@@ -135,6 +136,11 @@
         </section>
             
         <section title='New features in CalDAV' anchor='changes'>
+            <section title="Feature Discovery">
+                <t>
+                    A server that supports the features described in this document MUST include "calendar-proxy" as a field in the DAV response header from an OPTIONS request on any resource that supports these features.
+                </t>
+            </section>
             <section title='Proxy Principal Resource'>
                 <t>
                     Each "regular" principal resource that needs to allow calendar user proxy support MUST be a collection resource. i.e. in addition to including the DAV:principal XML element in the DAV:resourcetype property on the resource, it MUST also include the DAV:collection XML element.
@@ -279,6 +285,47 @@
                   </figure>
                 </t>
             </section>
+            <section title="New Principal Properties">
+            	<t>
+            		Each "regular" principal that is a proxy for other principals MUST have the CS:calendar-proxy-read-for and CS:calendar-proxy-write-for WebDAV properties available on its principal resource, to allow clients to quickly find the "proxy for" information.
+            	</t>
+        <section title="CS:calendar-proxy-read-for Property">
+          <t>
+            <list style="hanging">
+              <t hangText="Name:">calendar-proxy-read-for</t>
+              <t hangText="Namespace:">http://calendarserver.org/ns/</t>
+              <t hangText="Purpose:">Lists principals for whom the current principal is a read-only proxy for.</t>
+              <t hangText="Protected:">This property MUST be protected.</t>
+              <t hangText="PROPFIND behavior:">This property SHOULD NOT be returned by a PROPFIND allprop request (as defined in Section 14.2 of <xref target="RFC4918"/>).</t>
+
+              <t hangText="Description:">This property allows a client to quickly determine the principal for whom the current principal is a read-only proxy for. The server MUST account for any group memberships of the current principal that are either direct or indirect members of a proxy group. e.g., if principal "A" assigns a group "G" as a read-only proxy, and principal "B" is a member of group "G", then principal "B" will see principal "A" listed in the CS:calendar-proxy-read-for property on their principal resource.</t>
+              <t hangText="Definition:">
+                <figure><artwork><![CDATA[
+  <!ELEMENT calendar-proxy-read-for (DAV:href*)>
+]]></artwork></figure>
+              </t>
+            </list>
+          </t>
+        </section>
+        <section title="CS:calendar-proxy-write-for Property">
+          <t>
+            <list style="hanging">
+              <t hangText="Name:">calendar-proxy-write-for</t>
+              <t hangText="Namespace:">http://calendarserver.org/ns/</t>
+              <t hangText="Purpose:">Lists principals for whom the current principal is a read-write proxy for.</t>
+              <t hangText="Protected:">This property MUST be protected.</t>
+              <t hangText="PROPFIND behavior:">This property SHOULD NOT be returned by a PROPFIND allprop request (as defined in Section 14.2 of <xref target="RFC4918"/>).</t>
+
+              <t hangText="Description:">This property allows a client to quickly determine the principal for whom the current principal is a read-write proxy for. The server MUST account for any group memberships of the current principal that are either direct or indirect members of a proxy group. e.g., if principal "A" assigns a group "G" as a read-write proxy, and principal "B" is a member of group "G", then principal "B" will see principal "A" listed in the CS:calendar-proxy-write-for property on their principal resource.</t>
+              <t hangText="Definition:">
+                <figure><artwork><![CDATA[
+  <!ELEMENT calendar-proxy-write-for (DAV:href*)>
+]]></artwork></figure>
+              </t>
+            </list>
+          </t>
+        </section>
+            </section>
             <section title='Privilege Provisioning'>
                 <t>
                     In order for a calendar user proxy to be able to access the calendars of the user they are proxying for, the server MUST ensure that the privileges on the relevant calendars are set up accordingly:
@@ -286,7 +333,7 @@
                         <t>The DAV:read privilege MUST be granted for read-only and read-write calendar user proxy principals</t>
                         <t>The DAV:write privilege MUST be granted for read-write calendar user proxy principals.</t>
                     </list>
-                    Additionally, the  CalDAV scheduling Inbox and Outbox calendar collections for the user allowing proxy access, MUST have the <xref target='I-D.desruisseaux-caldav-sched'>CALDAV:schedule privilege</xref> granted for read-write calendar user proxy principals.
+                    Additionally, the CalDAV scheduling Inbox and Outbox calendar collections for the user allowing proxy access MUST have the <xref target='RFC6638'>CALDAV:schedule privilege</xref> granted for read-write calendar user proxy principals.
                 </t>
                 <t>
                     Note that with a suitable repository layout, a server may be able to grant the appropriate privileges on a parent collection  and ensure that all the contained collections and resources inherit that. For example, given the following repository layout:
@@ -348,7 +395,8 @@
             &rfc2518;
             &rfc3744;
             &rfc4791;
-            &I-D.desruisseaux-caldav-sched; 
+            &rfc4918;
+            &rfc6638; 
         </references>
 <!--
 <references title='Informative References'>
@@ -360,14 +408,24 @@
             </t>
         </section>
         <section title='Change History'>
-            <t>Changes from -00:
+            <t>Changes in -03:
                 <list style='numbers'>
                     <t>
+                        Added OPTIONS DAV header token.
+                    </t>
+                    <t>
+                    	Added CS:calendar-proxy-read-for and CS:calendar-proxy-write-for properties for faster discovery of proxy relationships.
+                    </t>
+                </list>
+            </t>
+            <t>Changes in -02:
+                <list style='numbers'>
+                    <t>
                         Updated to RFC 4791 reference.
                     </t>
                 </list>
             </t>
-            <t>Changes from -00:
+            <t>Changes in -01:
                 <list style='numbers'>
                     <t>
                         Added more details on actual CalDAV protocol changes.

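The XML source above is what generates the txt rendering of the two new
"proxy for" properties. A hedged Python 2 sketch of fetching them with a
Depth: 0 PROPFIND (host and principal path are illustrative; the namespace
is the one the specification defines):

    import httplib

    body = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<D:propfind xmlns:D="DAV:" xmlns:CS="http://calendarserver.org/ns/">'
        '<D:prop>'
        '<CS:calendar-proxy-read-for/>'
        '<CS:calendar-proxy-write-for/>'
        '</D:prop>'
        '</D:propfind>'
    )
    conn = httplib.HTTPConnection("calendar.example.com", 8008)
    conn.request("PROPFIND", "/principals/users/red/", body,
                 {"Depth": "0", "Content-Type": "text/xml; charset=utf-8"})
    print conn.getresponse().read()  # expect a 207 Multi-Status response
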
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/doc/calendarserver_manage_principals.8
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/doc/calendarserver_manage_principals.8	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/doc/calendarserver_manage_principals.8	2012-11-28 17:35:29 UTC (rev 10098)
@@ -38,6 +38,8 @@
 .Op Fl -get-auto-schedule
 .Op Fl -set-auto-schedule-mode Ar none|accept-always|decline-always|accept-if-free|decline-if-busy|automatic
 .Op Fl -get-auto-schedule-mode
+.Op Fl -set-auto-accept-group Ar group
+.Op Fl -get-auto-accept-group
 .Op Fl -add Ar locations|resources full-name [record-name] [GUID]
 .Op Fl -remove
 .Ar principal
@@ -123,6 +125,11 @@
 Enable or disable automatic scheduling.
 .It Fl -get-auto-schedule
 Get the automatic scheduling state.
+.It Fl -set-auto-accept-group Ar group
+The principal will automatically accept invitations from any member of the group
+(as long as there are no scheduling conflicts).
+.It Fl -get-auto-accept-group
+Get the currently assigned auto-accept group for the principal.
 .It Fl -add Ar locations|resources full-name [record-name] [GUID]
 Add a new location or resource. Record name and GUID are optional.  If
 GUID is not specified, one will be generated.  If record name is not

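Taken together with the existing flags, an illustrative invocation would be
"calendarserver_manage_principals --set-auto-accept-group <group>
<principal>" to assign the group, and "calendarserver_manage_principals
--get-auto-accept-group <principal>" to read it back; the exact group and
principal spellings follow the same conventions as the tool's other
options.
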
Deleted: CalendarServer/branches/users/cdaboo/managed-attachments/lib-patches/pycrypto/__init__.py.patch
===================================================================
--- CalendarServer/trunk/lib-patches/pycrypto/__init__.py.patch	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/lib-patches/pycrypto/__init__.py.patch	2012-11-28 17:35:29 UTC (rev 10098)
@@ -1,6 +0,0 @@
-Index: lib/Crypto/Random/Fortuna/__init__.py
-===================================================================
---- lib/Crypto/Random/Fortuna/__init__.py
-+++ lib/Crypto/Random/Fortuna/__init__.py
-@@ -0,0 +1 @@
-+#

Copied: CalendarServer/branches/users/cdaboo/managed-attachments/lib-patches/pycrypto/__init__.py.patch (from rev 10097, CalendarServer/trunk/lib-patches/pycrypto/__init__.py.patch)
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/lib-patches/pycrypto/__init__.py.patch	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/lib-patches/pycrypto/__init__.py.patch	2012-11-28 17:35:29 UTC (rev 10098)
@@ -0,0 +1,6 @@
+Index: lib/Crypto/Random/Fortuna/__init__.py
+===================================================================
+--- lib/Crypto/Random/Fortuna/__init__.py
++++ lib/Crypto/Random/Fortuna/__init__.py
+@@ -0,0 +1 @@
++#

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/support/Makefile.Apple
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/support/Makefile.Apple	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/support/Makefile.Apple	2012-11-28 17:35:29 UTC (rev 10098)
@@ -60,16 +60,17 @@
 sqlparse-0.1.2::        $(BuildDirectory)/sqlparse-0.1.2
 setproctitle-1.1.6::	$(BuildDirectory)/setproctitle-1.1.6
 psutil-0.6.1::		$(BuildDirectory)/psutil-0.6.1
+pycrypto-2.5::		$(BuildDirectory)/pycrypto-2.5
 $(Project)::            $(BuildDirectory)/$(Project)
 
-build:: PyKerberos pycalendar PyGreSQL-4.0 sqlparse-0.1.2 setproctitle-1.1.6 psutil-0.6.1 $(Project)
+build:: PyKerberos pycalendar PyGreSQL-4.0 sqlparse-0.1.2 setproctitle-1.1.6 psutil-0.6.1 pycrypto-2.5 $(Project)
 
 setup:
 	$(_v) ./run -g
 
-prep:: setup CalDAVTester.tgz PyKerberos.tgz pycalendar.tgz PyGreSQL-4.0.tgz sqlparse-0.1.2.tgz setproctitle-1.1.6.tgz psutil-0.6.1.tgz
+prep:: setup CalDAVTester.tgz PyKerberos.tgz pycalendar.tgz PyGreSQL-4.0.tgz sqlparse-0.1.2.tgz setproctitle-1.1.6.tgz psutil-0.6.1.tgz pycrypto-2.5.tgz
 
-PyKerberos pycalendar PyGreSQL-4.0 sqlparse-0.1.2 setproctitle-1.1.6 psutil-0.6.1 $(Project)::
+PyKerberos pycalendar PyGreSQL-4.0 sqlparse-0.1.2 setproctitle-1.1.6 psutil-0.6.1 pycrypto-2.5 $(Project)::
 	@echo "Building $@..."
 	$(_v) cd $(BuildDirectory)/$@ && $(Environment) $(PYTHON) setup.py build
 
@@ -81,6 +82,7 @@
 	$(_v) cd $(BuildDirectory)/sqlparse-0.1.2     && $(Environment) $(PYTHON) setup.py install $(PY_INSTALL_FLAGS)
 	$(_v) cd $(BuildDirectory)/setproctitle-1.1.6 && $(Environment) $(PYTHON) setup.py install $(PY_INSTALL_FLAGS)
 	$(_v) cd $(BuildDirectory)/psutil-0.6.1       && $(Environment) $(PYTHON) setup.py install $(PY_INSTALL_FLAGS)
+	$(_v) cd $(BuildDirectory)/pycrypto-2.5       && $(Environment) $(PYTHON) setup.py install $(PY_INSTALL_FLAGS)
 	$(_v) for so in $$(find "$(DSTROOT)$(PY_HOME)/lib" -type f -name '*.so'); do $(STRIP) -Sx "$${so}"; done 
 	$(_v) $(INSTALL_DIRECTORY) "$(DSTROOT)$(SIPP)$(ETCDIR)$(CALDAVDSUBDIR)"
 	$(_v) $(INSTALL_FILE) "$(Sources)/conf/caldavd-apple.plist" "$(DSTROOT)$(SIPP)$(ETCDIR)$(CALDAVDSUBDIR)/caldavd.plist"
@@ -110,23 +112,23 @@
 	$(_v) $(INSTALL_DIRECTORY) -o "$(CS_USER)" -g "$(CS_GROUP)" -m 0755 "$(DSTROOT)$(VARDIR)/log$(CALDAVDSUBDIR)"
 	$(_v) $(INSTALL_DIRECTORY) "$(DSTROOT)$(SIPP)$(NSLIBRARYDIR)/LaunchDaemons"
 	$(_v) $(INSTALL_FILE) "$(Sources)/contrib/launchd/calendarserver.plist" "$(DSTROOT)$(SIPP)$(NSLIBRARYDIR)/LaunchDaemons/org.calendarserver.calendarserver.plist"
-	@echo "Installing migration config..."
+	@echo "Installing migration extras script..."
 	$(_v) $(INSTALL_DIRECTORY) "$(DSTROOT)$(SERVERSETUP)/MigrationExtras"
 	$(_v) $(INSTALL_FILE) "$(Sources)/contrib/migration/calendarmigrator.py" "$(DSTROOT)$(SERVERSETUP)/MigrationExtras/70_calendarmigrator.py"
 	$(_v) chmod ugo+x "$(DSTROOT)$(SERVERSETUP)/MigrationExtras/70_calendarmigrator.py"
-	@echo "Installing server promotion config..."
+	@echo "Installing common extras script..."
+	$(_v) $(INSTALL_DIRECTORY) "$(DSTROOT)$(SERVERSETUP)/CommonExtras"
+	$(_v) $(INSTALL_FILE) "$(Sources)/contrib/migration/calendarcommonextra.py" "$(DSTROOT)$(SERVERSETUP)/CommonExtras/70_calendarcommonextra.py"
+	$(_v) chmod ugo+x "$(DSTROOT)$(SERVERSETUP)/CommonExtras/70_calendarcommonextra.py"
+	@echo "Installing server promotion extras script..."
 	$(_v) $(INSTALL_DIRECTORY) "$(DSTROOT)$(SERVERSETUP)/PromotionExtras"
 	$(_v) $(INSTALL_FILE) "$(Sources)/contrib/migration/calendarpromotion.py" "$(DSTROOT)$(SERVERSETUP)/PromotionExtras/59_calendarpromotion.py"
 	$(_v) chmod ugo+x "$(DSTROOT)$(SERVERSETUP)/PromotionExtras/59_calendarpromotion.py"
-	@echo "Installing server demotion config..."
+	@echo "Installing server uninstall extras script..."
 	$(_v) $(INSTALL_DIRECTORY) "$(DSTROOT)$(SERVERSETUP)/UninstallExtras"
 	$(_v) $(INSTALL_FILE) "$(Sources)/contrib/migration/calendardemotion.py" "$(DSTROOT)$(SERVERSETUP)/UninstallExtras/59_calendardemotion.py"
 	$(_v) chmod ugo+x "$(DSTROOT)$(SERVERSETUP)/UninstallExtras/59_calendardemotion.py"
-	@echo "Installing database configuration scripts..."
-	$(_v) $(INSTALL_DIRECTORY) "$(DSTROOT)$(SERVERSETUP)/CommonExtras/PostgreSQLExtras"
-	$(_v) $(INSTALL_FILE) "$(Sources)/contrib/create_caldavd_db.sh" "$(DSTROOT)$(SERVERSETUP)/CommonExtras/PostgreSQLExtras/create_caldavd_db.sh"
-	$(_v) chmod ugo+x "$(DSTROOT)$(SERVERSETUP)/CommonExtras/PostgreSQLExtras/create_caldavd_db.sh"
-	@echo "Installing changeip config..."
+	@echo "Installing changeip script..."
 	$(_v) $(INSTALL_DIRECTORY) "$(DSTROOT)$(SIPP)$(LIBEXECDIR)/changeip"
 	$(_v) $(INSTALL_FILE) "$(Sources)/calendarserver/tools/changeip_calendar.py" "$(DSTROOT)$(SIPP)$(LIBEXECDIR)/changeip/changeip_calendar.py"
 	$(_v) chmod ugo+x "$(DSTROOT)$(SIPP)$(LIBEXECDIR)/changeip/changeip_calendar.py"

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/support/build.sh
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/support/build.sh	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/support/build.sh	2012-11-28 17:35:29 UTC (rev 10098)
@@ -93,16 +93,34 @@
 
   patches="${caldav}/lib-patches";
 
-  # Find a command that can hash up a string for us
-  if type -t openssl > /dev/null; then
-    hash="md5";
-    hash () { openssl dgst -md5 "$@"; }
-  elif type -t md5 > /dev/null; then
-    hash="md5";
+  # Find some hashing commands
+  # sha1() = sha1 hash, if available
+  # md5()  = md5 hash, if available
+  # hash() = default hash function
+  # $hash  = name of the type of hash used by hash()
+
+  hash="";
+
+  if type -ft openssl > /dev/null; then
+    if [ -z "${hash}" ]; then hash="md5"; fi;
+    md5 () { "$(type -p openssl)" dgst -md5 "$@"; }
+  elif type -ft md5 > /dev/null; then
+    if [ -z "${hash}" ]; then hash="md5"; fi;
+    md5 () { "$(type -p md5)" "$@"; }
+  elif type -ft md5sum > /dev/null; then
+    if [ -z "${hash}" ]; then hash="md5"; fi;
+    md5 () { "$(type -p md5sum)" "$@"; }
+  fi;
+
+  if type -ft shasum > /dev/null; then
+    if [ -z "${hash}" ]; then hash="sha1"; fi;
+    sha1 () { "$(type -p shasum)" "$@"; }
+  fi;
+
+  if [ "${hash}" == "sha1" ]; then
+    hash () { sha1 "$@"; }
+  elif [ "${hash}" == "md5" ]; then
     hash () { md5 "$@"; }
-  elif type -t md5sum > /dev/null; then
-    hash="md5";
-    hash () { md5sum "$@"; }
   elif type -t cksum > /dev/null; then
     hash="hash";
     hash () { cksum "$@" | cut -f 1 -d " "; }
@@ -110,7 +128,6 @@
     hash="hash";
     hash () { sum "$@" | cut -f 1 -d " "; }
   else
-    hash="";
     hash () { echo "INTERNAL ERROR: No hash function."; exit 1; }
   fi;
 
@@ -173,12 +190,14 @@
 www_get () {
   if ! "${do_get}"; then return 0; fi;
 
-  local md5="";
+  local  md5="";
+  local sha1="";
 
   OPTIND=1;
-  while getopts "m:" option; do
+  while getopts "m:s:" option; do
     case "${option}" in
-      'm') md5="${OPTARG}"; ;;
+      'm')  md5="${OPTARG}"; ;;
+      's') sha1="${OPTARG}"; ;;
     esac;
   done;
   shift $((${OPTIND} - 1));
@@ -211,18 +230,27 @@
       check_hash () {
         local file="$1"; shift;
 
-        if [ "${hash}" == "md5" ]; then
-          local sum="$(hash "${file}" | perl -pe 's|^.*([0-9a-f]{32}).*$|\1|')";
-          if [ -n "${md5}" ]; then
-            echo "Checking MD5 sum for ${name}...";
-            if [ "${md5}" != "${sum}" ]; then
-              echo "ERROR: MD5 sum for downloaded file is wrong: ${sum} != ${md5}";
-              return 1;
-            fi;
-          else
-            echo "MD5 sum for ${name} is ${sum}";
+        local sum="$(md5 "${file}" | perl -pe 's|^.*([0-9a-f]{32}).*$|\1|')";
+        if [ -n "${md5}" ]; then
+          echo "Checking MD5 sum for ${name}...";
+          if [ "${md5}" != "${sum}" ]; then
+            echo "ERROR: MD5 sum for downloaded file is wrong: ${sum} != ${md5}";
+            return 1;
           fi;
+        else
+          echo "MD5 sum for ${name} is ${sum}";
         fi;
+
+        local sum="$(sha1 "${file}" | perl -pe 's|^.*([0-9a-f]{40}).*$|\1|')";
+        if [ -n "${sha1}" ]; then
+          echo "Checking SHA1 sum for ${name}...";
+          if [ "${sha1}" != "${sum}" ]; then
+            echo "ERROR: SHA1 sum for downloaded file is wrong: ${sum} != ${sha1}";
+            return 1;
+          fi;
+        else
+          echo "SHA1 sum for ${name} is ${sum}";
+        fi;
       }
 
       if [ ! -f "${cache_file}" ]; then
@@ -264,7 +292,7 @@
 
           if egrep "^${pkg_host}" "${HOME}/.ssh/known_hosts" > /dev/null 2>&1; then
             echo "Copying cache file up to ${pkg_host}.";
-            if ! scp "${tmp}" "${pkg_host}:/www/hosts/${pkg_host}${pkg_path}/${cache_basename}"; then
+            if ! scp "${tmp}" "${pkg_host}:/var/www/static${pkg_path}/${cache_basename}"; then
               echo "Failed to copy cache file up to ${pkg_host}.";
             fi;
             echo ""
@@ -441,10 +469,10 @@
   local revision="0";     # Revision (if svn)
   local get_type="www";   # Protocol to use
   local  version="";      # Minimum version required
-  local   f_hash="";      # Checksum
+  local   f_hash="";      # Checksum flag
 
   OPTIND=1;
-  while getopts "ofi:er:v:m:" option; do
+  while getopts "ofi:er:v:m:s:" option; do
     case "${option}" in
       'o') optional="true"; ;;
       'f') override="true"; ;;
@@ -452,6 +480,7 @@
       'r') get_type="svn"; revision="${OPTARG}"; ;;
       'v')  version="-v ${OPTARG}"; ;;
       'm')   f_hash="-m ${OPTARG}"; ;;
+      's')   f_hash="-s ${OPTARG}"; ;;
       'i')
         if [ -z "${OPTARG}" ]; then
           inplace=".";
@@ -535,9 +564,10 @@
   local f_hash="";
 
   OPTIND=1;
-  while getopts "m:" option; do
+  while getopts "m:s:" option; do
     case "${option}" in
       'm') f_hash="-m ${OPTARG}"; ;;
+      's') f_hash="-s ${OPTARG}"; ;;
     esac;
   done;
   shift $((${OPTIND} - 1));
@@ -703,12 +733,6 @@
       "${svn_uri_base}/PyKerberos/trunk";
   fi;
 
-  if [ "$(uname -s)" == "Darwin" ]; then
-    py_dependency -r 6656 \
-      "PyOpenDirectory" "opendirectory" "PyOpenDirectory" \
-      "${svn_uri_base}/PyOpenDirectory/trunk";
-  fi;
-
   py_dependency -v 0.5 -r 1038 \
     "xattr" "xattr" "xattr" \
     "http://svn.red-bean.com/bob/xattr/releases/xattr-0.6.1/";
@@ -759,7 +783,7 @@
 
   local sv="0.1.2";
   local sq="sqlparse-${sv}";
-  py_dependency -o -v "${sv}" -m "aa9852ad81822723adcd9f96838de14e" \
+  py_dependency -o -v "${sv}" -s "978874e5ebbd78e6d419e8182ce4fb3c30379642" \
     "SQLParse" "sqlparse" "${sq}" \
     "http://python-sqlparse.googlecode.com/files/${sq}.tar.gz";
 

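build.sh now defines separate md5() and sha1() helpers, and check_hash()
verifies whichever expected sums were supplied via the new -m and/or -s
flags before a download is accepted. A hedged Python equivalent of that
verification logic:

    import hashlib

    def check_hash(path, md5=None, sha1=None):
        """Return True if the file matches every supplied checksum."""
        with open(path, "rb") as f:
            data = f.read()
        if md5 is not None and hashlib.md5(data).hexdigest() != md5:
            return False
        if sha1 is not None and hashlib.sha1(data).hexdigest() != sha1:
            return False
        return True
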
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twext/enterprise/dal/test/test_parseschema.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twext/enterprise/dal/test/test_parseschema.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twext/enterprise/dal/test/test_parseschema.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -216,9 +216,9 @@
         """
         for identicalSchema in [
                 "create table sample (example integer unique);",
-                "create table sample (example integer, unique(example));",
+                "create table sample (example integer, unique (example));",
                 "create table sample "
-                "(example integer, constraint unique_example unique(example))"]:
+                "(example integer, constraint unique_example unique (example))"]:
             s = self.schemaFromString(identicalSchema)
             table = s.tableNamed('sample')
             column = table.columnNamed('example')
@@ -242,14 +242,14 @@
             self.assertEqual(expr.op, '>')
             self.assertEqual(constraint.name, checkName)
         checkOneConstraint(
-            "create table sample (example integer check(example >  5));"
+            "create table sample (example integer check (example >  5));"
         )
         checkOneConstraint(
-            "create table sample (example integer, check(example  > 5));"
+            "create table sample (example integer, check (example  > 5));"
         )
         checkOneConstraint(
             "create table sample "
-            "(example integer, constraint gt_5 check(example>5))", "gt_5"
+            "(example integer, constraint gt_5 check (example>5))", "gt_5"
         )
 
 
@@ -273,7 +273,7 @@
             )
         checkOneConstraint(
             "create table sample "
-            "(example integer check(example = lower(example)));"
+            "(example integer check (example = lower (example)));"
         )
 
 
@@ -283,7 +283,7 @@
         listing that column as a unique set.
         """
         s = self.schemaFromString(
-            "create table a (b integer, c integer, unique(b, c), unique(c));"
+            "create table a (b integer, c integer, unique (b, c), unique (c));"
         )
         a = s.tableNamed('a')
         b = a.columnNamed('b')
@@ -310,7 +310,7 @@
         C{primaryKey} attribute on the Table object.
         """
         s = self.schemaFromString(
-            "create table a (b integer, c integer, primary key(b, c))"
+            "create table a (b integer, c integer, primary key (b, c))"
         )
         a = s.tableNamed("a")
         self.assertEquals(

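The whitespace changes above (unique(example) becoming unique (example), and similarly for check and primary key) track a tokenization difference in the sqlparse release pinned in build.sh: without the space, the keyword and the parenthesized column list are split differently. A quick way to inspect the tokens, assuming sqlparse 0.1.2 is importable:

    import sqlparse

    sql = "create table sample (example integer, unique (example));"
    statement = sqlparse.parse(sql)[0]
    # Flatten the parse tree and print each leaf token's type and value to
    # see how "unique" and "(example)" are tokenized.
    for token in statement.flatten():
        print token.ttype, repr(token.value)
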
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/aggregate.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/aggregate.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/aggregate.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -67,6 +67,8 @@
                     )
                 recordTypes[recordType] = service
 
+            service.aggregateService = self
+
         self.realmName = realmName
         self._recordTypes = recordTypes
 

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/augment.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/augment.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/augment.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -60,6 +60,7 @@
         enabledForCalendaring=False,
         autoSchedule=False,
         autoScheduleMode="default",
+        autoAcceptGroup="",
         enabledForAddressBooks=False,
         enabledForLogin=True,
     ):
@@ -72,6 +73,7 @@
         self.enabledForLogin = enabledForLogin
         self.autoSchedule = autoSchedule
         self.autoScheduleMode = autoScheduleMode if autoScheduleMode in allowedAutoScheduleModes else "default"
+        self.autoAcceptGroup = autoAcceptGroup
         self.clonedFromDefault = False
 
 recordTypesMap = {
@@ -459,6 +461,8 @@
         addSubElement(recordNode, xmlaugmentsparser.ELEMENT_AUTOSCHEDULE, "true" if record.autoSchedule else "false")
         if record.autoScheduleMode:
             addSubElement(recordNode, xmlaugmentsparser.ELEMENT_AUTOSCHEDULE_MODE, record.autoScheduleMode)
+        if record.autoAcceptGroup:
+            addSubElement(recordNode, xmlaugmentsparser.ELEMENT_AUTOACCEPTGROUP, record.autoAcceptGroup)
 
     def refresh(self):
         """
@@ -570,11 +574,11 @@
         """
         
         # Query for the record information
-        results = (yield self.query("select UID, ENABLED, SERVERID, PARTITIONID, CALENDARING, ADDRESSBOOKS, AUTOSCHEDULE, AUTOSCHEDULEMODE, LOGINENABLED from AUGMENTS where UID = :1", (uid,)))
+        results = (yield self.query("select UID, ENABLED, SERVERID, PARTITIONID, CALENDARING, ADDRESSBOOKS, AUTOSCHEDULE, AUTOSCHEDULEMODE, AUTOACCEPTGROUP, LOGINENABLED from AUGMENTS where UID = :1", (uid,)))
         if not results:
             returnValue(None)
         else:
-            uid, enabled, serverid, partitionid, enabledForCalendaring, enabledForAddressBooks, autoSchedule, autoScheduleMode, enabledForLogin = results[0]
+            uid, enabled, serverid, partitionid, enabledForCalendaring, enabledForAddressBooks, autoSchedule, autoScheduleMode, autoAcceptGroup, enabledForLogin = results[0]
             
             record = AugmentRecord(
                 uid = uid,
@@ -586,6 +590,7 @@
                 enabledForLogin = enabledForLogin == "T",
                 autoSchedule = autoSchedule == "T",
                 autoScheduleMode = autoScheduleMode,
+                autoAcceptGroup = autoAcceptGroup,
             )
             
             returnValue(record)
@@ -648,6 +653,7 @@
                 ("ADDRESSBOOKS",     "text(1)"),
                 ("AUTOSCHEDULE",     "text(1)"),
                 ("AUTOSCHEDULEMODE", "text"),
+                ("AUTOACCEPTGROUP",  "text"),
                 ("LOGINENABLED",     "text(1)"),
             ),
             ifnotexists=True,
@@ -671,8 +677,8 @@
     def _addRecord(self, record):
         yield self.execute(
             """insert or replace into AUGMENTS
-            (UID, ENABLED, SERVERID, PARTITIONID, CALENDARING, ADDRESSBOOKS, AUTOSCHEDULE, AUTOSCHEDULEMODE, LOGINENABLED)
-            values (:1, :2, :3, :4, :5, :6, :7, :8, :9)""",
+            (UID, ENABLED, SERVERID, PARTITIONID, CALENDARING, ADDRESSBOOKS, AUTOSCHEDULE, AUTOSCHEDULEMODE, AUTOACCEPTGROUP, LOGINENABLED)
+            values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10)""",
             (
                 record.uid,
                 "T" if record.enabled else "F",
@@ -682,6 +688,7 @@
                 "T" if record.enabledForAddressBooks else "F",
                 "T" if record.autoSchedule else "F",
                 record.autoScheduleMode if record.autoScheduleMode else "",
+                record.autoAcceptGroup,
                 "T" if record.enabledForLogin else "F",
             )
         )
@@ -703,8 +710,8 @@
     def _addRecord(self, record):
         yield self.execute(
             """insert into AUGMENTS
-            (UID, ENABLED, SERVERID, PARTITIONID, CALENDARING, ADDRESSBOOKS, AUTOSCHEDULE, AUTOSCHEDULEMODE, LOGINENABLED)
-            values (:1, :2, :3, :4, :5, :6, :7, :8, :9)""",
+            (UID, ENABLED, SERVERID, PARTITIONID, CALENDARING, ADDRESSBOOKS, AUTOSCHEDULE, AUTOSCHEDULEMODE, AUTOACCEPTGROUP, LOGINENABLED)
+            values (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10)""",
             (
                 record.uid,
                 "T" if record.enabled else "F",
@@ -714,6 +721,7 @@
                 "T" if record.enabledForAddressBooks else "F",
                 "T" if record.autoSchedule else "F",
                 record.autoScheduleMode if record.autoScheduleMode else "",
+                record.autoAcceptGroup,
                 "T" if record.enabledForLogin else "F",
             )
         )
@@ -722,8 +730,8 @@
     def _modifyRecord(self, record):
         yield self.execute(
             """update AUGMENTS set
-            (UID, ENABLED, SERVERID, PARTITIONID, CALENDARING, ADDRESSBOOKS, AUTOSCHEDULE, AUTOSCHEDULEMODE, LOGINENABLED) =
-            (:1, :2, :3, :4, :5, :6, :7, :8, :9) where UID = :10""",
+            (UID, ENABLED, SERVERID, PARTITIONID, CALENDARING, ADDRESSBOOKS, AUTOSCHEDULE, AUTOSCHEDULEMODE, AUTOACCEPTGROUP, LOGINENABLED) =
+            (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10) where UID = :11""",
             (
                 record.uid,
                 "T" if record.enabled else "F",
@@ -733,6 +741,7 @@
                 "T" if record.enabledForAddressBooks else "F",
                 "T" if record.autoSchedule else "F",
                 record.autoScheduleMode if record.autoScheduleMode else "",
+                record.autoAcceptGroup,
                 "T" if record.enabledForLogin else "F",
                 record.uid,
             )

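The augment database grows an AUTOACCEPTGROUP column, threaded through the select, insert, and update statements above. A minimal round-trip sketch using sqlite3 with the same column layout; the server binds :1-style parameters through its own database layer, so the qmark style and the "resource01" UID below are purely illustrative:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "create table AUGMENTS ("
        "UID text unique, ENABLED text(1), SERVERID text, PARTITIONID text, "
        "CALENDARING text(1), ADDRESSBOOKS text(1), AUTOSCHEDULE text(1), "
        "AUTOSCHEDULEMODE text, AUTOACCEPTGROUP text, LOGINENABLED text(1))"
    )
    conn.execute(
        "insert into AUGMENTS values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
        ("resource01", "T", "", "", "T", "F", "F", "default",
         "77A8EB52-AA2A-42ED-8843-B2BEE863AC70", "T"),
    )
    row = conn.execute(
        "select AUTOACCEPTGROUP from AUGMENTS where UID = ?",
        ("resource01",),
    ).fetchone()
    assert row[0] == "77A8EB52-AA2A-42ED-8843-B2BEE863AC70"
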
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/directory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/directory.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/directory.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -82,6 +82,8 @@
     searchContext_location = "location"
     searchContext_attendee = "attendee"
 
+    aggregateService = None
+
     def _generatedGUID(self):
         if not hasattr(self, "_guid"):
             realmName = self.realmName
@@ -477,6 +479,7 @@
             autoaccept = wpframework.get("AutoAcceptsInvitation", False)
             proxy = wpframework.get("CalendaringDelegate", None)
             read_only_proxy = wpframework.get("ReadOnlyCalendaringDelegate", None)
+            autoAcceptGroup = wpframework.get("AutoAcceptGroup", "")
         except (ExpatError, AttributeError), e:
             self.log_error(
                 "Failed to parse ResourceInfo attribute of record (%s)%s (guid=%s): %s\n%s" %
@@ -484,7 +487,7 @@
             )
             raise ValueError("Invalid ResourceInfo")
 
-        return (autoaccept, proxy, read_only_proxy,)
+        return (autoaccept, proxy, read_only_proxy, autoAcceptGroup)
 
 
     def getExternalProxyAssignments(self):
@@ -1245,6 +1248,7 @@
         firstName=None, lastName=None, emailAddresses=set(),
         calendarUserAddresses=set(),
         autoSchedule=False, autoScheduleMode=None,
+        autoAcceptGroup="",
         enabledForCalendaring=None,
         enabledForAddressBooks=None,
         uid=None,
@@ -1280,6 +1284,7 @@
         self.enabledForCalendaring = enabledForCalendaring
         self.autoSchedule = autoSchedule
         self.autoScheduleMode = autoScheduleMode
+        self.autoAcceptGroup = autoAcceptGroup
         self.enabledForAddressBooks = enabledForAddressBooks
         self.enabledForLogin = enabledForLogin
         self.extProxies = extProxies
@@ -1353,6 +1358,7 @@
             self.enabledForAddressBooks = augment.enabledForAddressBooks
             self.autoSchedule = augment.autoSchedule
             self.autoScheduleMode = augment.autoScheduleMode
+            self.autoAcceptGroup = augment.autoAcceptGroup
             self.enabledForLogin = augment.enabledForLogin
 
             if (self.enabledForCalendaring or self.enabledForAddressBooks) and self.recordType == self.service.recordType_groups:
@@ -1556,7 +1562,28 @@
         return True
 
 
+    def autoAcceptMembers(self):
+        """
+        Return the list of GUIDs from which this record will automatically
+        accept invites (assuming no conflicts).  This list is based on the
+        group assigned to record.autoAcceptGroup.  Cache the expanded group
+        membership within the record.
 
+        @return: the list of members of the autoAcceptGroup, or an empty list if
+            not assigned
+        @rtype: C{list} of GUID C{str}
+        """
+        if not hasattr(self, "_cachedAutoAcceptMembers"):
+            self._cachedAutoAcceptMembers = []
+            if self.autoAcceptGroup:
+                service = self.service.aggregateService or self.service
+                groupRecord = service.recordWithGUID(self.autoAcceptGroup)
+                if groupRecord is not None:
+                    self._cachedAutoAcceptMembers = [m.guid for m in groupRecord.expandedMembers()]
+
+        return self._cachedAutoAcceptMembers
+
+
 class DirectoryError(RuntimeError):
     """
     Generic directory error.

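autoAcceptMembers() memoizes the expanded group on the record, and resolves the group through the aggregate service when one is attached (via the aggregateService back-pointer set in aggregate.py above), so the auto-accept group may live in a different sub-directory than the record itself. A standalone sketch of the same idiom, with hypothetical record and service stand-ins:

    # Sketch only: record and service are stand-ins for IDirectoryRecord
    # and DirectoryService objects.
    def auto_accept_members(record, service):
        if not hasattr(record, "_cachedAutoAcceptMembers"):
            record._cachedAutoAcceptMembers = []
            if record.autoAcceptGroup:
                # Prefer the aggregate service so the group can come from a
                # different sub-directory than the record.
                lookup = getattr(service, "aggregateService", None) or service
                group = lookup.recordWithGUID(record.autoAcceptGroup)
                if group is not None:
                    record._cachedAutoAcceptMembers = [
                        member.guid for member in group.expandedMembers()
                    ]
        return record._cachedAutoAcceptMembers
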
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/idirectory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/idirectory.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/idirectory.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -102,20 +102,21 @@
         """
         @param tokens: The tokens to search on
         @type tokens: C{list} of C{str} (utf-8 bytes)
-        @param context: An indication of what the end user is searching
-            for; "attendee", "location", or None
+
+        @param context: An indication of what the end user is searching for;
+            "attendee", "location", or None
         @type context: C{str}
-        @return: a deferred sequence of L{IDirectoryRecord}s which
-            match the given tokens and optional context.
 
-        Each token is searched for within each record's full name and
-        email address; if each token is found within a record that
-        record is returned in the results.
+        @return: a deferred sequence of L{IDirectoryRecord}s which match the
+            given tokens and optional context.
 
-        If context is None, all record types are considered.  If
-        context is "location", only locations are considered.  If
-        context is "attendee", only users, groups, and resources
-        are considered.
+            Each token is searched for within each record's full name and email
+            address; if each token is found within a record that record is
+            returned in the results.
+
+            If context is None, all record types are considered.  If context is
+            "location", only locations are considered.  If context is
+            "attendee", only users, groups, and resources are considered.
         """
 
 

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/ldapdirectory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/ldapdirectory.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/ldapdirectory.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -182,6 +182,7 @@
                 "autoScheduleEnabledValue": "yes",
                 "proxyAttr": None, # list of GUIDs
                 "readOnlyProxyAttr": None, # list of GUIDs
+                "autoAcceptGroupAttr": None, # single group GUID
             },
             "partitionSchema": {
                 "serverIdAttr": None, # maps to augments server-id
@@ -261,6 +262,8 @@
             attrSet.add(self.resourceSchema["resourceInfoAttr"])
         if self.resourceSchema["autoScheduleAttr"]:
             attrSet.add(self.resourceSchema["autoScheduleAttr"])
+        if self.resourceSchema["autoAcceptGroupAttr"]:
+            attrSet.add(self.resourceSchema["autoAcceptGroupAttr"])
         if self.resourceSchema["proxyAttr"]:
             attrSet.add(self.resourceSchema["proxyAttr"])
         if self.resourceSchema["readOnlyProxyAttr"]:
@@ -787,6 +790,7 @@
         proxyGUIDs = ()
         readOnlyProxyGUIDs = ()
         autoSchedule = False
+        autoAcceptGroup = ""
         memberGUIDs = []
 
         # LDAP attribute -> principal matchings
@@ -836,7 +840,8 @@
                         (
                             autoSchedule,
                             proxy,
-                            readOnlyProxy
+                            readOnlyProxy,
+                            autoAcceptGroup
                         ) = self.parseResourceInfo(
                             resourceInfo,
                             guid,
@@ -861,6 +866,9 @@
                 if self.resourceSchema["readOnlyProxyAttr"]:
                     readOnlyProxyGUIDs = set(self._getMultipleLdapAttributes(attrs,
                         self.resourceSchema["readOnlyProxyAttr"]))
+                if self.resourceSchema["autoAcceptGroupAttr"]:
+                    autoAcceptGroup = self._getUniqueLdapAttribute(attrs,
+                        self.resourceSchema["autoAcceptGroupAttr"])
 
         serverID = partitionID = None
         if self.partitionSchema["serverIdAttr"]:
@@ -906,6 +914,7 @@
                 partitionID=partitionID,
                 enabledForCalendaring=enabledForCalendaring,
                 autoSchedule=autoSchedule,
+                autoAcceptGroup=autoAcceptGroup,
                 enabledForAddressBooks=enabledForAddressBooks, # TODO: add to LDAP?
                 enabledForLogin=enabledForLogin,
             )

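For deployments that map resource info to individual LDAP attributes rather than a single plist blob, the new autoAcceptGroupAttr key joins the existing per-attribute schema. A configuration sketch using the attribute names exercised in test_ldapdirectory.py below; the actual attribute names depend on the directory server:

    resourceSchema = {
        "resourceInfoAttr": None,  # None: use the individual attrs below
        "autoScheduleAttr": "auto-schedule",
        "autoScheduleEnabledValue": "yes",
        "proxyAttr": "proxy",                        # list of GUIDs
        "readOnlyProxyAttr": "read-only-proxy",      # list of GUIDs
        "autoAcceptGroupAttr": "auto-accept-group",  # single group GUID
    }
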
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/principal.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/principal.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/principal.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -994,14 +994,20 @@
     def getAutoSchedule(self):
         return self.record.autoSchedule
 
-    def canAutoSchedule(self):
+    def canAutoSchedule(self, organizer=None):
         """
         Determine the auto-schedule state based on record state, type and config settings.
+
+        @param organizer: the CUA of the organizer trying to schedule this principal
+        @type organizer: C{str}
         """
-        
+
         if config.Scheduling.Options.AutoSchedule.Enabled:
-            if config.Scheduling.Options.AutoSchedule.Always or self.getAutoSchedule():
-                if self.getCUType() != "INDIVIDUAL" or config.Scheduling.Options.AutoSchedule.AllowUsers:
+            if (config.Scheduling.Options.AutoSchedule.Always or
+                self.getAutoSchedule() or
+                self.autoAcceptFromOrganizer(organizer)):
+                if (self.getCUType() != "INDIVIDUAL" or
+                    config.Scheduling.Options.AutoSchedule.AllowUsers):
                     return True
         return False
 
@@ -1012,9 +1018,65 @@
         augmentRecord.autoScheduleMode = autoScheduleMode
         (yield self.record.service.augmentService.addAugmentRecords([augmentRecord]))
 
-    def getAutoScheduleMode(self):
-        return self.record.autoScheduleMode
+    def getAutoScheduleMode(self, organizer=None):
+        """
+        Return the auto schedule mode value for the principal.  If the optional
+        organizer is provided, and that organizer is a member of the principal's
+        auto-accept group, return "automatic" instead; this allows specifying a
+        privileged group whose scheduling requests are automatically accepted or
+        declined, regardless of whether the principal is normally managed by a
+        delegate.
 
+        @param organizer: the CUA of the organizer scheduling this principal
+        @type organizer: C{str}
+        @return: auto schedule mode; one of: none, accept-always, decline-always,
+            accept-if-free, decline-if-busy, automatic (see stdconfig.py)
+        @rtype: C{str}
+        """
+        autoScheduleMode = self.record.autoScheduleMode
+        if self.autoAcceptFromOrganizer(organizer):
+            autoScheduleMode = "automatic"
+        return autoScheduleMode
+
+
+    @inlineCallbacks
+    def setAutoAcceptGroup(self, autoAcceptGroup):
+        """
+        Sets the group whose members can automatically schedule with this principal
+        even if this principal's auto-schedule is False (assuming no conflicts).
+
+        @param autoAcceptGroup:  GUID of the group
+        @type autoAcceptGroup: C{str}
+        """
+        self.record.autoAcceptGroup = autoAcceptGroup
+        augmentRecord = (yield self.record.service.augmentService.getAugmentRecord(self.record.guid, self.record.recordType))
+        augmentRecord.autoAcceptGroup = autoAcceptGroup
+        (yield self.record.service.augmentService.addAugmentRecords([augmentRecord]))
+
+    def getAutoAcceptGroup(self):
+        """
+        Returns the GUID of the auto-accept group assigned to this principal,
+        or an empty string if none is assigned.
+        """
+        return self.record.autoAcceptGroup
+
+    def autoAcceptFromOrganizer(self, organizer):
+        """
+        Is the organizer a member of this principal's autoAcceptGroup?
+
+        @param organizer: CUA of the organizer
+        @type organizer: C{str}
+        @return: True if the autoAcceptGroup is assigned, and the organizer is a member
+            of that group.  False otherwise.
+        @rtype: C{bool}
+        """
+        if organizer is not None and self.record.autoAcceptGroup is not None:
+            organizerPrincipal = self.parent.principalForCalendarUserAddress(organizer)
+            if organizerPrincipal is not None:
+                if organizerPrincipal.record.guid in self.record.autoAcceptMembers():
+                    return True
+        return False
+
     def getCUType(self):
         return self.record.getCUType()
 

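Taken together, canAutoSchedule() and getAutoScheduleMode() now make the auto-schedule decision organizer-sensitive: membership in the auto-accept group both enables auto-scheduling and forces the "automatic" mode. A minimal sketch of how a caller combines the two, mirroring the processing.py change further below (the wrapper function is hypothetical):

    def effective_auto_schedule_mode(principal, organizer):
        # organizer is a calendar user address (CUA) string, or None.
        if not principal.canAutoSchedule(organizer=organizer):
            return None  # no auto-reply; deliver to the inbox as usual
        # For auto-accept-group members this returns "automatic",
        # overriding any delegate-managed mode from the augment record.
        return principal.getAutoScheduleMode(organizer=organizer)
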
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/augments.xml
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/augments.xml	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/augments.xml	2012-11-28 17:35:29 UTC (rev 10098)
@@ -120,6 +120,7 @@
     <enable>true</enable>
     <enable-calendar>true</enable-calendar>
     <enable-addressbook>true</enable-addressbook>
+    <auto-accept-group>both_coasts</auto-accept-group>
   </record>
   <record>
     <uid>orion</uid>

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_directory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_directory.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_directory.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -757,6 +757,30 @@
             }
         )
 
+    def test_autoAcceptMembers(self):
+        """
+        autoAcceptMembers() returns an empty list if no autoAcceptGroup is
+        assigned, or the expanded membership if assigned.
+        """
+
+        # No auto-accept-group for "orion" in augments.xml
+        orion = self.directoryService.recordWithGUID("orion")
+        self.assertEquals(orion.autoAcceptMembers(), [])
+
+        # "both_coasts" group assigned to "apollo" in augments.xml
+        apollo = self.directoryService.recordWithGUID("apollo")
+        self.assertEquals(
+            set(apollo.autoAcceptMembers()),
+            set([
+                "8B4288F6-CC82-491D-8EF9-642EF4F3E7D0",
+                 "5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1",
+                 "5A985493-EE2C-4665-94CF-4DFEA3A89500",
+                 "6423F94A-6B76-4A3A-815B-D52CFD77935D",
+                 "right_coast",
+                 "left_coast",
+            ])
+        )
+
 class RecordsMatchingTokensTests(TestCase):
 
     @inlineCallbacks

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_ldapdirectory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_ldapdirectory.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_ldapdirectory.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -547,6 +547,7 @@
                     "autoScheduleAttr": None,
                     "proxyAttr": None,
                     "readOnlyProxyAttr": None,
+                    "autoAcceptGroupAttr": None,
                 },
                 "partitionSchema": {
                     "serverIdAttr": "server-id", # maps to augments server-id
@@ -762,6 +763,7 @@
                     "autoScheduleAttr": None,
                     "proxyAttr": None,
                     "readOnlyProxyAttr": None,
+                    "autoAcceptGroupAttr": None,
                 },
                 "partitionSchema": {
                     "serverIdAttr": "server-id", # maps to augments server-id
@@ -979,6 +981,7 @@
                     "autoScheduleAttr": None,
                     "proxyAttr": None,
                     "readOnlyProxyAttr": None,
+                    "autoAcceptGroupAttr": None,
                 },
                 "partitionSchema": {
                     "serverIdAttr": "server-id", # maps to augments server-id
@@ -1192,6 +1195,7 @@
                     "autoScheduleAttr": None,
                     "proxyAttr": None,
                     "readOnlyProxyAttr": None,
+                    "autoAcceptGroupAttr": None,
                 },
                 "partitionSchema": {
                     "serverIdAttr": "server-id", # maps to augments server-id
@@ -1363,7 +1367,7 @@
                      ])
             )
 
-            # Resource with delegates and autoSchedule = True
+            # Resource with delegates, autoSchedule = True, and autoAcceptGroup
 
             dn = "cn=odtestresource,cn=resources,dc=example,dc=com"
             guid = 'D3094652-344B-4633-8DB8-09639FA00FB6'
@@ -1382,6 +1386,8 @@
 <string>6C6CD280-E6E3-11DF-9492-0800200C9A66</string>
 <key>ReadOnlyCalendaringDelegate</key>
 <string>6AA1AE12-592F-4190-A069-547CD83C47C0</string>
+<key>AutoAcceptGroup</key>
+<string>77A8EB52-AA2A-42ED-8843-B2BEE863AC70</string>
 </dict>
 </dict>
 </plist>"""]
@@ -1394,6 +1400,8 @@
             self.assertEquals(record.externalReadOnlyProxies(),
                 set(['6AA1AE12-592F-4190-A069-547CD83C47C0']))
             self.assertTrue(record.autoSchedule)
+            self.assertEquals(record.autoAcceptGroup,
+                '77A8EB52-AA2A-42ED-8843-B2BEE863AC70')
 
             # Resource with no delegates and autoSchedule = False
 
@@ -1422,6 +1430,7 @@
             self.assertEquals(record.externalReadOnlyProxies(),
                 set())
             self.assertFalse(record.autoSchedule)
+            self.assertEquals(record.autoAcceptGroup, "")
 
 
             # Now switch off the resourceInfoAttr and switch to individual
@@ -1432,6 +1441,7 @@
                 "autoScheduleEnabledValue" : "yes",
                 "proxyAttr" : "proxy",
                 "readOnlyProxyAttr" : "read-only-proxy",
+                "autoAcceptGroupAttr" : "auto-accept-group",
             }
 
             # Resource with delegates and autoSchedule = True
@@ -1444,6 +1454,7 @@
                 'auto-schedule' : ['yes'],
                 'proxy' : ['6C6CD280-E6E3-11DF-9492-0800200C9A66'],
                 'read-only-proxy' : ['6AA1AE12-592F-4190-A069-547CD83C47C0'],
+                'auto-accept-group' : ['77A8EB52-AA2A-42ED-8843-B2BEE863AC70'],
             }
             record = self.service._ldapResultToRecord(dn, attrs,
                 self.service.recordType_resources)
@@ -1453,6 +1464,8 @@
             self.assertEquals(record.externalReadOnlyProxies(),
                 set(['6AA1AE12-592F-4190-A069-547CD83C47C0']))
             self.assertTrue(record.autoSchedule)
+            self.assertEquals(record.autoAcceptGroup,
+                '77A8EB52-AA2A-42ED-8843-B2BEE863AC70')
 
         def test_listRecords(self):
             """

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_principal.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_principal.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_principal.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -560,7 +560,7 @@
         """
         DirectoryPrincipalResource.canAutoSchedule()
         """
-        
+
         # Set all resources and locations to auto-schedule, plus one user
         for provisioningResource, recordType, recordResource, record in self._allRecords():
             if record.enabledForCalendaring:
@@ -590,6 +590,27 @@
             if record.enabledForCalendaring:
                 self.assertFalse(recordResource.canAutoSchedule())
 
+
+    def test_canAutoScheduleAutoAcceptGroup(self):
+        """
+        DirectoryPrincipalResource.canAutoSchedule(organizer)
+        """
+
+        # Location "apollo" has an auto-accept group ("both_coasts") set in augments.xml,
+        # therefore any organizer in that group should be able to auto schedule
+
+        for provisioningResource, recordType, recordResource, record in self._allRecords():
+            if record.uid == "apollo":
+
+                # No organizer
+                self.assertFalse(recordResource.canAutoSchedule())
+
+                # Organizer in auto-accept group
+                self.assertTrue(recordResource.canAutoSchedule(organizer="mailto:wsanchez at example.com"))
+                # Organizer not in auto-accept group
+                self.assertFalse(recordResource.canAutoSchedule(organizer="mailto:a at example.com"))
+
+
     @inlineCallbacks
     def test_defaultAccessControlList_principals(self):
         """

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_xmlfile.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_xmlfile.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/test/test_xmlfile.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -25,6 +25,11 @@
 # FIXME: Add tests for GUID hooey, once we figure out what that means here
 
 class XMLFileBase(object):
+    """
+    L{XMLFileBase} is a base/mix-in object for testing L{XMLDirectoryService}
+    (or things that depend on L{IDirectoryService} and need a simple
+    implementation to use).
+    """
     recordTypes = set((
         DirectoryService.recordType_users,
         DirectoryService.recordType_groups,
@@ -44,30 +49,30 @@
     }
 
     groups = {
-        "admin"      : { "password": "admin",       "guid": None, "addresses": (), "members": ((DirectoryService.recordType_groups, "managers"),)                                      },
-        "managers"   : { "password": "managers",    "guid": None, "addresses": (), "members": ((DirectoryService.recordType_users , "lecroy"),)                                         },
+        "admin"      : { "password": "admin",       "guid": None, "addresses": (), "members": ((DirectoryService.recordType_groups, "managers"),)},
+        "managers"   : { "password": "managers",    "guid": None, "addresses": (), "members": ((DirectoryService.recordType_users , "lecroy"),)},
         "grunts"     : { "password": "grunts",      "guid": None, "addresses": (), "members": ((DirectoryService.recordType_users , "wsanchez"),
                                                                                                (DirectoryService.recordType_users , "cdaboo"),
-                                                                                               (DirectoryService.recordType_users , "dreid")) },
-        "right_coast": { "password": "right_coast", "guid": None, "addresses": (), "members": ((DirectoryService.recordType_users , "cdaboo"),)                                         },
+                                                                                               (DirectoryService.recordType_users , "dreid"))},
+        "right_coast": { "password": "right_coast", "guid": None, "addresses": (), "members": ((DirectoryService.recordType_users , "cdaboo"),)},
         "left_coast" : { "password": "left_coast",  "guid": None, "addresses": (), "members": ((DirectoryService.recordType_users , "wsanchez"),
                                                                                                (DirectoryService.recordType_users , "dreid"),
-                                                                                               (DirectoryService.recordType_users , "lecroy")) },
+                                                                                               (DirectoryService.recordType_users , "lecroy"))},
         "both_coasts": { "password": "both_coasts", "guid": None, "addresses": (), "members": ((DirectoryService.recordType_groups, "right_coast"),
-                                                                                               (DirectoryService.recordType_groups, "left_coast"))           },
+                                                                                               (DirectoryService.recordType_groups, "left_coast"))},
         "recursive1_coasts":  { "password": "recursive1_coasts",  "guid": None, "addresses": (), "members": ((DirectoryService.recordType_groups, "recursive2_coasts"),
-                                                                                               (DirectoryService.recordType_users, "wsanchez"))           },
+                                                                                               (DirectoryService.recordType_users, "wsanchez"))},
         "recursive2_coasts":  { "password": "recursive2_coasts",  "guid": None, "addresses": (), "members": ((DirectoryService.recordType_groups, "recursive1_coasts"),
-                                                                                               (DirectoryService.recordType_users, "cdaboo"))           },
+                                                                                               (DirectoryService.recordType_users, "cdaboo"))},
         "non_calendar_group": { "password": "non_calendar_group", "guid": None, "addresses": (), "members": ((DirectoryService.recordType_users , "cdaboo"),
-                                                                                               (DirectoryService.recordType_users , "lecroy"))           },
+                                                                                               (DirectoryService.recordType_users , "lecroy"))},
     }
 
     locations = {
         "mercury": { "password": "mercury", "guid": None, "addresses": ("mailto:mercury at example.com",) },
         "gemini" : { "password": "gemini",  "guid": None, "addresses": ("mailto:gemini at example.com",)  },
         "apollo" : { "password": "apollo",  "guid": None, "addresses": ("mailto:apollo at example.com",)  },
-        "orion"  : { "password": "orion",   "guid": None, "addresses": ("mailto:orion at example.com",)  },
+        "orion"  : { "password": "orion",   "guid": None, "addresses": ("mailto:orion at example.com",)   },
     }
 
     resources = {
@@ -77,17 +82,53 @@
     }
 
     def xmlFile(self):
+        """
+        Create a L{FilePath} that points to a temporary file containing a copy
+        of C{twistedcaldav/directory/test/accounts.xml}.
+
+        @see: L{xmlFile}
+
+        @rtype: L{FilePath}
+        """
         if not hasattr(self, "_xmlFile"):
             self._xmlFile = FilePath(self.mktemp())
             xmlFile.copyTo(self._xmlFile)
         return self._xmlFile
 
+
     def augmentsFile(self):
+        """
+        Create a L{FilePath} that points to a temporary file containing a copy
+        of C{twistedcaldav/directory/test/augments.xml}.
+
+        @see: L{augmentsFile}
+
+        @rtype: L{FilePath}
+        """
         if not hasattr(self, "_augmentsFile"):
             self._augmentsFile = FilePath(self.mktemp())
             augmentsFile.copyTo(self._augmentsFile)
         return self._augmentsFile
 
+
+    def service(self):
+        """
+        Create an L{XMLDirectoryService} based on the contents of the paths
+        returned by L{XMLFileBase.augmentsFile} and L{XMLFileBase.xmlFile}.
+
+        @rtype: L{XMLDirectoryService}
+        """
+        return XMLDirectoryService(
+            {
+                'xmlFile': self.xmlFile(),
+                'augmentService':
+                    augment.AugmentXMLDB(xmlFiles=(self.augmentsFile().path,)),
+            },
+            alwaysStat=True
+        )
+
+
+
 class XMLFile (
     XMLFileBase,
     twistedcaldav.directory.test.util.BasicTestCase,
@@ -96,16 +137,6 @@
     """
     Test XML file based directory implementation.
     """
-    def service(self):
-        directory = XMLDirectoryService(
-            {
-                'xmlFile' : self.xmlFile(),
-                'augmentService' :
-                   augment.AugmentXMLDB(xmlFiles=(self.augmentsFile().path,)),
-            },
-            alwaysStat=True
-        )
-        return directory
 
     def test_changedXML(self):
         service = self.service()

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/xmlaugmentsparser.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/xmlaugmentsparser.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/directory/xmlaugmentsparser.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -43,6 +43,7 @@
 ELEMENT_ENABLELOGIN       = "enable-login"
 ELEMENT_AUTOSCHEDULE      = "auto-schedule"
 ELEMENT_AUTOSCHEDULE_MODE = "auto-schedule-mode"
+ELEMENT_AUTOACCEPTGROUP   = "auto-accept-group"
 
 ATTRIBUTE_REPEAT          = "repeat"
 
@@ -60,6 +61,7 @@
     ELEMENT_ENABLELOGIN:       "enabledForLogin",
     ELEMENT_AUTOSCHEDULE:      "autoSchedule",
     ELEMENT_AUTOSCHEDULE_MODE: "autoScheduleMode",
+    ELEMENT_AUTOACCEPTGROUP:   "autoAcceptGroup",
 }
 
 class XMLAugmentsParser(object):
@@ -103,6 +105,7 @@
                     ELEMENT_PARTITIONID,
                     ELEMENT_HOSTEDAT,
                     ELEMENT_AUTOSCHEDULE_MODE,
+                    ELEMENT_AUTOACCEPTGROUP,
                 ):
                     fields[node.tag] = node.text if node.text else ""
                 elif node.tag in (

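The parser maps the new auto-accept-group element to the autoAcceptGroup field like the other simple text elements. A hedged sketch of the parse with a minimal record, using cElementTree directly; the real parser adds validation and repeat handling:

    from xml.etree.cElementTree import XML

    record = XML(
        "<record>"
        "<uid>apollo</uid>"
        "<auto-accept-group>both_coasts</auto-accept-group>"
        "</record>"
    )
    # Mirrors the parser's fields[node.tag] = node.text if node.text else ""
    fields = dict((node.tag, node.text or "") for node in record)
    assert fields["auto-accept-group"] == "both_coasts"
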
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/scheduling/processing.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/scheduling/processing.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/scheduling/processing.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -500,8 +500,12 @@
             new_calendar = iTipProcessing.processNewRequest(self.message, self.recipient.cuaddr, creating=True)
 
             # Handle auto-reply behavior
-            if self.recipient.principal.canAutoSchedule():
-                send_reply, store_inbox, partstat = (yield self.checkAttendeeAutoReply(new_calendar, self.recipient.principal.getAutoScheduleMode()))
+            organizer = normalizeCUAddr(self.message.getOrganizer())
+            if self.recipient.principal.canAutoSchedule(organizer=organizer):
+                # auto schedule mode can depend on who the organizer is
+                mode = self.recipient.principal.getAutoScheduleMode(organizer=organizer)
+                send_reply, store_inbox, partstat = (yield self.checkAttendeeAutoReply(new_calendar,
+                    mode))
 
                 # Only store inbox item when reply is not sent or always for users
                 store_inbox = store_inbox or self.recipient.principal.getCUType() == "INDIVIDUAL"
@@ -533,8 +537,12 @@
             if new_calendar:
 
                 # Handle auto-reply behavior
-                if self.recipient.principal.canAutoSchedule():
-                    send_reply, store_inbox, partstat = (yield self.checkAttendeeAutoReply(new_calendar, self.recipient.principal.getAutoScheduleMode()))
+                organizer = normalizeCUAddr(self.message.getOrganizer())
+                if self.recipient.principal.canAutoSchedule(organizer=organizer):
+                    # auto schedule mode can depend on who the organizer is
+                    mode = self.recipient.principal.getAutoScheduleMode(organizer=organizer)
+                    send_reply, store_inbox, partstat = (yield self.checkAttendeeAutoReply(new_calendar,
+                        mode))
 
                     # Only store inbox item when reply is not sent or always for users
                     store_inbox = store_inbox or self.recipient.principal.getCUType() == "INDIVIDUAL"

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/stdconfig.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/stdconfig.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/stdconfig.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -285,6 +285,8 @@
 
     "SpawnedDBUser" : "caldav", # The username to use when DBType is empty
 
+    "DBImportFile" : "", # File path to SQL file to import at startup (includes schema)
+
     "DSN"          : "", # Data Source Name.  Used to connect to an external
                            # database if DBType is non-empty.  Format varies
                            # depending on database type.

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/test/test_xmlutil.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/test/test_xmlutil.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/test/test_xmlutil.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -17,7 +17,7 @@
 import twistedcaldav.test.util
 from cStringIO import StringIO
 from twistedcaldav.xmlutil import readXML, writeXML, addSubElement,\
-    changeSubElementText
+    changeSubElementText, createElement, elementToXML, readXMLString
 
 class XMLUtil(twistedcaldav.test.util.TestCase):
     """
@@ -139,3 +139,14 @@
         changeSubElementText(root, "new", "new text")
         self._checkXML(root, XMLUtil.data6)
 
+
+    def test_emoji(self):
+        """
+        Verify we can serialize and parse unicode values above 0xFFFF
+        """
+        name = u"Emoji \U0001F604"
+        elem = createElement("test", text=name)
+        xmlString1 = elementToXML(elem)
+        parsed = readXMLString(xmlString1)[1]
+        xmlString2 = elementToXML(parsed)
+        self.assertEquals(xmlString1, xmlString2)

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/test/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/test/util.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/test/util.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -45,7 +45,11 @@
 from txdav.common.datastore.test.util import deriveQuota
 from txdav.common.datastore.file import CommonDataStore
 
+from twext.python.log import Logger
 
+log = Logger()
+
+
 __all__ = [
     "featureUnimplemented",
     "testUnimplemented",
@@ -633,6 +637,7 @@
         self.input = inputData
         self.output = []
         self.error = []
+        self.terminated = False
 
 
     def connectionMade(self):
@@ -655,14 +660,18 @@
         """
         Some output was received on stderr.
         """
+        # Ignore the Postgres "NOTICE" output
+        if "NOTICE" in data:
+            return
+
         self.error.append(data)
+
         # Attempt to exit promptly if a traceback is displayed, so we don't
         # deal with timeouts.
-        lines = ''.join(self.error).split("\n")
-        if len(lines) > 1:
-            errorReportLine = lines[-2].split(": ", 1)
-            if len(errorReportLine) == 2 and ' ' not in errorReportLine[0] and '\t' not in errorReportLine[0]:
-                self.transport.signalProcess("TERM")
+        if "Traceback" in data and not self.terminated:
+            log.error("Terminating process due to output: %s" % (data,))
+            self.terminated = True
+            self.transport.signalProcess("TERM")
 
 
     def processEnded(self, why):

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/upgrade.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/upgrade.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/upgrade.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -756,6 +756,10 @@
 
     docRoot = config.DocumentRoot
 
+    if not os.path.exists(docRoot):
+        log.info("DocumentRoot (%s) doesn't exist; skipping migration" % (docRoot,))
+        return
+
     versionFilePath = os.path.join(docRoot, ".calendarserver_version")
 
     onDiskVersion = 0

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/xmlutil.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/xmlutil.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/twistedcaldav/xmlutil.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -56,7 +56,7 @@
     return etree, etree.getroot()
 
 def elementToXML(element):
-    return XML.tostring(element)
+    return XML.tostring(element, "utf-8")
 
 def writeXML(xmlfile, root):
     

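Passing an explicit encoding changes the serialization: with "utf-8", cElementTree emits an encoded byte string with an XML declaration, instead of an ASCII string in which non-ASCII text is escaped as character references. On narrow Python 2 builds, code points above 0xFFFF are stored as UTF-16 surrogate pairs, and the ASCII path escapes each surrogate separately, which is presumably what broke the emoji round-trip covered by test_emoji above. A small sketch:

    from xml.etree.cElementTree import Element, tostring

    elem = Element("test")
    elem.text = u"Emoji \U0001F604"
    serialized = tostring(elem, "utf-8")
    # The utf-8 output is a byte string with a declaration prepended.
    assert serialized.startswith("<?xml")
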
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/base/datastore/subpostgres.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/base/datastore/subpostgres.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/base/datastore/subpostgres.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -165,7 +165,8 @@
                  maxConnections=20, options=[],
                  testMode=False,
                  uid=None, gid=None,
-                 spawnedDBUser="caldav"):
+                 spawnedDBUser="caldav",
+                 importFileName=None):
         """
         Initialize a L{PostgresService} pointed at a data store directory.
 
@@ -175,6 +176,11 @@
         @param subServiceFactory: a 1-arg callable that will be called with a
             1-arg callable which returns a DB-API cursor.
         @type subServiceFactory: C{callable}
+
+        @param spawnedDBUser: the postgres role
+        @type spawnedDBUser: C{str}
+        @param importFileName: path to SQL file containing previous data to import
+        @type importFileName: C{str}
         """
 
         # FIXME: By default there is very little (4MB) shared memory available,
@@ -225,6 +231,7 @@
         self.uid = uid
         self.gid = gid
         self.spawnedDBUser = spawnedDBUser
+        self.importFileName = importFileName
         self.schema = schema
         self.monitor = None
         self.openConnections = []
@@ -281,6 +288,8 @@
     def ready(self):
         """
         Subprocess is ready.  Time to initialize the subservice.
+        If the database has not been created and there is a dump file,
+        then the dump file is imported.
         """
         createDatabaseConn = self.produceConnection(
             'schema creation', 'postgres'
@@ -301,20 +310,29 @@
                 "create database %s with encoding 'UTF8'" % (self.databaseName)
             )
         except:
-            execSchema = False
+            # database already exists
+            executeSQL = False
         else:
-            execSchema = True
+            # database does not yet exist; if a dump file exists, execute it,
+            # otherwise execute the schema
+            executeSQL = True
+            sqlToExecute = self.schema
+            if self.importFileName:
+                importFilePath = CachingFilePath(self.importFileName)
+                if importFilePath.exists():
+                    sqlToExecute = importFilePath.getContent()
 
         createDatabaseCursor.close()
         createDatabaseConn.close()
 
-        if execSchema:
+        if executeSQL:
             connection = self.produceConnection()
             cursor = connection.cursor()
-            cursor.execute(self.schema)
+            cursor.execute(sqlToExecute)
             connection.commit()
             connection.close()
 
+        # TODO: anyone know why these two lines are here?
         connection = self.produceConnection()
         cursor = connection.cursor()
 

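Note that ready() only consults the import file when the "create database" statement succeeds, i.e. on first startup against a fresh cluster; an existing database is never overwritten. The selection logic reduces to a sketch like this (the real code reads the file through CachingFilePath):

    import os

    def sql_to_execute(schema, import_file_name):
        # Prefer an operator-supplied dump, which already contains the
        # schema plus data, over the bundled schema definition.
        if import_file_name and os.path.exists(import_file_name):
            f = open(import_file_name)
            try:
                return f.read()
            finally:
                f.close()
        return schema
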
Copied: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/base/datastore/test/importFile.sql (from rev 10097, CalendarServer/trunk/txdav/base/datastore/test/importFile.sql)
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/base/datastore/test/importFile.sql	                        (rev 0)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/base/datastore/test/importFile.sql	2012-11-28 17:35:29 UTC (rev 10098)
@@ -0,0 +1,2 @@
+CREATE TABLE import_test_table (stub varchar);
+INSERT INTO import_test_table values ('value1');

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/base/datastore/test/test_subpostgres.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/base/datastore/test/test_subpostgres.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/base/datastore/test/test_subpostgres.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -132,3 +132,56 @@
         values = cursor.fetchall()
         self.assertEquals(values, [["dummy"]])
 
+    @inlineCallbacks
+    def test_startService_withDumpFile(self):
+        """
+        Assuming a properly configured environment ($PATH points at an 'initdb'
+        and 'postgres', $PYTHONPATH includes pgdb), starting a
+        L{PostgresService} will start the service passed to it, after importing
+        an existing dump file.
+        """
+
+        test = self
+        class SimpleService1(Service):
+
+            instances = []
+            ready = Deferred()
+
+            def __init__(self, connectionFactory):
+                self.connection = connectionFactory()
+                test.addCleanup(self.connection.close)
+                self.instances.append(self)
+
+
+            def startService(self):
+                cursor = self.connection.cursor()
+                try:
+                    cursor.execute(
+                        "insert into import_test_table values ('value2')"
+                    )
+                except:
+                    self.ready.errback()
+                else:
+                    self.ready.callback(None)
+                finally:
+                    cursor.close()
+
+        # The SQL in importFile.sql will get executed, including the insertion of "value1"
+        importFileName = CachingFilePath(__file__).parent().child("importFile.sql").path
+        svc = PostgresService(
+            CachingFilePath("postgres_3.pgdb"),
+            SimpleService1,
+            "",
+            databaseName="dummy_db",
+            testMode=True,
+            importFileName=importFileName
+        )
+        svc.startService()
+        self.addCleanup(svc.stopService)
+        yield SimpleService1.ready
+        connection = SimpleService1.instances[0].connection
+        cursor = connection.cursor()
+        cursor.execute("select * from import_test_table")
+        values = cursor.fetchall()
+        self.assertEquals(values, [["value1"],["value2"]])
+

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/datastore/test/common.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/datastore/test/common.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/datastore/test/common.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -49,12 +49,12 @@
     ICalendarObject, ICalendarHome,
     ICalendar, IAttachment, ICalendarTransaction)
 
-
 from twistedcaldav.customxml import InviteNotification, InviteSummary
 from txdav.caldav.icalendarstore import IAttachmentStorageTransport
 from txdav.caldav.icalendarstore import QuotaExceeded
-from txdav.common.datastore.test.util import deriveQuota
-from txdav.common.datastore.test.util import withSpecialQuota
+from txdav.common.datastore.test.util import (
+    deriveQuota, withSpecialQuota, transactionClean
+)
 from txdav.common.icommondatastore import ConcurrentModification
 from twistedcaldav.ical import Component
 from twistedcaldav.config import config
@@ -593,23 +593,6 @@
 
 
     @inlineCallbacks
-    def test_calendarHomes(self):
-        """
-        Finding all existing calendar homes.
-        """
-        calendarHomes = (yield self.transactionUnderTest().calendarHomes())
-        self.assertEquals(
-            [home.name() for home in calendarHomes],
-            [
-                "home1",
-                "home_no_splits",
-                "home_splits",
-                "home_splits_shared",
-            ]
-        )
-
-
-    @inlineCallbacks
     def test_displayNameNone(self):
         """
         L{ICalendarHome.calendarWithName} returns C{None} for calendars which
@@ -2272,31 +2255,64 @@
 
 
     @inlineCallbacks
-    def test_eachCalendarHome(self):
+    def test_withEachCalendarHomeDo(self):
         """
-        L{ICalendarTransaction.eachCalendarHome} returns an iterator that
-        yields 2-tuples of (transaction, home).
+        L{ICalendarStore.withEachCalendarHomeDo} executes its C{action}
+        argument repeatedly with all homes that have been created.
         """
-        # create some additional calendar homes
         additionalUIDs = set('alpha-uid home2 home3 beta-uid'.split())
         txn = self.transactionUnderTest()
         for name in additionalUIDs:
-            # maybe it's not actually necessary to yield (i.e. wait) for each
-            # one?  commit() should wait for all of them.
             yield txn.calendarHomeWithUID(name, create=True)
         yield self.commit()
-        foundUIDs = set([])
-        lastTxn = None
-        for txn, home in (yield self.storeUnderTest().eachCalendarHome()):
-            self.addCleanup(txn.commit)
-            foundUIDs.add(home.uid())
-            self.assertNotIdentical(lastTxn, txn)
-            lastTxn = txn
-        requiredUIDs = set([
-            uid for uid in self.requirements
-            if self.requirements[uid] is not None
-        ])
-        additionalUIDs.add("home_bad")
-        additionalUIDs.add("home_attachments")
-        expectedUIDs = additionalUIDs.union(requiredUIDs)
-        self.assertEquals(foundUIDs, expectedUIDs)
+        store = yield self.storeUnderTest()
+        def toEachCalendarHome(txn, eachHome):
+            return eachHome.createCalendarWithName("a-new-calendar")
+        result = yield store.withEachCalendarHomeDo(toEachCalendarHome)
+        self.assertEquals(result, None)
+        txn2 = self.transactionUnderTest()
+        for uid in additionalUIDs:
+            home = yield txn2.calendarHomeWithUID(uid)
+            self.assertNotIdentical(
+                None, (yield home.calendarWithName("a-new-calendar"))
+            )
+
+
+    @transactionClean
+    @inlineCallbacks
+    def test_withEachCalendarHomeDont(self):
+        """
+        When the function passed to L{ICalendarStore.withEachCalendarHomeDo}
+        raises an exception, processing is halted and the transaction is
+        aborted.  The exception is re-raised.
+        """
+        # create some calendar homes.
+        additionalUIDs = set('home2 home3'.split())
+        txn = self.transactionUnderTest()
+        for uid in additionalUIDs:
+            yield txn.calendarHomeWithUID(uid, create=True)
+        yield self.commit()
+        # try to create a calendar in all of them, then fail.
+        class AnException(Exception): pass
+        caught = []
+        @inlineCallbacks
+        def toEachCalendarHome(txn, eachHome):
+            caught.append(eachHome.uid())
+            yield eachHome.createCalendarWithName("wont-be-created")
+            raise AnException()
+        store = self.storeUnderTest()
+        yield self.failUnlessFailure(
+            store.withEachCalendarHomeDo(toEachCalendarHome), AnException
+        )
+        self.assertEquals(len(caught), 1)
+        @inlineCallbacks
+        def noNewCalendar(uid):
+            home = yield txn.calendarHomeWithUID(uid, create=False)
+            self.assertIdentical(
+                (yield home.calendarWithName("wont-be-created")), None
+            )
+        txn = self.transactionUnderTest()
+        yield noNewCalendar(caught[0])
+        yield noNewCalendar('home2')
+        yield noNewCalendar('home3')
+

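The contract these tests exercise is that C{action} receives the store's
transaction plus one home per call, and must not commit or abort that
transaction itself.  A minimal sketch of a conforming action, assuming the
existing C{calendars()} home accessor (the function name here is
hypothetical):

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def countCalendars(txn, home):
        # Called once per provisioned home; the store owns txn's lifecycle,
        # so this function never commits or aborts.
        calendars = yield home.calendars()
        print "%s has %d calendars" % (home.uid(), len(calendars))

    # yield store.withEachCalendarHomeDo(countCalendars)
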
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/datastore/test/test_file.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/datastore/test/test_file.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/datastore/test/test_file.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -502,25 +502,6 @@
 
 
     @inlineCallbacks
-    def test_calendarHomes(self):
-        """
-        Finding all existing calendar homes.
-        """
-        calendarHomes = (yield self.transactionUnderTest().calendarHomes())
-        self.assertEquals(
-            [home.name() for home in calendarHomes],
-            [
-                "home1",
-                "home_attachments",
-                "home_bad",
-                "home_no_splits",
-                "home_splits",
-                "home_splits_shared",
-            ]
-        )
-
-
-    @inlineCallbacks
     def test_calendarObjectsWithDotFile(self):
         """
         Adding a dotfile to the calendar home should not increase the number of

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/datastore/test/test_sql.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/datastore/test/test_sql.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -516,21 +516,8 @@
             Where=ch.OWNER_UID == "home_version",
         ).on(txn)[0][0]
         self.assertEqual(int(homeVersion), version)
-        
-        
 
-    def test_eachCalendarHome(self):
-        """
-        L{ICalendarStore.eachCalendarHome} is currently stubbed out by
-        L{txdav.common.datastore.sql.CommonDataStore}.
-        """
-        return super(CalendarSQLStorageTests, self).test_eachCalendarHome()
 
-
-    test_eachCalendarHome.todo = (
-        "stubbed out, as migration only needs to go from file->sql currently")
-
-
     @inlineCallbacks
     def test_homeProvisioningConcurrency(self):
         """

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/icalendarstore.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/icalendarstore.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/caldav/icalendarstore.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -109,14 +109,6 @@
     Transaction functionality required to be implemented by calendar stores.
     """
 
-    def calendarHomes():
-        """
-        Retrieve each calendar home in the store.
-
-        @return: a L{Deferred} which fires with a list of L{ICalendarHome}.
-        """
-
-
     def calendarHomeWithUID(uid, create=False):
         """
         Retrieve the calendar home for the principal with the given C{uid}.
@@ -135,14 +127,40 @@
     API root for calendar data storage.
     """
 
-    def eachCalendarHome(self):
+    def withEachCalendarHomeDo(action, batchSize=None):
         """
-        Enumerate all calendar homes in this store, with each one in an
-        accompanying transaction.
+        Execute a given action with each calendar home present in this store,
+        in serial, committing after each batch of homes of a given size.
 
-        @return: an iterator of 2-tuples of C{(transaction, calendar home)}
-            where C{transaction} is an L{ITransaction} provider and C{calendar
-            home} is an L{ICalendarHome} provider.
+        @note: This does not execute an action with each directory principal
+            for which there might be a calendar home; it works only on calendar
+            homes which have already been provisioned.  To execute an action on
+            every possible calendar user, you will need to inspect the
+            directory API instead.
+
+        @note: The list of calendar homes is loaded incrementally, so this will
+            not necessarily present a consistent snapshot of the entire
+            database at a particular moment.  (If this behavior is desired,
+            pass a C{batchSize} greater than the number of homes in the
+            database.)
+
+        @param action: a 2-argument callable, taking an L{ICalendarTransaction}
+            and an L{ICalendarHome}, and returning a L{Deferred} that fires
+            with C{None} when complete.  Note that C{action} should not commit
+            or abort the given L{ICalendarTransaction}.  If C{action} completes
+            normally, then it will be called again with the next
+            L{ICalendarHome}.  If it raises an exception or returns a
+            L{Deferred} that fails, processing will stop and the L{Deferred}
+            returned from C{withEachCalendarHomeDo} will fail with that same
+            L{Failure}.
+        @type action: L{callable}
+
+        @param batchSize: The maximum count of calendar homes to include in a
+            single transaction.
+        @type batchSize: L{int}
+
+        @return: a L{Deferred} which fires with L{None} when all homes have
+            completed processing, or fails with the L{Failure} from C{action}.
         """
 
 

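Taken together, the interface above amounts to a serial visitor over already
provisioned homes.  A usage sketch against any store providing it (the helper
name is hypothetical; C{succeed} satisfies the documented requirement that
C{action} return a L{Deferred} firing with C{None}):

    from twisted.internet.defer import inlineCallbacks, returnValue, succeed

    @inlineCallbacks
    def allCalendarHomeUids(store):
        uids = []
        def collect(txn, home):
            uids.append(home.uid())
            return succeed(None)  # do not commit or abort txn here
        yield store.withEachCalendarHomeDo(collect, batchSize=100)
        returnValue(uids)
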
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/datastore/test/common.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/datastore/test/common.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/datastore/test/common.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -234,20 +234,6 @@
 
 
     @inlineCallbacks
-    def test_addressbookHomes(self):
-        """
-        Finding all existing addressbook homes.
-        """
-        addressbookHomes = (yield self.transactionUnderTest().addressbookHomes())
-        self.assertEquals(
-            [home.name() for home in addressbookHomes],
-            [
-                "home1",
-            ]
-        )
-
-
-    @inlineCallbacks
     def test_addressbookHomeWithUID_exists(self):
         """
         Finding an existing addressbook home by UID results in an object that
@@ -967,29 +953,3 @@
                 (yield addressbook2.addressbookObjectWithUID(obj.uid())), None)
 
 
-    @inlineCallbacks
-    def test_eachAddressbookHome(self):
-        """
-        L{IAddressbookTransaction.eachAddressbookHome} returns an iterator that
-        yields 2-tuples of (transaction, home).
-        """
-        # create some additional addressbook homes
-        additionalUIDs = set('alpha-uid home2 home3 beta-uid'.split())
-        txn = self.transactionUnderTest()
-        for name in additionalUIDs:
-            yield txn.addressbookHomeWithUID(name, create=True)
-        yield self.commit()
-        foundUIDs = set([])
-        lastTxn = None
-        for txn, home in (yield self.storeUnderTest().eachAddressbookHome()):
-            self.addCleanup(txn.commit)
-            foundUIDs.add(home.uid())
-            self.assertNotIdentical(lastTxn, txn)
-            lastTxn = txn
-        requiredUIDs = set([
-            uid for uid in self.requirements
-            if self.requirements[uid] is not None
-        ])
-        additionalUIDs.add("home_bad")
-        expectedUIDs = additionalUIDs.union(requiredUIDs)
-        self.assertEquals(foundUIDs, expectedUIDs)

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/datastore/test/test_file.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/datastore/test/test_file.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/datastore/test/test_file.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -472,21 +472,6 @@
 
 
     @inlineCallbacks
-    def test_addressbookHomes(self):
-        """
-        Finding all existing addressbook homes.
-        """
-        addressbookHomes = (yield self.transactionUnderTest().addressbookHomes())
-        self.assertEquals(
-            [home.name() for home in addressbookHomes],
-            [
-                "home1",
-                "home_bad",
-            ]
-        )
-
-
-    @inlineCallbacks
     def test_addressbookObjectsWithDotFile(self):
         """
         Adding a dotfile to the addressbook home should not create a new

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/datastore/test/test_sql.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/datastore/test/test_sql.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -206,11 +206,11 @@
         The DATAVERSION column for new calendar homes must match the
         ADDRESSBOOK-DATAVERSION value.
         """
-        
+
         home = yield self.transactionUnderTest().addressbookHomeWithUID("home_version")
         self.assertTrue(home is not None)
         yield self.transactionUnderTest().commit()
-        
+
         txn = yield self.transactionUnderTest()
         version = yield txn.calendarserverValue("ADDRESSBOOK-DATAVERSION")[0][0]
         ch = schema.ADDRESSBOOK_HOME
@@ -220,21 +220,8 @@
             Where=ch.OWNER_UID == "home_version",
         ).on(txn)[0][0]
         self.assertEqual(int(homeVersion), version)
-        
-        
 
-    def test_eachAddressbookHome(self):
-        """
-        L{IAddressbookStore.eachAddressbookHome} is currently stubbed out by
-        L{txdav.common.datastore.sql.CommonDataStore}.
-        """
-        return super(AddressBookSQLStorageTests, self).test_eachAddressbookHome()
 
-
-    test_eachAddressbookHome.todo = (
-        "stubbed out, as migration only needs to go from file->sql currently")
-
-
     @inlineCallbacks
     def test_putConcurrency(self):
         """

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/iaddressbookstore.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/iaddressbookstore.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/carddav/iaddressbookstore.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -37,14 +37,6 @@
     Transaction interface that addressbook stores must provide.
     """
 
-    def addressbookHomes():
-        """
-        Retrieve each addressbook home in the store.
-
-        @return: a L{Deferred} which fires with a list of L{ICalendarHome}.
-        """
-
-
     def addressbookHomeWithUID(uid, create=False):
         """
         Retrieve the addressbook home for the principal with the given C{uid}.

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/file.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/file.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/file.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -19,6 +19,7 @@
 Common utility functions for a file based datastore.
 """
 
+import sys
 from twext.internet.decorate import memoizedKey
 from twext.python.log import LoggingMixIn
 from txdav.xml.rfc2518 import ResourceType, GETContentType, HRef
@@ -132,6 +133,38 @@
         )
 
 
+    @inlineCallbacks
+    def _withEachHomeDo(self, enumerator, action, batchSize):
+        """
+        Implementation of L{ICalendarStore.withEachCalendarHomeDo} and
+        L{IAddressBookStore.withEachAddressbookHomeDo}.
+        """
+        for txn, home in enumerator():
+            try:
+                yield action(txn, home)
+            except:
+                a, b, c = sys.exc_info()
+                yield txn.abort()
+                raise a, b, c
+            else:
+                yield txn.commit()
+
+
+    def withEachCalendarHomeDo(self, action, batchSize=None):
+        """
+        Implementation of L{ICalendarStore.withEachCalendarHomeDo}.
+        """
+        return self._withEachHomeDo(self._eachCalendarHome, action, batchSize)
+
+
+    def withEachAddressbookHomeDo(self, action, batchSize=None):
+        """
+        Implementation of L{IAddressbookStore.withEachAddressbookHomeDo}.
+        """
+        return self._withEachHomeDo(self._eachAddressbookHome, action,
+                                    batchSize)
+
+
     def setMigrating(self, state):
         """
         Set the "migrating" state
@@ -149,9 +182,9 @@
 
     def _homesOfType(self, storeType):
         """
-        Common implementation of L{ICalendarStore.eachCalendarHome} and
-        L{IAddressBookStore.eachAddressbookHome}; see those for a description
-        of the return type.
+        Common implementation of L{_eachCalendarHome} and
+        L{_eachAddressbookHome}; see those for a description of the return
+        type.
 
         @param storeType: one of L{EADDRESSBOOKTYPE} or L{ECALENDARTYPE}.
         """
@@ -172,11 +205,11 @@
                         yield (txn, home)
 
 
-    def eachCalendarHome(self):
+    def _eachCalendarHome(self):
         return self._homesOfType(ECALENDARTYPE)
 
 
-    def eachAddressbookHome(self):
+    def _eachAddressbookHome(self):
         return self._homesOfType(EADDRESSBOOKTYPE)
 
 
@@ -228,18 +261,10 @@
         CommonStoreTransaction._homeClass[EADDRESSBOOKTYPE] = AddressBookHome
 
 
-    def calendarHomes(self):
-        return self.homes(ECALENDARTYPE)
-
-
     def calendarHomeWithUID(self, uid, create=False):
         return self.homeWithUID(ECALENDARTYPE, uid, create=create)
 
 
-    def addressbookHomes(self):
-        return self.homes(EADDRESSBOOKTYPE)
-
-
     def addressbookHomeWithUID(self, uid, create=False):
         return self.homeWithUID(EADDRESSBOOKTYPE, uid, create=create)
 

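The new file-store loop uses a standard Python 2 idiom for transactional
cleanup: capture C{sys.exc_info()} before any further code (such as
C{abort()}) can overwrite it, then re-raise with the original traceback
intact.  The idiom in isolation (hypothetical C{txn} and C{work}):

    import sys

    def runThenCommit(txn, work):
        try:
            work()
        except:
            a, b, c = sys.exc_info()  # capture before abort() runs more code
            txn.abort()
            raise a, b, c  # Python 2 three-argument raise keeps the traceback
        else:
            txn.commit()
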
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/sql.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/sql.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -25,6 +25,8 @@
     "CommonHome",
 ]
 
+import sys
+
 from uuid import uuid4, UUID
 
 from zope.interface import implements, directlyProvides
@@ -180,20 +182,48 @@
             self.queryCacher = None
 
 
-    def eachCalendarHome(self):
+    @inlineCallbacks
+    def _withEachHomeDo(self, homeTable, homeFromTxn, action, batchSize):
         """
-        @see: L{ICalendarStore.eachCalendarHome}
+        Implementation of L{ICalendarStore.withEachCalendarHomeDo} and
+        L{IAddressbookStore.withEachAddressbookHomeDo}.
         """
-        return []
+        txn = yield self.newTransaction()
+        try:
+            allUIDs = yield (Select([homeTable.OWNER_UID], From=homeTable)
+                             .on(txn))
+            for [uid] in allUIDs:
+                yield action(txn, (yield homeFromTxn(txn, uid)))
+        except:
+            a, b, c = sys.exc_info()
+            yield txn.abort()
+            raise a, b, c
+        else:
+            yield txn.commit()
 
 
-    def eachAddressbookHome(self):
+    def withEachCalendarHomeDo(self, action, batchSize=None):
         """
-        @see: L{IAddressbookStore.eachAddressbookHome}
+        Implementation of L{ICalendarStore.withEachCalendarHomeDo}.
         """
-        return []
+        return self._withEachHomeDo(
+            schema.CALENDAR_HOME,
+            lambda txn, uid: txn.calendarHomeWithUID(uid),
+            action, batchSize
+        )
 
 
+    def withEachAddressbookHomeDo(self, action, batchSize=None):
+        """
+        Implementation of L{IAddressbookStore.withEachAddressbookHomeDo}.
+        """
+        return self._withEachHomeDo(
+            schema.ADDRESSBOOK_HOME,
+            lambda txn, uid: txn.addressbookHomeWithUID(uid),
+            action, batchSize
+        )
+
+
     def newTransaction(self, label="unlabeled", disableCache=False):
         """
         @see: L{IDataStore.newTransaction}
@@ -467,18 +497,10 @@
         raise RuntimeError("Database key %s cannot be determined." % (key,))
 
 
-    def calendarHomes(self):
-        return self.homes(ECALENDARTYPE)
-
-
     def calendarHomeWithUID(self, uid, create=False):
         return self.homeWithUID(ECALENDARTYPE, uid, create=create)
 
 
-    def addressbookHomes(self):
-        return self.homes(EADDRESSBOOKTYPE)
-
-
     def addressbookHomeWithUID(self, uid, create=False):
         return self.homeWithUID(EADDRESSBOOKTYPE, uid, create=create)
 

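In the SQL store, enumeration is a single C{Select} over the home table's
C{OWNER_UID} column, with each home rehydrated through the same transaction;
C{batchSize} is accepted but, as in the code above, a single transaction is
used.  A condensed standalone sketch, assuming this codebase's C{schema}
object from txdav.common.datastore.sql_tables (the function name is
hypothetical):

    from twext.enterprise.dal.syntax import Select
    from twisted.internet.defer import inlineCallbacks
    from txdav.common.datastore.sql_tables import schema

    @inlineCallbacks
    def printCalendarHomeUids(store):
        txn = yield store.newTransaction()
        ch = schema.CALENDAR_HOME
        # One query for all owner UIDs; each row is a single-element list.
        rows = yield Select([ch.OWNER_UID], From=ch).on(txn)
        for [uid] in rows:
            print uid
        yield txn.commit()
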
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/sql_schema/current.sql
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/sql_schema/current.sql	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/sql_schema/current.sql	2012-11-28 17:35:29 UTC (rev 10098)
@@ -35,7 +35,7 @@
   PORT      integer not null,
   TIME      timestamp not null default timezone('UTC', CURRENT_TIMESTAMP),
 
-  primary key(HOSTNAME, PORT)
+  primary key (HOSTNAME, PORT)
 );
 
 
@@ -201,7 +201,7 @@
   CREATED              timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
   MODIFIED             timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
 
-  unique(CALENDAR_RESOURCE_ID, RESOURCE_NAME) -- implicit index
+  unique (CALENDAR_RESOURCE_ID, RESOURCE_NAME) -- implicit index
 
   -- since the 'inbox' is a 'calendar resource' for the purpose of storing
   -- calendar objects, this constraint has to be selectively enforced by the
@@ -330,8 +330,8 @@
   MANAGED_ID                     varchar(255) not null,
   CALENDAR_OBJECT_RESOURCE_ID    integer      not null references CALENDAR_OBJECT on delete cascade,
 
-  primary key(ATTACHMENT_ID, CALENDAR_OBJECT_RESOURCE_ID), -- implicit index
-  unique(MANAGED_ID, CALENDAR_OBJECT_RESOURCE_ID) --implicit index
+  primary key (ATTACHMENT_ID, CALENDAR_OBJECT_RESOURCE_ID), -- implicit index
+  unique (MANAGED_ID, CALENDAR_OBJECT_RESOURCE_ID) -- implicit index
 );
 
 
@@ -345,7 +345,7 @@
   VALUE       text         not null, -- FIXME: xml?
   VIEWER_UID  varchar(255),
 
-  primary key(RESOURCE_ID, NAME, VIEWER_UID) -- implicit index
+  primary key (RESOURCE_ID, NAME, VIEWER_UID) -- implicit index
 );
 
 
@@ -410,8 +410,8 @@
   SEEN_BY_SHAREE               boolean      not null,
   MESSAGE                      text,                  -- FIXME: xml?
 
-  primary key(ADDRESSBOOK_HOME_RESOURCE_ID, ADDRESSBOOK_RESOURCE_ID), -- implicit index
-  unique(ADDRESSBOOK_HOME_RESOURCE_ID, ADDRESSBOOK_RESOURCE_NAME)     -- implicit index
+  primary key (ADDRESSBOOK_HOME_RESOURCE_ID, ADDRESSBOOK_RESOURCE_ID), -- implicit index
+  unique (ADDRESSBOOK_HOME_RESOURCE_ID, ADDRESSBOOK_RESOURCE_NAME)     -- implicit index
 );
 
 create index ADDRESSBOOK_BIND_RESOURCE_ID on
@@ -427,8 +427,8 @@
   CREATED                 timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
   MODIFIED                timestamp    default timezone('UTC', CURRENT_TIMESTAMP),
 
-  unique(ADDRESSBOOK_RESOURCE_ID, RESOURCE_NAME), -- implicit index
-  unique(ADDRESSBOOK_RESOURCE_ID, VCARD_UID)      -- implicit index
+  unique (ADDRESSBOOK_RESOURCE_ID, RESOURCE_NAME), -- implicit index
+  unique (ADDRESSBOOK_RESOURCE_ID, VCARD_UID)      -- implicit index
 );
 
 ---------------
@@ -510,7 +510,7 @@
   USER_AGENT                    varchar(255) default null,
   IP_ADDR                       varchar(255) default null,
 
-  primary key(TOKEN, RESOURCE_KEY) -- implicit index
+  primary key (TOKEN, RESOURCE_KEY) -- implicit index
 );
 
 create index APN_SUBSCRIPTIONS_RESOURCE_KEY

Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/test/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/test/util.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/test/util.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -197,7 +197,17 @@
         store.label = currentTestID
         cp.startService()
         def stopIt():
-            return cp.stopService()
+            # active transactions should have been shut down.
+            wasBusy = len(cp._busy)
+            busyText = repr(cp._busy)
+            stop = cp.stopService()
+            def checkWasBusy(ignored):
+                if wasBusy:
+                    testCase.fail("Outstanding Transactions: " + busyText)
+                return ignored
+            if deriveValue(testCase, _SPECIAL_TXN_CLEAN, lambda tc: False):
+                stop.addBoth(checkWasBusy)
+            return stop
         testCase.addCleanup(stopIt)
         yield self.cleanStore(testCase, store)
         returnValue(store)
@@ -255,13 +265,20 @@
     for that test.
 
     @param testCase: the test case instance.
+    @type testCase: L{TestCase}
 
     @param attribute: the name of the attribute (the same name passed to
         L{withSpecialValue}).
+    @type attribute: L{str}
 
     @param computeDefault: A 1-argument callable, which will be called with
         C{testCase} to compute a default value for the attribute for the given
         test if no custom one was specified.
+    @type computeDefault: L{callable}
+
+    @return: the value of the given C{attribute} for the given C{testCase}, as
+        decorated with C{withSpecialValue}.
+    @rtype: same type as the return type of L{computeDefault}
     """
     testID = testCase.id()
     testMethodName = testID.split(".")[-1]
@@ -300,6 +317,7 @@
 
 
 _SPECIAL_QUOTA = "__special_quota__"
+_SPECIAL_TXN_CLEAN = "__special_txn_clean__"
 
 
 
@@ -333,12 +351,29 @@
     Test method decorator that will cause L{deriveQuota} to return a different
     value for test cases that run that test method.
 
-    @see: withSpecialValue
+    @see: L{withSpecialValue}
     """
     return withSpecialValue(_SPECIAL_QUOTA, quotaValue)
 
 
 
+def transactionClean(f=None):
+    """
+    Test method decorator that will cause L{buildStore} to check that no
+    transactions were left outstanding at the end of the test, and fail the
+    test if they are outstanding rather than terminating them by shutting down
+    the connection pool service.
+
+    @see: L{withSpecialValue}
+    """
+    decorator = withSpecialValue(_SPECIAL_TXN_CLEAN, True)
+    if f:
+        return decorator(f)
+    else:
+        return decorator
+
+
+
 @inlineCallbacks
 def populateCalendarsFrom(requirements, store, migrating=False):
     """

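The decorator is written so it works both bare and called, via the optional
C{f} argument.  Typical use on a test method (the test body is a hypothetical
sketch):

    @transactionClean
    @inlineCallbacks
    def test_readOnlyLookup(self):
        txn = self.transactionUnderTest()
        home = yield txn.calendarHomeWithUID("home1")
        self.assertNotIdentical(home, None)
        # Leave no transaction outstanding, or buildStore fails the test.
        yield self.commit()
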
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/upgrade/migrate.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/upgrade/migrate.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/upgrade/migrate.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -223,6 +223,9 @@
                 lambda fileHome:
                 self.upgrader.migrateOneHome(fileTxn, homeType, fileHome)
             )
+            .addCallbacks(lambda ignored: fileTxn.commit(),
+                          lambda err: fileTxn.abort()
+                                      .addCallback(lambda ign: err))
             .addCallback(lambda ignored: {})
         )
 
@@ -343,7 +346,6 @@
                 "%s home %r already existed not migrating" % (
                     homeType, uid))
             yield sqlTxn.abort()
-            yield fileTxn.commit()
             returnValue(None)
         try:
             if sqlHome is None:
@@ -351,11 +353,9 @@
             yield migrateFunc(fileHome, sqlHome, merge=self.merge)
         except:
             f = Failure()
-            yield fileTxn.abort()
             yield sqlTxn.abort()
             f.raiseException()
         else:
-            yield fileTxn.commit()
             yield sqlTxn.commit()
             # Remove file home after migration. FIXME: instead, this should be a
             # public remove...HomeWithUID() API for de-provisioning.  (If we had
@@ -402,27 +402,20 @@
             )
             self.log_warn("Upgrade helpers ready.")
             parallelizer = Parallelizer(drivers)
+        else:
+            parallelizer = None
 
         self.log_warn("Beginning filesystem -> database upgrade.")
+
         for homeType, eachFunc in [
-                ("calendar", self.fileStore.eachCalendarHome),
-                ("addressbook", self.fileStore.eachAddressbookHome),
+                ("calendar", self.fileStore.withEachCalendarHomeDo),
+                ("addressbook", self.fileStore.withEachAddressbookHomeDo),
             ]:
-            for fileTxn, fileHome in eachFunc():
-                uid = fileHome.uid()
-                self.log_warn("Migrating %s UID %r" % (homeType, uid))
-                if parallel:
-                    # No-op transaction here: make sure everything's unlocked
-                    # before asking the subprocess to handle it.
-                    yield fileTxn.commit()
-                    @inlineCallbacks
-                    def doOneUpgrade(driver, fileUID=uid, homeType=homeType):
-                        yield driver.oneUpgrade(fileUID, homeType)
-                        self.log_warn("Completed migration of %s uid %r" %
-                                      (homeType, fileUID))
-                    yield parallelizer.do(doOneUpgrade)
-                else:
-                    yield self.migrateOneHome(fileTxn, homeType, fileHome)
+            yield eachFunc(
+                lambda txn, home: self._upgradeAction(
+                    txn, home, homeType, parallel, parallelizer
+                )
+            )
 
         if parallel:
             yield parallelizer.done()
@@ -458,6 +451,23 @@
             reactor.callLater(0, wrapped.setServiceParent, self.parent)
 
 
+    @inlineCallbacks
+    def _upgradeAction(self, fileTxn, fileHome, homeType, parallel,
+                       parallelizer):
+        uid = fileHome.uid()
+        self.log_warn("Migrating %s UID %r" % (homeType, uid))
+        if parallel:
+            @inlineCallbacks
+            def doOneUpgrade(driver, fileUID=uid, homeType=homeType):
+                yield driver.oneUpgrade(fileUID, homeType)
+                self.log_warn("Completed migration of %s uid %r" %
+                              (homeType, fileUID))
+            yield parallelizer.do(doOneUpgrade)
+        else:
+            yield self.migrateOneHome(fileTxn, homeType, fileHome)
+
+
+
     def startService(self):
         """
         Start the service.

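The C{addCallbacks} pair added to the upgrade helper is the Deferred analogue
of try/except/else: commit on success, abort on failure, and re-fire the
original C{Failure} so callers still see the error (returning a C{Failure}
from a callback switches the chain back to the errback path).  The pattern in
isolation (hypothetical names):

    def commitOrAbort(d, txn):
        # Success: commit, then restore the original result.
        # Failure: abort, then re-fire the original Failure.
        return d.addCallbacks(
            lambda result: txn.commit().addCallback(lambda ign: result),
            lambda err: txn.abort().addCallback(lambda ign: err),
        )
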
Modified: CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/upgrade/test/test_migrate.py
===================================================================
--- CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/upgrade/test/test_migrate.py	2012-11-28 15:47:54 UTC (rev 10097)
+++ CalendarServer/branches/users/cdaboo/managed-attachments/txdav/common/datastore/upgrade/test/test_migrate.py	2012-11-28 17:35:29 UTC (rev 10098)
@@ -156,7 +156,8 @@
         class StubService(Service, object):
             def startService(self):
                 super(StubService, self).startService()
-                subStarted.callback(None)
+                if not subStarted.called:
+                    subStarted.callback(None)
         from twisted.python import log
         def justOnce(evt):
             if evt.get('isError') and not hasattr(subStarted, 'result'):