[CalendarServer-changes] [12881] CalendarServer/branches/users/sagen/move2who-2

source_changes at macosforge.org
Wed Mar 12 11:49:18 PDT 2014


Revision: 12881
          http://trac.calendarserver.org//changeset/12881
Author:   sagen at apple.com
Date:     2014-03-12 11:49:18 -0700 (Wed, 12 Mar 2014)
Log Message:
-----------
Porting more over to twext.who; removed twistedcaldav/directory/directory.py
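
The hunks below follow one pattern: code that compared record types against
the old string constants from twistedcaldav/directory/directory.py (for
example DirectoryService.recordType_users) now goes through the twext.who
style of record-type constants exposed on the directory service itself (see
the self.directory.recordType.user usage in the addressbook.py hunk).  A
minimal sketch of the two styles, for orientation only; the
twext.who.idirectory import path is the framework's usual location for
RecordType but is an assumption here, not something shown in this changeset:

    # Old style (removed in this revision):
    #     from twistedcaldav.directory.directory import DirectoryService
    #     if record.recordType == DirectoryService.recordType_users:
    #         ...

    # twext.who style (sketch):
    from twext.who.idirectory import RecordType

    def isUserRecord(record):
        # recordType is a NamedConstant now, not a string
        return record.recordType is RecordType.user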

Modified Paths:
--------------
    CalendarServer/branches/users/sagen/move2who-2/calendarserver/accesslog.py
    CalendarServer/branches/users/sagen/move2who-2/calendarserver/tap/caldav.py
    CalendarServer/branches/users/sagen/move2who-2/calendarserver/tap/util.py
    CalendarServer/branches/users/sagen/move2who-2/calendarserver/tools/principals.py
    CalendarServer/branches/users/sagen/move2who-2/calendarserver/tools/util.py
    CalendarServer/branches/users/sagen/move2who-2/conf/auth/accounts-test.xml
    CalendarServer/branches/users/sagen/move2who-2/contrib/performance/loadtest/test_sim.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/addressbook.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/calendar.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/calendaruserproxy.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/common.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/principal.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/accounts.xml
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_augment.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_principal.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/util.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/wiki.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/extensions.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/stdconfig.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/storebridge.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_addressbookmultiget.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_addressbookquery.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_calendarquery.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_collectioncontents.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_mkcalendar.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_multiget.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_props.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_resource.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_sharing.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_wrapping.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/util.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/upgrade.py
    CalendarServer/branches/users/sagen/move2who-2/txdav/dps/client.py
    CalendarServer/branches/users/sagen/move2who-2/txdav/who/augment.py
    CalendarServer/branches/users/sagen/move2who-2/txdav/who/directory.py
    CalendarServer/branches/users/sagen/move2who-2/txdav/who/groups.py
    CalendarServer/branches/users/sagen/move2who-2/txweb2/channel/http.py
    CalendarServer/branches/users/sagen/move2who-2/txweb2/server.py

Removed Paths:
-------------
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/aggregate.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/appleopendirectory.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/cachingdirectory.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/directory.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/ldapdirectory.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_aggregate.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_buildquery.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_cachedirectory.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_directory.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_modify.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_proxyprincipalmembers.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_resources.py
    CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_xmlfile.py

Modified: CalendarServer/branches/users/sagen/move2who-2/calendarserver/accesslog.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/calendarserver/accesslog.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/calendarserver/accesslog.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -48,7 +48,6 @@
 from twisted.protocols import amp
 
 from twistedcaldav.config import config
-from twistedcaldav.directory.directory import DirectoryService
 
 from txdav.xml import element as davxml
 
@@ -91,17 +90,17 @@
                     if hasattr(request, "authzUser") and str(request.authzUser.children[0]) != uidn:
                         uidz = str(request.authzUser.children[0])
 
-                    def convertUIDtoShortName(uid):
-                        uid = uid.rstrip("/")
-                        uid = uid[uid.rfind("/") + 1:]
-                        record = request.site.resource.getDirectory().recordWithUID(uid)
-                        if record:
-                            if record.recordType == DirectoryService.recordType_users:
-                                return record.shortNames[0]
-                            else:
-                                return "(%s)%s" % (record.recordType, record.shortNames[0],)
-                        else:
-                            return uid
+                    # def convertUIDtoShortName(uid):
+                    #     uid = uid.rstrip("/")
+                    #     uid = uid[uid.rfind("/") + 1:]
+                    #     record = request.site.resource.getDirectory().recordWithUID(uid)
+                    #     if record:
+                    #         if record.recordType == DirectoryService.recordType_users:
+                    #             return record.shortNames[0]
+                    #         else:
+                    #             return "(%s)%s" % (record.recordType, record.shortNames[0],)
+                    #     else:
+                    #         return uid
 
                     # MOVE2WHO
                     # Better to stick the records directly on the request at
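
The helper above is commented out rather than ported because the old
recordWithUID() lookup was synchronous, while record lookups in twext.who
return Deferreds.  A rough sketch of an asynchronous equivalent, assuming
the usual recordWithUID() / shortNames interface and the service-level
recordType constants used elsewhere in this branch (not part of this
commit):

    from twisted.internet.defer import inlineCallbacks, returnValue

    @inlineCallbacks
    def convertUIDtoShortName(directory, uid):
        uid = uid.rstrip("/")
        uid = uid[uid.rfind("/") + 1:]
        record = yield directory.recordWithUID(uid)
        if record is None:
            returnValue(uid)
        elif record.recordType is directory.recordType.user:
            returnValue(record.shortNames[0])
        else:
            returnValue(
                "(%s)%s" % (record.recordType.name, record.shortNames[0])
            )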

Modified: CalendarServer/branches/users/sagen/move2who-2/calendarserver/tap/caldav.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/calendarserver/tap/caldav.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/calendarserver/tap/caldav.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -88,12 +88,10 @@
 )
 from txdav.dps.server import directoryFromConfig
 from txdav.dps.client import DirectoryService as DirectoryProxyClientService
-from txdav.who.groups import GroupCacher as NewGroupCacher
+from txdav.who.groups import GroupCacher
 
 from twistedcaldav import memcachepool
 from twistedcaldav.config import config, ConfigurationError
-from twistedcaldav.directory import calendaruserproxy
-from twistedcaldav.directory.directory import GroupMembershipCacheUpdater
 from txdav.who.groups import scheduleNextGroupCachingUpdate
 from twistedcaldav.localization import processLocalizationFiles
 from twistedcaldav.stdconfig import DEFAULT_CONFIG, DEFAULT_CONFIG_FILE
@@ -550,10 +548,7 @@
             )
             self.monitor.addProcessObject(process, PARENT_ENVIRONMENT)
 
-        if (
-            config.DirectoryProxy.Enabled and
-            config.DirectoryProxy.SocketPath != ""
-        ):
+        if (config.DirectoryProxy.SocketPath != ""):
             log.info("Adding directory proxy service")
 
             dpsArgv = [
@@ -1004,26 +999,18 @@
 
         # Optionally set up group cacher
         if config.GroupCaching.Enabled:
-            groupCacher = GroupMembershipCacheUpdater(
-                calendaruserproxy.ProxyDBService,
+            groupCacher = GroupCacher(
                 directory,
-                config.GroupCaching.UpdateSeconds,
-                config.GroupCaching.ExpireSeconds,
-                config.GroupCaching.LockSeconds,
-                namespace=config.GroupCaching.MemcachedPool,
-                useExternalProxies=config.GroupCaching.UseExternalProxies,
+                updateSeconds=config.GroupCaching.UpdateSeconds
             )
-            newGroupCacher = NewGroupCacher(directory)
         else:
             groupCacher = None
-            newGroupCacher = None
 
         def decorateTransaction(txn):
             txn._pushDistributor = pushDistributor
             txn._rootResource = result.rootResource
             txn._mailRetriever = mailRetriever
             txn._groupCacher = groupCacher
-            txn._newGroupCacher = newGroupCacher
 
         store.callWithNewTransactions(decorateTransaction)
 
@@ -1357,19 +1344,12 @@
 
             # Optionally set up group cacher
             if config.GroupCaching.Enabled:
-                groupCacher = GroupMembershipCacheUpdater(
-                    calendaruserproxy.ProxyDBService,
+                groupCacher = GroupCacher(
                     directory,
-                    config.GroupCaching.UpdateSeconds,
-                    config.GroupCaching.ExpireSeconds,
-                    config.GroupCaching.LockSeconds,
-                    namespace=config.GroupCaching.MemcachedPool,
-                    useExternalProxies=config.GroupCaching.UseExternalProxies
+                    updateSeconds=config.GroupCaching.UpdateSeconds
                 )
-                newGroupCacher = NewGroupCacher(directory)
             else:
                 groupCacher = None
-                newGroupCacher = None
 
             # Optionally enable Manhole access
             if config.Manhole.Enabled:
@@ -1405,7 +1385,6 @@
                 txn._rootResource = result.rootResource
                 txn._mailRetriever = mailRetriever
                 txn._groupCacher = groupCacher
-                txn._newGroupCacher = newGroupCacher
 
             store.callWithNewTransactions(decorateTransaction)
 
@@ -1942,26 +1921,18 @@
 
             # Optionally set up group cacher
             if config.GroupCaching.Enabled:
-                groupCacher = GroupMembershipCacheUpdater(
-                    calendaruserproxy.ProxyDBService,
+                groupCacher = GroupCacher(
                     directory,
-                    config.GroupCaching.UpdateSeconds,
-                    config.GroupCaching.ExpireSeconds,
-                    config.GroupCaching.LockSeconds,
-                    namespace=config.GroupCaching.MemcachedPool,
-                    useExternalProxies=config.GroupCaching.UseExternalProxies
+                    updateSeconds=config.GroupCaching.UpdateSeconds
                 )
-                newGroupCacher = NewGroupCacher(directory)
             else:
                 groupCacher = None
-                newGroupCacher = None
 
             def decorateTransaction(txn):
                 txn._pushDistributor = None
                 txn._rootResource = rootResource
                 txn._mailRetriever = mailRetriever
                 txn._groupCacher = groupCacher
-                txn._newGroupCacher = newGroupCacher
 
             store.callWithNewTransactions(decorateTransaction)
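
The three caldav.py hunks above all make the same substitution: the old
GroupMembershipCacheUpdater (which needed the proxy DB service, expire and
lock intervals, a memcached namespace and the external-proxies flag) plus
the separate NewGroupCacher collapse into the single
txdav.who.groups.GroupCacher, and the txn._newGroupCacher attribute goes
away.  Read without the +/- markers, the wiring from the first hunk becomes
the following (pushDistributor, result, mailRetriever and store come from
the surrounding caldav.py code, not from this snippet):

    from txdav.who.groups import GroupCacher

    if config.GroupCaching.Enabled:
        groupCacher = GroupCacher(
            directory,
            updateSeconds=config.GroupCaching.UpdateSeconds
        )
    else:
        groupCacher = None

    def decorateTransaction(txn):
        txn._pushDistributor = pushDistributor
        txn._rootResource = result.rootResource
        txn._mailRetriever = mailRetriever
        txn._groupCacher = groupCacher   # single cacher; no _newGroupCacher

    store.callWithNewTransactions(decorateTransaction)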
 

Modified: CalendarServer/branches/users/sagen/move2who-2/calendarserver/tap/util.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/calendarserver/tap/util.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/calendarserver/tap/util.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -53,10 +53,8 @@
 from twistedcaldav.cache import CacheStoreNotifierFactory
 from twistedcaldav.directory import calendaruserproxy
 from twistedcaldav.directory.addressbook import DirectoryAddressBookHomeProvisioningResource
-from twistedcaldav.directory.aggregate import AggregateDirectoryService
 from twistedcaldav.directory.calendar import DirectoryCalendarHomeProvisioningResource
 from twistedcaldav.directory.digest import QopDigestCredentialFactory
-from twistedcaldav.directory.directory import GroupMembershipCache
 from twistedcaldav.directory.principal import DirectoryPrincipalProvisioningResource
 from twistedcaldav.directory.wiki import WikiDirectoryService
 from calendarserver.push.notifier import NotifierFactory
@@ -491,7 +489,7 @@
     portal.registerChecker(HTTPDigestCredentialChecker(directory))
     portal.registerChecker(PrincipalCredentialChecker())
 
-    realm = directory.realmName or ""
+    realm = directory.realmName.encode("utf-8") or ""
 
     log.info("Configuring authentication for realm: {realm}", realm=realm)
 

Modified: CalendarServer/branches/users/sagen/move2who-2/calendarserver/tools/principals.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/calendarserver/tools/principals.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/calendarserver/tools/principals.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -30,7 +30,6 @@
 
 
 from twistedcaldav.config import config
-from twistedcaldav.directory.directory import UnknownRecordTypeError
 from txdav.who.groups import schedulePolledGroupCachingUpdate
 
 from calendarserver.tools.util import (
@@ -44,11 +43,6 @@
 
 def usage(e=None):
     if e:
-        if isinstance(e, UnknownRecordTypeError):
-            print("Valid record types:")
-            for recordType in config.directory.recordTypes():
-                print("    %s" % (recordType,))
-
         print(e)
         print("")
 

Modified: CalendarServer/branches/users/sagen/move2who-2/calendarserver/tools/util.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/calendarserver/tools/util.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/calendarserver/tools/util.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -20,8 +20,6 @@
 
 __all__ = [
     "loadConfig",
-    "getDirectory",
-    "dummyDirectoryRecord",
     "UsageError",
     "booleanArgument",
 ]
@@ -48,8 +46,7 @@
 
 from twistedcaldav import memcachepool
 from twistedcaldav.directory import calendaruserproxy
-from twistedcaldav.directory.aggregate import AggregateDirectoryService
-from twistedcaldav.directory.directory import DirectoryService, DirectoryRecord
+# from twistedcaldav.directory.directory import DirectoryService, DirectoryRecord
 from txdav.who.groups import schedulePolledGroupCachingUpdate
 from calendarserver.push.notifier import NotifierFactory
 
@@ -78,145 +75,145 @@
 
 
 
-def getDirectory(config=config):
+# def getDirectory(config=config):
 
-    class MyDirectoryService (AggregateDirectoryService):
-        def getPrincipalCollection(self):
-            if not hasattr(self, "_principalCollection"):
+#     class MyDirectoryService (AggregateDirectoryService):
+#         def getPrincipalCollection(self):
+#             if not hasattr(self, "_principalCollection"):
 
-                if config.Notifications.Enabled:
-                    # FIXME: NotifierFactory needs reference to the store in order
-                    # to get a txn in order to create a Work item
-                    notifierFactory = NotifierFactory(
-                        None, config.ServerHostName,
-                        config.Notifications.CoalesceSeconds,
-                    )
-                else:
-                    notifierFactory = None
+#                 if config.Notifications.Enabled:
+#                     # FIXME: NotifierFactory needs reference to the store in order
+#                     # to get a txn in order to create a Work item
+#                     notifierFactory = NotifierFactory(
+#                         None, config.ServerHostName,
+#                         config.Notifications.CoalesceSeconds,
+#                     )
+#                 else:
+#                     notifierFactory = None
 
-                # Need a data store
-                _newStore = CommonDataStore(FilePath(config.DocumentRoot),
-                    notifierFactory, self, True, False)
-                if notifierFactory is not None:
-                    notifierFactory.store = _newStore
+#                 # Need a data store
+#                 _newStore = CommonDataStore(FilePath(config.DocumentRoot),
+#                     notifierFactory, self, True, False)
+#                 if notifierFactory is not None:
+#                     notifierFactory.store = _newStore
 
-                #
-                # Instantiating a DirectoryCalendarHomeProvisioningResource with a directory
-                # will register it with the directory (still smells like a hack).
-                #
-                # We need that in order to locate calendar homes via the directory.
-                #
-                from twistedcaldav.directory.calendar import DirectoryCalendarHomeProvisioningResource
-                DirectoryCalendarHomeProvisioningResource(self, "/calendars/", _newStore)
+#                 #
+#                 # Instantiating a DirectoryCalendarHomeProvisioningResource with a directory
+#                 # will register it with the directory (still smells like a hack).
+#                 #
+#                 # We need that in order to locate calendar homes via the directory.
+#                 #
+#                 from twistedcaldav.directory.calendar import DirectoryCalendarHomeProvisioningResource
+#                 DirectoryCalendarHomeProvisioningResource(self, "/calendars/", _newStore)
 
-                from twistedcaldav.directory.principal import DirectoryPrincipalProvisioningResource
-                self._principalCollection = DirectoryPrincipalProvisioningResource("/principals/", self)
+#                 from twistedcaldav.directory.principal import DirectoryPrincipalProvisioningResource
+#                 self._principalCollection = DirectoryPrincipalProvisioningResource("/principals/", self)
 
-            return self._principalCollection
+#             return self._principalCollection
 
-        def setPrincipalCollection(self, coll):
-            # See principal.py line 237:  self.directory.principalCollection = self
-            pass
+#         def setPrincipalCollection(self, coll):
+#             # See principal.py line 237:  self.directory.principalCollection = self
+#             pass
 
-        principalCollection = property(getPrincipalCollection, setPrincipalCollection)
+#         principalCollection = property(getPrincipalCollection, setPrincipalCollection)
 
-        def calendarHomeForRecord(self, record):
-            principal = self.principalCollection.principalForRecord(record)
-            if principal:
-                try:
-                    return principal.calendarHome()
-                except AttributeError:
-                    pass
-            return None
+#         def calendarHomeForRecord(self, record):
+#             principal = self.principalCollection.principalForRecord(record)
+#             if principal:
+#                 try:
+#                     return principal.calendarHome()
+#                 except AttributeError:
+#                     pass
+#             return None
 
-        def calendarHomeForShortName(self, recordType, shortName):
-            principal = self.principalCollection.principalForShortName(recordType, shortName)
-            if principal:
-                return principal.calendarHome()
-            return None
+#         def calendarHomeForShortName(self, recordType, shortName):
+#             principal = self.principalCollection.principalForShortName(recordType, shortName)
+#             if principal:
+#                 return principal.calendarHome()
+#             return None
 
-        def principalForCalendarUserAddress(self, cua):
-            return self.principalCollection.principalForCalendarUserAddress(cua)
+#         def principalForCalendarUserAddress(self, cua):
+#             return self.principalCollection.principalForCalendarUserAddress(cua)
 
-        def principalForUID(self, uid):
-            return self.principalCollection.principalForUID(uid)
+#         def principalForUID(self, uid):
+#             return self.principalCollection.principalForUID(uid)
 
-    # Load augment/proxy db classes now
-    if config.AugmentService.type:
-        augmentClass = namedClass(config.AugmentService.type)
-        augmentService = augmentClass(**config.AugmentService.params)
-    else:
-        augmentService = None
+#     # Load augment/proxy db classes now
+#     if config.AugmentService.type:
+#         augmentClass = namedClass(config.AugmentService.type)
+#         augmentService = augmentClass(**config.AugmentService.params)
+#     else:
+#         augmentService = None
 
-    proxydbClass = namedClass(config.ProxyDBService.type)
-    calendaruserproxy.ProxyDBService = proxydbClass(**config.ProxyDBService.params)
+#     proxydbClass = namedClass(config.ProxyDBService.type)
+#     calendaruserproxy.ProxyDBService = proxydbClass(**config.ProxyDBService.params)
 
-    # Wait for directory service to become available
-    BaseDirectoryService = namedClass(config.DirectoryService.type)
-    config.DirectoryService.params.augmentService = augmentService
-    directory = BaseDirectoryService(config.DirectoryService.params)
-    while not directory.isAvailable():
-        sleep(5)
+#     # Wait for directory service to become available
+#     BaseDirectoryService = namedClass(config.DirectoryService.type)
+#     config.DirectoryService.params.augmentService = augmentService
+#     directory = BaseDirectoryService(config.DirectoryService.params)
+#     while not directory.isAvailable():
+#         sleep(5)
 
-    directories = [directory]
+#     directories = [directory]
 
-    if config.ResourceService.Enabled:
-        resourceClass = namedClass(config.ResourceService.type)
-        config.ResourceService.params.augmentService = augmentService
-        resourceDirectory = resourceClass(config.ResourceService.params)
-        resourceDirectory.realmName = directory.realmName
-        directories.append(resourceDirectory)
+#     if config.ResourceService.Enabled:
+#         resourceClass = namedClass(config.ResourceService.type)
+#         config.ResourceService.params.augmentService = augmentService
+#         resourceDirectory = resourceClass(config.ResourceService.params)
+#         resourceDirectory.realmName = directory.realmName
+#         directories.append(resourceDirectory)
 
-    aggregate = MyDirectoryService(directories, None)
-    aggregate.augmentService = augmentService
+#     aggregate = MyDirectoryService(directories, None)
+#     aggregate.augmentService = augmentService
 
-    #
-    # Wire up the resource hierarchy
-    #
-    principalCollection = aggregate.getPrincipalCollection()
-    root = RootResource(
-        config.DocumentRoot,
-        principalCollections=(principalCollection,),
-    )
-    root.putChild("principals", principalCollection)
+#     #
+#     # Wire up the resource hierarchy
+#     #
+#     principalCollection = aggregate.getPrincipalCollection()
+#     root = RootResource(
+#         config.DocumentRoot,
+#         principalCollections=(principalCollection,),
+#     )
+#     root.putChild("principals", principalCollection)
 
-    # Need a data store
-    _newStore = CommonDataStore(FilePath(config.DocumentRoot), None, aggregate, True, False)
+#     # Need a data store
+#     _newStore = CommonDataStore(FilePath(config.DocumentRoot), None, aggregate, True, False)
 
-    from twistedcaldav.directory.calendar import DirectoryCalendarHomeProvisioningResource
-    calendarCollection = DirectoryCalendarHomeProvisioningResource(
-        aggregate, "/calendars/",
-        _newStore,
-    )
-    root.putChild("calendars", calendarCollection)
+#     from twistedcaldav.directory.calendar import DirectoryCalendarHomeProvisioningResource
+#     calendarCollection = DirectoryCalendarHomeProvisioningResource(
+#         aggregate, "/calendars/",
+#         _newStore,
+#     )
+#     root.putChild("calendars", calendarCollection)
 
-    return aggregate
+#     return aggregate
 
 
 
-class DummyDirectoryService (DirectoryService):
-    realmName = ""
-    baseGUID = "51856FD4-5023-4890-94FE-4356C4AAC3E4"
-    def recordTypes(self):
-        return ()
+# class DummyDirectoryService (DirectoryService):
+#     realmName = ""
+#     baseGUID = "51856FD4-5023-4890-94FE-4356C4AAC3E4"
+#     def recordTypes(self):
+#         return ()
 
 
-    def listRecords(self):
-        return ()
+#     def listRecords(self):
+#         return ()
 
 
-    def recordWithShortName(self):
-        return None
+#     def recordWithShortName(self):
+#         return None
 
-dummyDirectoryRecord = DirectoryRecord(
-    service=DummyDirectoryService(),
-    recordType="dummy",
-    guid="8EF0892F-7CB6-4B8E-B294-7C5A5321136A",
-    shortNames=("dummy",),
-    fullName="Dummy McDummerson",
-    firstName="Dummy",
-    lastName="McDummerson",
-)
+# dummyDirectoryRecord = DirectoryRecord(
+#     service=DummyDirectoryService(),
+#     recordType="dummy",
+#     guid="8EF0892F-7CB6-4B8E-B294-7C5A5321136A",
+#     shortNames=("dummy",),
+#     fullName="Dummy McDummerson",
+#     firstName="Dummy",
+#     lastName="McDummerson",
+# )
 
 class UsageError (StandardError):
     pass
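
With getDirectory() and the dummy directory record gone, command-line tools
are expected to obtain their directory service from the new machinery
instead; caldav.py above already imports directoryFromConfig from
txdav.dps.server.  The call below is only a sketch of that direction, and
the exact directoryFromConfig() signature is an assumption, not something
shown in this changeset:

    # Hypothetical replacement for the removed getDirectory() helper:
    from twistedcaldav.config import config
    from txdav.dps.server import directoryFromConfig

    directory = directoryFromConfig(config)   # assumed call shape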

Modified: CalendarServer/branches/users/sagen/move2who-2/conf/auth/accounts-test.xml
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/conf/auth/accounts-test.xml	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/conf/auth/accounts-test.xml	2014-03-12 18:49:18 UTC (rev 12881)
@@ -18,172 +18,339 @@
 
 <!DOCTYPE accounts SYSTEM "accounts.dtd">
 
-<accounts realm="Test Realm">
-  <user>
+<directory realm="Test Realm">
+  <record type="user">
     <uid>admin</uid>
-    <guid>admin</guid>
+    <short-name>admin</short-name>
     <password>admin</password>
-    <name>Super User</name>
-    <first-name>Super</first-name>
-    <last-name>User</last-name>
-  </user>
-  <user>
+    <full-name>Super User</full-name>
+  </record>
+  <record type="user">
     <uid>apprentice</uid>
-    <guid>apprentice</guid>
+    <short-name>apprentice</short-name>
     <password>apprentice</password>
-    <name>Apprentice Super User</name>
-    <first-name>Apprentice</first-name>
-    <last-name>Super User</last-name>
-  </user>
-  <user>
+    <full-name>Apprentice Super User</full-name>
+  </record>
+  <record type="user">
     <uid>wsanchez</uid>
-    <guid>wsanchez</guid>
-    <email-address>wsanchez at example.com</email-address>
+    <short-name>wsanchez</short-name>
+    <email>wsanchez at example.com</email>
     <password>test</password>
-    <name>Wilfredo Sanchez Vega</name>
-    <first-name>Wilfredo</first-name>
-    <last-name>Sanchez Vega</last-name>
-  </user>
-  <user>
+    <full-name>Wilfredo Sanchez Vega</full-name>
+  </record>
+  <record type="user">
     <uid>cdaboo</uid>
-    <guid>cdaboo</guid>
-    <email-address>cdaboo at example.com</email-address>
+    <short-name>cdaboo</short-name>
+    <email>cdaboo at example.com</email>
     <password>test</password>
-    <name>Cyrus Daboo</name>
-    <first-name>Cyrus</first-name>
-    <last-name>Daboo</last-name>
-  </user>
-  <user>
+    <full-name>Cyrus Daboo</full-name>
+  </record>
+  <record type="user">
     <uid>sagen</uid>
-    <guid>sagen</guid>
-    <email-address>sagen at example.com</email-address>
+    <short-name>sagen</short-name>
+    <email>sagen at example.com</email>
     <password>test</password>
-    <name>Morgen Sagen</name>
-    <first-name>Morgen</first-name>
-    <last-name>Sagen</last-name>
-  </user>
-  <user>
+    <full-name>Morgen Sagen</full-name>
+  </record>
+  <record type="user">
     <uid>dre</uid>
-    <guid>andre</guid>
-    <email-address>dre at example.com</email-address>
+    <short-name>andre</short-name>
+    <email>dre at example.com</email>
     <password>test</password>
-    <name>Andre LaBranche</name>
-    <first-name>Andre</first-name>
-    <last-name>LaBranche</last-name>
-  </user>
-  <user>
+    <full-name>Andre LaBranche</full-name>
+  </record>
+  <record type="user">
     <uid>glyph</uid>
-    <guid>glyph</guid>
-    <email-address>glyph at example.com</email-address>
+    <short-name>glyph</short-name>
+    <email>glyph at example.com</email>
     <password>test</password>
-    <name>Glyph Lefkowitz</name>
-    <first-name>Glyph</first-name>
-    <last-name>Lefkowitz</last-name>
-  </user>
-  <user>
+    <full-name>Glyph Lefkowitz</full-name>
+  </record>
+  <record type="user">
     <uid>i18nuser</uid>
-    <guid>i18nuser</guid>
-    <email-address>i18nuser at example.com</email-address>
+    <short-name>i18nuser</short-name>
+    <email>i18nuser at example.com</email>
     <password>i18nuser</password>
-    <name>まだ</name>
-    <first-name>ま</first-name>
-    <last-name>だ</last-name>
-  </user>
+    <full-name>まだ</full-name>
+  </record>
+
+  <!-- twext.who xml doesn't (yet) support repeat
   <user repeat="101">
     <uid>user%02d</uid>
     <uid>User %02d</uid>
-    <guid>user%02d</guid>
+    <short-name>user%02d</short-name>
     <password>user%02d</password>
-    <name>User %02d</name>
-    <first-name>User</first-name>
-    <last-name>%02d</last-name>
-    <email-address>user%02d at example.com</email-address>
-  </user>
+    <full-name>User %02d</full-name>
+    <email>user%02d at example.com</email>
+  </record>
   <user repeat="10">
     <uid>public%02d</uid>
-    <guid>public%02d</guid>
+    <short-name>public%02d</short-name>
     <password>public%02d</password>
-    <name>Public %02d</name>
-    <first-name>Public</first-name>
-    <last-name>%02d</last-name>
-  </user>
-  <group>
+    <full-name>Public %02d</full-name>
+  </record>
+  -->
+  <record type="user">
+    <short-name>user01</short-name>
+    <uid>user01</uid>
+    <password>user01</password>
+    <full-name>User 01</full-name>
+    <email>user01 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user02</short-name>
+    <uid>user02</uid>
+    <password>user02</password>
+    <full-name>User 02</full-name>
+    <email>user02 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user03</short-name>
+    <uid>user03</uid>
+    <password>user03</password>
+    <full-name>User 03</full-name>
+    <email>user03 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user04</short-name>
+    <uid>user04</uid>
+    <password>user04</password>
+    <full-name>User 04</full-name>
+    <email>user04 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user05</short-name>
+    <uid>user05</uid>
+    <password>user05</password>
+    <full-name>User 05</full-name>
+    <email>user05 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user06</short-name>
+    <uid>user06</uid>
+    <password>user06</password>
+    <full-name>User 06</full-name>
+    <email>user06 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user07</short-name>
+    <uid>user07</uid>
+    <password>user07</password>
+    <full-name>User 07</full-name>
+    <email>user07 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user08</short-name>
+    <uid>user08</uid>
+    <password>user08</password>
+    <full-name>User 08</full-name>
+    <email>user08 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user09</short-name>
+    <uid>user09</uid>
+    <password>user09</password>
+    <full-name>User 09</full-name>
+    <email>user09 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user10</short-name>
+    <uid>user10</uid>
+    <password>user10</password>
+    <full-name>User 10</full-name>
+    <email>user10 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user11</short-name>
+    <uid>user11</uid>
+    <password>user11</password>
+    <full-name>User 11</full-name>
+    <email>user11 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user12</short-name>
+    <uid>user12</uid>
+    <password>user12</password>
+    <full-name>User 12</full-name>
+    <email>user12 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user13</short-name>
+    <uid>user13</uid>
+    <password>user13</password>
+    <full-name>User 13</full-name>
+    <email>user13 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user14</short-name>
+    <uid>user14</uid>
+    <password>user14</password>
+    <full-name>User 14</full-name>
+    <email>user14 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user15</short-name>
+    <uid>user15</uid>
+    <password>user15</password>
+    <full-name>User 15</full-name>
+    <email>user15 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user16</short-name>
+    <uid>user16</uid>
+    <password>user16</password>
+    <full-name>User 16</full-name>
+    <email>user16 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user17</short-name>
+    <uid>user17</uid>
+    <password>user17</password>
+    <full-name>User 17</full-name>
+    <email>user17 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user18</short-name>
+    <uid>user18</uid>
+    <password>user18</password>
+    <full-name>User 18</full-name>
+    <email>user18 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user19</short-name>
+    <uid>user19</uid>
+    <password>user19</password>
+    <full-name>User 19</full-name>
+    <email>user19 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user20</short-name>
+    <uid>user20</uid>
+    <password>user20</password>
+    <full-name>User 20</full-name>
+    <email>user20 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user21</short-name>
+    <uid>user21</uid>
+    <password>user21</password>
+    <full-name>User 21</full-name>
+    <email>user21 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user22</short-name>
+    <uid>user22</uid>
+    <password>user22</password>
+    <full-name>User 22</full-name>
+    <email>user22 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user23</short-name>
+    <uid>user23</uid>
+    <password>user23</password>
+    <full-name>User 23</full-name>
+    <email>user23 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user24</short-name>
+    <uid>user24</uid>
+    <password>user24</password>
+    <full-name>User 24</full-name>
+    <email>user24 at example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user25</short-name>
+    <uid>user25</uid>
+    <password>user25</password>
+    <full-name>User 25</full-name>
+    <email>user25 at example.com</email>
+  </record>
+
+  <record type="group">
     <uid>group01</uid>
-    <guid>group01</guid>
+    <short-name>group01</short-name>
     <password>group01</password>
-    <name>Group 01</name>
-    <members>
-      <member type="users">user01</member>
-    </members>
-  </group>
-  <group>
+    <full-name>Group 01</full-name>
+      <member-uid type="users">user01</member-uid>
+  </record>
+  <record type="group">
     <uid>group02</uid>
-    <guid>group02</guid>
+    <short-name>group02</short-name>
     <password>group02</password>
-    <name>Group 02</name>
-    <members>
-      <member type="users">user06</member>
-      <member type="users">user07</member>
-    </members>
-  </group>
-  <group>
+    <full-name>Group 02</full-name>
+      <member-uid type="users">user06</member-uid>
+      <member-uid type="users">user07</member-uid>
+  </record>
+  <record type="group">
     <uid>group03</uid>
-    <guid>group03</guid>
+    <short-name>group03</short-name>
     <password>group03</password>
-    <name>Group 03</name>
-    <members>
-      <member type="users">user08</member>
-      <member type="users">user09</member>
-    </members>
-  </group>
-  <group>
+    <full-name>Group 03</full-name>
+      <member-uid type="users">user08</member-uid>
+      <member-uid type="users">user09</member-uid>
+  </record>
+  <record type="group">
     <uid>group04</uid>
-    <guid>group04</guid>
+    <short-name>group04</short-name>
     <password>group04</password>
-    <name>Group 04</name>
-    <members>
-      <member type="groups">group02</member>
-      <member type="groups">group03</member>
-      <member type="users">user10</member>
-    </members>
-  </group>
-  <group> <!-- delegategroup -->
+    <full-name>Group 04</full-name>
+      <member-uid type="groups">group02</member-uid>
+      <member-uid type="groups">group03</member-uid>
+      <member-uid type="users">user10</member-uid>
+  </record>
+  <record type="group"> <!-- delegategroup -->
     <uid>group05</uid>
-    <guid>group05</guid>
+    <short-name>group05</short-name>
     <password>group05</password>
-    <name>Group 05</name>
-    <members>
-      <member type="groups">group06</member>
-      <member type="users">user20</member>
-    </members>
-  </group>
-  <group> <!-- delegatesubgroup -->
+    <full-name>Group 05</full-name>
+      <member-uid type="groups">group06</member-uid>
+      <member-uid type="users">user20</member-uid>
+  </record>
+  <record type="group"> <!-- delegatesubgroup -->
     <uid>group06</uid>
-    <guid>group06</guid>
+    <short-name>group06</short-name>
     <password>group06</password>
-    <name>Group 06</name>
-    <members>
-      <member type="users">user21</member>
-    </members>
-  </group>
-  <group> <!-- readonlydelegategroup -->
+    <full-name>Group 06</full-name>
+      <member-uid type="users">user21</member-uid>
+  </record>
+  <record type="group"> <!-- readonlydelegategroup -->
     <uid>group07</uid>
-    <guid>group07</guid>
+    <short-name>group07</short-name>
     <password>group07</password>
-    <name>Group 07</name>
-    <members>
-      <member type="users">user22</member>
-      <member type="users">user23</member>
-      <member type="users">user24</member>
-    </members>
-  </group>
-  <group>
+    <full-name>Group 07</full-name>
+      <member-uid type="users">user22</member-uid>
+      <member-uid type="users">user23</member-uid>
+      <member-uid type="users">user24</member-uid>
+  </record>
+  <record type="group">
     <uid>disabledgroup</uid>
-    <guid>disabledgroup</guid>
+    <short-name>disabledgroup</short-name>
     <password>disabledgroup</password>
-    <name>Disabled Group</name>
-    <members>
-      <member type="users">user01</member>
-    </members>
-  </group>
-</accounts>
+    <full-name>Disabled Group</full-name>
+      <member-uid type="users">user01</member-uid>
+  </record>
+</directory>
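
The test accounts file now uses the twext.who XML schema: a <directory>
root carrying the realm, <record type="..."> elements with <short-name>,
<full-name> and <email> fields, and flat <member-uid> children in place of
the old <accounts>/<user>/<group>/<members> layout.  Because the new parser
does not yet support repeat="...", the user01 through user25 records are
written out explicitly.  A small sketch of reading such a file, assuming
the twext.who XML directory service takes a FilePath and that its lookups
return Deferreds (both assumptions, not shown in this diff):

    from twisted.internet.defer import inlineCallbacks
    from twisted.python.filepath import FilePath
    from twext.who.xml import DirectoryService

    @inlineCallbacks
    def showUser(path, uid):
        service = DirectoryService(FilePath(path))
        record = yield service.recordWithUID(uid)
        if record is not None:
            print("%s <%s>" % (
                record.fullNames[0], list(record.emailAddresses)[0]
            ))

    # e.g. showUser("conf/auth/accounts-test.xml", u"user01")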

Modified: CalendarServer/branches/users/sagen/move2who-2/contrib/performance/loadtest/test_sim.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/contrib/performance/loadtest/test_sim.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/contrib/performance/loadtest/test_sim.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -24,32 +24,34 @@
 from twisted.internet.defer import Deferred, succeed
 from twisted.trial.unittest import TestCase
 
-from twistedcaldav.directory.directory import DirectoryRecord
-
 from contrib.performance.stats import NormalDistribution
 from contrib.performance.loadtest.ical import OS_X_10_6
 from contrib.performance.loadtest.profiles import Eventer, Inviter, Accepter
 from contrib.performance.loadtest.population import (
     SmoothRampUp, ClientType, PopulationParameters, Populator, CalendarClientSimulator,
-    ProfileType, SimpleStatistics)
+    ProfileType, SimpleStatistics
+)
 from contrib.performance.loadtest.sim import (
-    Arrival, SimOptions, LoadSimulator, LagTrackingReactor)
+    Arrival, SimOptions, LoadSimulator, LagTrackingReactor,
+    _DirectoryRecord
+)
 
+
 VALID_CONFIG = {
     'server': 'tcp:127.0.0.1:8008',
     'webadmin': {
         'enabled': True,
         'HTTPPort': 8080,
-        },
+    },
     'arrival': {
         'factory': 'contrib.performance.loadtest.population.SmoothRampUp',
         'params': {
             'groups': 10,
             'groupSize': 1,
             'interval': 3,
-            },
         },
-    }
+    },
+}
 
 VALID_CONFIG_PLIST = writePlistToString(VALID_CONFIG)
 
@@ -104,8 +106,9 @@
     realmName = 'stub'
 
     def _user(self, name):
-        record = DirectoryRecord(self, 'user', name, (name,))
-        record.password = 'password-' + name
+        password = 'password-' + name
+        email = name + "@example.com"
+        record = _DirectoryRecord(name, password, name, email)
         return record
 
 
@@ -119,10 +122,10 @@
             [self._user('alice'), self._user('bob'), self._user('carol')],
             Populator(None), None, None, 'http://example.org:1234/', None, None)
         users = sorted([
-                calsim._createUser(0)[0],
-                calsim._createUser(1)[0],
-                calsim._createUser(2)[0],
-                ])
+            calsim._createUser(0)[0],
+            calsim._createUser(1)[0],
+            calsim._createUser(2)[0],
+        ])
         self.assertEqual(['alice', 'bob', 'carol'], users)
 
 
@@ -171,8 +174,9 @@
 
         params = PopulationParameters()
         params.addClient(1, ClientType(
-                BrokenClient, {'runResult': clientRunResult},
-                [ProfileType(BrokenProfile, {'runResult': profileRunResult})]))
+            BrokenClient, {'runResult': clientRunResult},
+            [ProfileType(BrokenProfile, {'runResult': profileRunResult})])
+        )
         sim = CalendarClientSimulator(
             [self._user('alice')], Populator(None), params, None, 'http://example.com:1234/', None, None)
         sim.add(1, 1)
@@ -284,8 +288,9 @@
         config["accounts"] = {
             "loader": "contrib.performance.loadtest.sim.recordsFromCSVFile",
             "params": {
-                "path": accounts.path},
-            }
+                "path": accounts.path
+            },
+        }
         configpath = FilePath(self.mktemp())
         configpath.setContent(writePlistToString(config))
         io = StringIO()
@@ -312,8 +317,9 @@
         config["accounts"] = {
             "loader": "contrib.performance.loadtest.sim.recordsFromCSVFile",
             "params": {
-                "path": ""},
-            }
+                "path": ""
+            },
+        }
         configpath = FilePath(self.mktemp())
         configpath.setContent(writePlistToString(config))
         sim = LoadSimulator.fromCommandLine(['--config', configpath.path],
@@ -406,8 +412,9 @@
         section of the configuration file specified.
         """
         config = FilePath(self.mktemp())
-        config.setContent(writePlistToString({
-                    "server": "https://127.0.0.3:8432/"}))
+        config.setContent(
+            writePlistToString({"server": "https://127.0.0.3:8432/"})
+        )
         sim = LoadSimulator.fromCommandLine(['--config', config.path])
         self.assertEquals(sim.server, "https://127.0.0.3:8432/")
 
@@ -418,16 +425,18 @@
         [arrival] section of the configuration file specified.
         """
         config = FilePath(self.mktemp())
-        config.setContent(writePlistToString({
-                    "arrival": {
-                        "factory": "contrib.performance.loadtest.population.SmoothRampUp",
-                        "params": {
-                            "groups": 10,
-                            "groupSize": 1,
-                            "interval": 3,
-                            },
-                        },
-                    }))
+        config.setContent(
+            writePlistToString({
+                "arrival": {
+                    "factory": "contrib.performance.loadtest.population.SmoothRampUp",
+                    "params": {
+                        "groups": 10,
+                        "groupSize": 1,
+                        "interval": 3,
+                    },
+                },
+            })
+        )
         sim = LoadSimulator.fromCommandLine(['--config', config.path])
         self.assertEquals(
             sim.arrival,
@@ -461,11 +470,17 @@
         section of the configuration file specified.
         """
         config = FilePath(self.mktemp())
-        config.setContent(writePlistToString({
-                    "clients": [{
+        config.setContent(
+            writePlistToString(
+                {
+                    "clients": [
+                        {
                             "software": "contrib.performance.loadtest.ical.OS_X_10_6",
-                            "params": {"foo": "bar"},
-                            "profiles": [{
+                            "params": {
+                                "foo": "bar"
+                            },
+                            "profiles": [
+                                {
                                     "params": {
                                         "interval": 25,
                                         "eventStartDistribution": {
@@ -473,19 +488,38 @@
                                             "params": {
                                                 "mu": 123,
                                                 "sigma": 456,
-                                                }}},
-                                    "class": "contrib.performance.loadtest.profiles.Eventer"}],
+                                            }
+                                        }
+                                    },
+                                    "class": "contrib.performance.loadtest.profiles.Eventer"
+                                }
+                            ],
                             "weight": 3,
-                            }]}))
+                        }
+                    ]
+                }
+            )
+        )
 
         sim = LoadSimulator.fromCommandLine(
             ['--config', config.path, '--clients', config.path]
         )
         expectedParameters = PopulationParameters()
         expectedParameters.addClient(
-            3, ClientType(OS_X_10_6, {"foo": "bar"}, [ProfileType(Eventer, {
+            3,
+            ClientType(
+                OS_X_10_6,
+                {"foo": "bar"},
+                [
+                    ProfileType(
+                        Eventer, {
                             "interval": 25,
-                            "eventStartDistribution": NormalDistribution(123, 456)})]))
+                            "eventStartDistribution": NormalDistribution(123, 456)
+                        }
+                    )
+                ]
+            )
+        )
         self.assertEquals(sim.parameters, expectedParameters)
 
 
@@ -512,9 +546,18 @@
         configuration file are added to the logging system.
         """
         config = FilePath(self.mktemp())
-        config.setContent(writePlistToString({
-            "observers": [{"type":"contrib.performance.loadtest.population.SimpleStatistics", "params":{}, }, ]
-        }))
+        config.setContent(
+            writePlistToString(
+                {
+                    "observers": [
+                        {
+                            "type": "contrib.performance.loadtest.population.SimpleStatistics",
+                            "params": {},
+                        },
+                    ]
+                }
+            )
+        )
         sim = LoadSimulator.fromCommandLine(['--config', config.path])
         self.assertEquals(len(sim.observers), 1)
         self.assertIsInstance(sim.observers[0], SimpleStatistics)
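
The stub accounts in these tests now come from _DirectoryRecord, imported
from contrib.performance.loadtest.sim, rather than the removed
twistedcaldav DirectoryRecord.  Its definition is not part of this diff;
judging only from the call shape _DirectoryRecord(name, password, name,
email) used above, a stand-in with the same shape might look like this
(the field names are guesses):

    from collections import namedtuple

    # Hypothetical stand-in matching the positional call in _user() above.
    _DirectoryRecord = namedtuple(
        "_DirectoryRecord", ["uid", "password", "commonName", "email"]
    )

    record = _DirectoryRecord(
        "alice", "password-alice", "alice", "alice@example.com"
    )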

Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/addressbook.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/addressbook.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/addressbook.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -34,7 +34,6 @@
 from twisted.internet.defer import inlineCallbacks, returnValue, succeed
 
 from twistedcaldav.config import config
-from twistedcaldav.directory.idirectory import IDirectoryService
 
 from twistedcaldav.directory.common import CommonUIDProvisioningResource,\
     uidsResourceName, CommonHomeTypeProvisioningResource
@@ -58,7 +57,7 @@
 
 
 
-class DirectoryAddressBookProvisioningResource (
+class DirectoryAddressBookProvisioningResource(
     ReadOnlyResourceMixIn,
     CalDAVComplianceMixIn,
     DAVResourceWithChildrenMixin,
@@ -77,9 +76,9 @@
 
 
 
-class DirectoryAddressBookHomeProvisioningResource (
-        DirectoryAddressBookProvisioningResource
-    ):
+class DirectoryAddressBookHomeProvisioningResource(
+    DirectoryAddressBookProvisioningResource
+):
     """
     Resource which provisions address book home collections as needed.
     """
@@ -104,8 +103,14 @@
         #
         # Create children
         #
-        for recordType in [r.name for r in self.directory.recordTypes()]:
-            self.putChild(recordType, DirectoryAddressBookHomeTypeProvisioningResource(self, recordType))
+        # ...just "users" though.  If we iterate all of the directory's
+        # recordTypes, we also get the proxy sub principal types.
+        for recordTypeName in [
+            self.directory.recordTypeToOldName(r) for r in [
+                self.directory.recordType.user
+            ]
+        ]:
+            self.putChild(recordTypeName, DirectoryAddressBookHomeTypeProvisioningResource(self, r))
 
         self.putChild(uidsResourceName, DirectoryAddressBookHomeUIDProvisioningResource(self))
 
@@ -115,7 +120,7 @@
 
 
     def listChildren(self):
-        return [r.name for r in self.directory.recordTypes()]
+        return [self.directory.recordTypeToOldName(r) for r in self.directory.recordTypes()]
 
 
     def principalCollections(self):
@@ -153,9 +158,9 @@
 
 
 class DirectoryAddressBookHomeTypeProvisioningResource (
-        CommonHomeTypeProvisioningResource,
-        DirectoryAddressBookProvisioningResource
-    ):
+    CommonHomeTypeProvisioningResource,
+    DirectoryAddressBookProvisioningResource
+):
     """
     Resource which provisions address book home collections of a specific
     record type as needed.
@@ -176,19 +181,19 @@
 
 
     def url(self):
-        return joinURL(self._parent.url(), self.recordType)
+        return joinURL(self._parent.url(), self.directory.recordTypeToOldName(self.recordType))
 
 
+    @inlineCallbacks
     def listChildren(self):
         if config.EnablePrincipalListings:
+            children = []
+            for record in (yield self.directory.listRecords(self.recordType)):
+                if record.enabledForAddressBooks:
+                    for shortName in record.shortNames:
+                        children.append(shortName)
 
-            def _recordShortnameExpand():
-                for record in self.directory.listRecords(self.recordType):
-                    if record.enabledForAddressBooks:
-                        for shortName in record.shortNames:
-                            yield shortName
-
-            return _recordShortnameExpand()
+            returnValue(children)
         else:
             # Not a listable collection
             raise HTTPError(responsecode.FORBIDDEN)
@@ -224,9 +229,9 @@
 
 
 class DirectoryAddressBookHomeUIDProvisioningResource (
-        CommonUIDProvisioningResource,
-        DirectoryAddressBookProvisioningResource
-    ):
+    CommonUIDProvisioningResource,
+    DirectoryAddressBookProvisioningResource
+):
 
     homeResourceTypeName = 'addressbooks'
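
Note that DirectoryAddressBookHomeTypeProvisioningResource.listChildren()
is now wrapped in @inlineCallbacks, so it returns a Deferred firing with a
list of short names instead of a plain generator; callers have to yield it.
A minimal usage sketch under that assumption (the function and variable
names here are illustrative only):

    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def logAddressBookHomeNames(typeResource):
        # typeResource: a DirectoryAddressBookHomeTypeProvisioningResource
        names = yield typeResource.listChildren()
        for name in names:
            print(name)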
 

Deleted: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/aggregate.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/aggregate.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/aggregate.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1,385 +0,0 @@
-##
-# Copyright (c) 2006-2014 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-"""
-Directory service implementation which aggregates multiple directory
-services.
-"""
-
-__all__ = [
-    "AggregateDirectoryService",
-    "DuplicateRecordTypeError",
-]
-
-import itertools
-from twisted.cred.error import UnauthorizedLogin
-
-from twistedcaldav.directory.idirectory import IDirectoryService
-from twistedcaldav.directory.directory import DirectoryService, DirectoryError
-from twistedcaldav.directory.directory import UnknownRecordTypeError
-from twisted.internet.defer import inlineCallbacks, returnValue
-
-class AggregateDirectoryService(DirectoryService):
-    """
-    L{IDirectoryService} implementation which aggregates multiple directory
-    services.
-
-    @ivar _recordTypes: A map of record types to L{IDirectoryService}s.
-    @type _recordTypes: L{dict} mapping L{bytes} to L{IDirectoryService}
-        provider.
-    """
-    baseGUID = "06FB225F-39E7-4D34-B1D1-29925F5E619B"
-
-    def __init__(self, services, groupMembershipCache):
-        super(AggregateDirectoryService, self).__init__()
-
-        realmName = None
-        recordTypes = {}
-        self.groupMembershipCache = groupMembershipCache
-
-        for service in services:
-            service = IDirectoryService(service)
-
-            if service.realmName != realmName:
-                assert realmName is None, (
-                    "Aggregated directory services must have the same realm name: %r != %r\nServices: %r"
-                    % (service.realmName, realmName, services)
-                )
-                realmName = service.realmName
-
-            if not hasattr(service, "recordTypePrefix"):
-                service.recordTypePrefix = ""
-            prefix = service.recordTypePrefix
-
-            for recordType in (prefix + r for r in service.recordTypes()):
-                if recordType in recordTypes:
-                    raise DuplicateRecordTypeError(
-                        "%r is in multiple services: %s, %s"
-                        % (recordType, recordTypes[recordType], service)
-                    )
-                recordTypes[recordType] = service
-
-            service.aggregateService = self
-
-        self.realmName = realmName
-        self._recordTypes = recordTypes
-
-        # FIXME: This is a temporary workaround until new data store is in
-        # place.  During the purging of deprovisioned users' data, we need
-        # to be able to look up records by uid and shortName.  The purge
-        # tool sticks temporary fake records in here.
-        self._tmpRecords = {
-            "uids" : { },
-            "shortNames" : { },
-        }
-
-
-    def __repr__(self):
-        return "<%s (%s): %r>" % (self.__class__.__name__, self.realmName, self._recordTypes)
-
-
-    #
-    # Define calendarHomesCollection as a property so we can set it on contained services
-    #
-    def _getCalendarHomesCollection(self):
-        return self._calendarHomesCollection
-
-
-    def _setCalendarHomesCollection(self, value):
-        for service in self._recordTypes.values():
-            service.calendarHomesCollection = value
-        self._calendarHomesCollection = value
-
-    calendarHomesCollection = property(_getCalendarHomesCollection, _setCalendarHomesCollection)
-
-    #
-    # Define addressBookHomesCollection as a property so we can set it on contained services
-    #
-    def _getAddressBookHomesCollection(self):
-        return self._addressBookHomesCollection
-
-
-    def _setAddressBookHomesCollection(self, value):
-        for service in self._recordTypes.values():
-            service.addressBookHomesCollection = value
-        self._addressBookHomesCollection = value
-
-    addressBookHomesCollection = property(_getAddressBookHomesCollection, _setAddressBookHomesCollection)
-
-
-    def addService(self, service):
-        """
-        Add another service to this aggregate.
-
-        @param service: the service to add
-        @type service: L{IDirectoryService}
-        """
-        service = IDirectoryService(service)
-
-        if service.realmName != self.realmName:
-            assert self.realmName is None, (
-                "Aggregated directory services must have the same realm name: %r != %r\nServices: %r"
-                % (service.realmName, self.realmName, service)
-            )
-
-        if not hasattr(service, "recordTypePrefix"):
-            service.recordTypePrefix = ""
-        prefix = service.recordTypePrefix
-
-        for recordType in (prefix + r for r in service.recordTypes()):
-            if recordType in self._recordTypes:
-                raise DuplicateRecordTypeError(
-                    "%r is in multiple services: %s, %s"
-                    % (recordType, self._recordTypes[recordType], service)
-                )
-            self._recordTypes[recordType] = service
-
-        service.aggregateService = self
-
-
-    def recordTypes(self):
-        return set(self._recordTypes)
-
-
-    def listRecords(self, recordType):
-        records = self._query("listRecords", recordType)
-        if records is None:
-            return ()
-        else:
-            return records
-
-
-    def recordWithShortName(self, recordType, shortName):
-
-        # FIXME: These temporary records shouldn't be needed when we move
-        # to the new data store API.  They're currently needed when purging
-        # deprovisioned users' data.
-        record = self._tmpRecords["shortNames"].get(shortName, None)
-        if record:
-            return record
-
-        return self._query("recordWithShortName", recordType, shortName)
-
-
-    def recordWithUID(self, uid):
-
-        # FIXME: These temporary records shouldn't be needed when we move
-        # to the new data store API.  They're currently needed when purging
-        # deprovisioned users' data.
-        record = self._tmpRecords["uids"].get(uid, None)
-        if record:
-            return record
-
-        return self._queryAll("recordWithUID", uid)
-
-    recordWithGUID = recordWithUID
-
-    def recordWithAuthID(self, authID):
-        return self._queryAll("recordWithAuthID", authID)
-
-
-    def recordWithCalendarUserAddress(self, address):
-        return self._queryAll("recordWithCalendarUserAddress", address)
-
-
-    def recordWithCachedGroupsAlias(self, recordType, alias):
-        """
-        @param recordType: the type of the record to look up.
-        @param alias: the cached-groups alias of the record to look up.
-        @type alias: C{str}
-
-        @return: a deferred L{IDirectoryRecord} with the given cached-groups
-            alias, or C{None} if no such record is found.
-        """
-        service = self.serviceForRecordType(recordType)
-        return service.recordWithCachedGroupsAlias(recordType, alias)
-
-
-    @inlineCallbacks
-    def recordsMatchingFields(self, fields, operand="or", recordType=None):
-
-        if recordType:
-            services = (self.serviceForRecordType(recordType),)
-        else:
-            services = set(self._recordTypes.values())
-
-        generators = []
-        for service in services:
-            generator = (yield service.recordsMatchingFields(fields,
-                operand=operand, recordType=recordType))
-            generators.append(generator)
-
-        returnValue(itertools.chain(*generators))
-
-
-    @inlineCallbacks
-    def recordsMatchingTokens(self, tokens, context=None):
-        """
-        Combine the results from the sub-services.
-
-        Each token is searched for within each record's full name and email
-        address; if each token is found within a record, that record is
-        returned in the results.
-
-        If context is None, all record types are considered.  If context is
-        "location", only locations are considered.  If context is "attendee",
-        only users, groups, and resources are considered.
-
-        @param tokens: The tokens to search on
-        @type tokens: C{list} of C{str} (utf-8 bytes)
-
-        @param context: An indication of what the end user is searching for;
-            "attendee", "location", or None
-        @type context: C{str}
-
-        @return: a deferred sequence of L{IDirectoryRecord}s which match the
-            given tokens and optional context.
-        """
-
-        services = set(self._recordTypes.values())
-
-        generators = []
-        for service in services:
-            generator = (yield service.recordsMatchingTokens(tokens,
-                context=context))
-            generators.append(generator)
-
-        returnValue(itertools.chain(*generators))
-
-
-    def getGroups(self, guids):
-        """
-        Returns a set of group records for the list of guids passed in.  For
-        any group that also contains subgroups, those subgroups' records are
-        also returned, and so on.
-        """
-        recordType = self.recordType_groups
-        service = self.serviceForRecordType(recordType)
-        return service.getGroups(guids)
-
-
-    def serviceForRecordType(self, recordType):
-        try:
-            return self._recordTypes[recordType]
-        except KeyError:
-            raise UnknownRecordTypeError(recordType)
-
-
-    def _query(self, query, recordType, *args):
-        try:
-            service = self.serviceForRecordType(recordType)
-        except UnknownRecordTypeError:
-            return None
-
-        return getattr(service, query)(
-            recordType[len(service.recordTypePrefix):],
-            *[a[len(service.recordTypePrefix):] for a in args]
-        )
-
-
-    def _queryAll(self, query, *args):
-        for service in self._recordTypes.values():
-            try:
-                record = getattr(service, query)(*args)
-            except UnknownRecordTypeError:
-                record = None
-            if record is not None:
-                return record
-        else:
-            return None
-
-
-    def flushCaches(self):
-        for service in self._recordTypes.values():
-            if hasattr(service, "_initCaches"):
-                service._initCaches()
-
-    userRecordTypes = [DirectoryService.recordType_users]
-
-    def requestAvatarId(self, credentials):
-
-        if credentials.authnPrincipal:
-            return credentials.authnPrincipal.record.service.requestAvatarId(credentials)
-
-        raise UnauthorizedLogin("No such user: %s" % (credentials.credentials.username,))
-
-
-    def getResourceInfo(self):
-        results = []
-        for service in self._recordTypes.values():
-            for result in service.getResourceInfo():
-                if result:
-                    results.append(result)
-        return results
-
-
-    def getExternalProxyAssignments(self):
-        service = self.serviceForRecordType(self.recordType_locations)
-        return service.getExternalProxyAssignments()
-
-
-    def createRecord(self, recordType, guid=None, shortNames=(), authIDs=set(),
-        fullName=None, firstName=None, lastName=None, emailAddresses=set(),
-        uid=None, password=None, **kwargs):
-        service = self.serviceForRecordType(recordType)
-        return service.createRecord(recordType, guid=guid,
-            shortNames=shortNames, authIDs=authIDs, fullName=fullName,
-            firstName=firstName, lastName=lastName,
-            emailAddresses=emailAddresses, uid=uid, password=password, **kwargs)
-
-
-    def updateRecord(self, recordType, guid=None, shortNames=(), authIDs=set(),
-        fullName=None, firstName=None, lastName=None, emailAddresses=set(),
-        uid=None, password=None, **kwargs):
-        service = self.serviceForRecordType(recordType)
-        return service.updateRecord(recordType, guid=guid,
-            shortNames=shortNames,
-            authIDs=authIDs, fullName=fullName, firstName=firstName,
-            lastName=lastName, emailAddresses=emailAddresses, uid=uid,
-            password=password, **kwargs)
-
-
-    def destroyRecord(self, recordType, guid=None):
-        service = self.serviceForRecordType(recordType)
-        return service.destroyRecord(recordType, guid=guid)
-
-
-    def setRealm(self, realmName):
-        """
-        Set a new realm name for this and nested services
-        """
-        self.realmName = realmName
-        for service in self._recordTypes.values():
-            service.setRealm(realmName)
-
-
-    def setPrincipalCollection(self, principalCollection):
-        """
-        Set the principal service that the directory relies on for doing proxy tests.
-
-        @param principalService: the principal service.
-        @type principalService: L{DirectoryProvisioningResource}
-        """
-        self.principalCollection = principalCollection
-        for service in self._recordTypes.values():
-            service.setPrincipalCollection(principalCollection)
-
-
-
-class DuplicateRecordTypeError(DirectoryError):
-    """
-    Duplicate record type.
-    """

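The aggregate.py removed above implemented a dispatch-by-record-type pattern: each sub-service claims a set of (optionally prefixed) record types, duplicate claims are rejected, typed lookups go to the owning sub-service, and untyped lookups (uid, GUID, auth ID) are tried against every sub-service in turn. The following is a minimal, self-contained sketch of that pattern only; the SimpleAggregateService and SimpleDuplicateRecordTypeError names are illustrative and hypothetical, not part of any CalendarServer API.

# Illustrative sketch only -- hypothetical names, not CalendarServer code.
class SimpleDuplicateRecordTypeError(Exception):
    """A record type is claimed by more than one sub-service."""


class SimpleAggregateService(object):
    """
    Route directory lookups to the sub-service that owns each record type.
    """

    def __init__(self, services):
        self._byRecordType = {}
        for service in services:
            prefix = getattr(service, "recordTypePrefix", "")
            for recordType in (prefix + r for r in service.recordTypes()):
                if recordType in self._byRecordType:
                    raise SimpleDuplicateRecordTypeError(recordType)
                self._byRecordType[recordType] = service

    def recordTypes(self):
        return set(self._byRecordType)

    def recordWithShortName(self, recordType, shortName):
        # Typed lookup: only the sub-service owning this record type is asked.
        service = self._byRecordType.get(recordType)
        if service is None:
            return None
        return service.recordWithShortName(recordType, shortName)

    def recordWithUID(self, uid):
        # Untyped lookup: no record type to key on, so ask each sub-service
        # in turn until one returns a record.
        for service in set(self._byRecordType.values()):
            record = service.recordWithUID(uid)
            if record is not None:
                return record
        return None
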
Deleted: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/appleopendirectory.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/appleopendirectory.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/appleopendirectory.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1,1584 +0,0 @@
-# -*- test-case-name: twistedcaldav.directory.test.test_opendirectory -*-
-##
-# Copyright (c) 2006-2014 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-"""
-Apple OpenDirectory directory service implementation.
-"""
-
-__all__ = [
-    "OpenDirectoryService",
-    "OpenDirectoryInitError",
-]
-
-import sys
-import time
-from uuid import UUID
-
-from twisted.internet.defer import succeed, inlineCallbacks, returnValue
-from twisted.cred.credentials import UsernamePassword
-from txweb2.auth.digest import DigestedCredentials
-from twext.python.log import Logger
-
-from twistedcaldav.directory.cachingdirectory import CachingDirectoryService, \
-    CachingDirectoryRecord
-from twistedcaldav.directory.directory import DirectoryService, DirectoryRecord
-from twistedcaldav.directory.directory import DirectoryError, UnknownRecordTypeError
-from twistedcaldav.directory.util import splitIntoBatches
-from twistedcaldav.directory.principal import cuAddressConverter
-
-from calendarserver.platform.darwin.od import opendirectory, dsattributes, dsquery
-
-
-
-class OpenDirectoryService(CachingDirectoryService):
-    """
-    OpenDirectory implementation of L{IDirectoryService}.
-    """
-    log = Logger()
-
-    baseGUID = "891F8321-ED02-424C-BA72-89C32F215C1E"
-
-    def __repr__(self):
-        return "<%s %r: %r>" % (self.__class__.__name__, self.realmName, self.node)
-
-
-    def __init__(self, params, odModule=None):
-        """
-        @param params: a dictionary containing the following keys:
-
-            - node: an OpenDirectory node name to bind to.
-
-            - restrictEnabledRecords: C{True} if a group in the directory is to
-              be used to determine which calendar users are enabled.
-
-            - restrictToGroup: C{str} guid or name of group used to restrict
-              enabled users.
-
-            - cacheTimeout: C{int} number of minutes before the cache is
-              invalidated.
-
-            - batchSize: C{int} batch size used when splitting up large
-              queries.
-
-            - negativeCaching: C{bool}; if C{True}, cache the fact that a
-              record wasn't found.
-        """
-        defaults = {
-            'node' : '/Search',
-            'restrictEnabledRecords' : False,
-            'restrictToGroup' : '',
-            'cacheTimeout' : 1, # Minutes
-            'batchSize' : 100, # for splitting up large queries
-            'negativeCaching' : False,
-            'recordTypes' : (
-                self.recordType_users,
-                self.recordType_groups,
-            ),
-            'augmentService' : None,
-            'groupMembershipCache' : None,
-        }
-        ignored = ('requireComputerRecord',)
-        params = self.getParams(params, defaults, ignored)
-
-        self._recordTypes = params['recordTypes']
-
-        super(OpenDirectoryService, self).__init__(params['cacheTimeout'],
-                                                   params['negativeCaching'])
-
-        if odModule is None:
-            odModule = opendirectory
-        self.odModule = odModule
-
-        try:
-            directory = self.odModule.odInit(params['node'])
-        except self.odModule.ODError, e:
-            self.log.error("OpenDirectory (node=%s) Initialization error: %s" % (params['node'], e))
-            raise
-
-        self.augmentService = params['augmentService']
-        self.groupMembershipCache = params['groupMembershipCache']
-        self.realmName = params['node']
-        self.directory = directory
-        self.node = params['node']
-        self.restrictEnabledRecords = params['restrictEnabledRecords']
-        self.restrictToGroup = params['restrictToGroup']
-        self.batchSize = params['batchSize']
-        try:
-            UUID(self.restrictToGroup)
-        except (ValueError, TypeError):
-            self.restrictToGUID = False
-        else:
-            self.restrictToGUID = True
-        self.restrictedTimestamp = 0
-
-        # Set up the /Local/Default node if it's in the search path so we can
-        # send custom queries to it
-        self.localNode = None
-        try:
-            if self.node == "/Search":
-                result = self.odModule.getNodeAttributes(self.directory, "/Search",
-                    (dsattributes.kDS1AttrSearchPath,))
-                if "/Local/Default" in result[dsattributes.kDS1AttrSearchPath]:
-                    try:
-                        self.localNode = self.odModule.odInit("/Local/Default")
-                    except self.odModule.ODError, e:
-                        self.log.error("Failed to open /Local/Default): %s" % (e,))
-        except AttributeError:
-            pass
-
-
-    @property
-    def restrictedGUIDs(self):
-        """
-        Look up (and cache) the set of guids that are members of the
-        restrictToGroup.  If restrictToGroup is not set, return None to
-        indicate there are no group restrictions.
-        """
-        if self.restrictEnabledRecords:
-            if time.time() - self.restrictedTimestamp > self.cacheTimeout:
-                attributeToMatch = dsattributes.kDS1AttrGeneratedUID if self.restrictToGUID else dsattributes.kDSNAttrRecordName
-                valueToMatch = self.restrictToGroup
-                self.log.debug("Doing restricted group membership check")
-                self.log.debug("opendirectory.queryRecordsWithAttribute_list(%r,%r,%r,%r,%r,%r,%r)" % (
-                    self.directory,
-                    attributeToMatch,
-                    valueToMatch,
-                    dsattributes.eDSExact,
-                    False,
-                    dsattributes.kDSStdRecordTypeGroups,
-                    [dsattributes.kDSNAttrGroupMembers, dsattributes.kDSNAttrNestedGroups, ],
-                ))
-                results = self.odModule.queryRecordsWithAttribute_list(
-                    self.directory,
-                    attributeToMatch,
-                    valueToMatch,
-                    dsattributes.eDSExact,
-                    False,
-                    dsattributes.kDSStdRecordTypeGroups,
-                    [dsattributes.kDSNAttrGroupMembers, dsattributes.kDSNAttrNestedGroups, ],
-                )
-
-                if len(results) == 1:
-                    members = results[0][1].get(dsattributes.kDSNAttrGroupMembers, [])
-                    nestedGroups = results[0][1].get(dsattributes.kDSNAttrNestedGroups, [])
-                else:
-                    members = []
-                    nestedGroups = []
-                self._cachedRestrictedGUIDs = set(self._expandGroupMembership(members, nestedGroups, returnGroups=True))
-                self.log.debug("Got %d restricted group members" % (len(self._cachedRestrictedGUIDs),))
-                self.restrictedTimestamp = time.time()
-            return self._cachedRestrictedGUIDs
-        else:
-            # No restrictions
-            return None
-
-
-    def __cmp__(self, other):
-        if not isinstance(other, OpenDirectoryService):
-            return NotImplemented
-
-        for attr in ("directory", "node"):
-            diff = cmp(getattr(self, attr), getattr(other, attr))
-            if diff != 0:
-                return diff
-        return 0
-
-
-    def __hash__(self):
-        h = hash(self.__class__.__name__)
-        for attr in ("node",):
-            h = (h + hash(getattr(self, attr))) & sys.maxint
-        return h
-
-
-    def _expandGroupMembership(self, members, nestedGroups, processedGUIDs=None, returnGroups=False):
-
-        if processedGUIDs is None:
-            processedGUIDs = set()
-
-        if isinstance(members, str):
-            members = [members]
-
-        if isinstance(nestedGroups, str):
-            nestedGroups = [nestedGroups]
-
-        for memberGUID in members:
-            if memberGUID not in processedGUIDs:
-                processedGUIDs.add(memberGUID)
-                yield memberGUID
-
-        for groupGUID in nestedGroups:
-            if groupGUID in processedGUIDs:
-                continue
-
-            self.log.debug("opendirectory.queryRecordsWithAttribute_list(%r,%r,%r,%r,%r,%r,%r)" % (
-                self.directory,
-                dsattributes.kDS1AttrGeneratedUID,
-                groupGUID,
-                dsattributes.eDSExact,
-                False,
-                dsattributes.kDSStdRecordTypeGroups,
-                [dsattributes.kDSNAttrGroupMembers, dsattributes.kDSNAttrNestedGroups]
-            ))
-            result = self.odModule.queryRecordsWithAttribute_list(
-                self.directory,
-                dsattributes.kDS1AttrGeneratedUID,
-                groupGUID,
-                dsattributes.eDSExact,
-                False,
-                dsattributes.kDSStdRecordTypeGroups,
-                [dsattributes.kDSNAttrGroupMembers, dsattributes.kDSNAttrNestedGroups]
-            )
-
-            if not result:
-                self.log.error("Couldn't find group %s when trying to expand nested groups."
-                             % (groupGUID,))
-                continue
-
-            group = result[0][1]
-
-            processedGUIDs.add(groupGUID)
-            if returnGroups:
-                yield groupGUID
-
-            for GUID in self._expandGroupMembership(
-                group.get(dsattributes.kDSNAttrGroupMembers, []),
-                group.get(dsattributes.kDSNAttrNestedGroups, []),
-                processedGUIDs,
-                returnGroups,
-            ):
-                yield GUID
-
-
-    def recordTypes(self):
-        return self._recordTypes
-
-
-    def listRecords(self, recordType):
-        """
-        Retrieve all the records of recordType from the directory, but for
-        expediency don't index them or cache them locally, nor in memcached.
-        """
-
-        records = []
-
-        attrs = [
-            dsattributes.kDS1AttrGeneratedUID,
-            dsattributes.kDSNAttrRecordName,
-            dsattributes.kDS1AttrDistinguishedName,
-        ]
-
-        if recordType == DirectoryService.recordType_users:
-            ODRecordType = self._toODRecordTypes[recordType]
-
-        elif recordType in (
-            DirectoryService.recordType_resources,
-            DirectoryService.recordType_locations,
-        ):
-            attrs.append(dsattributes.kDSNAttrResourceInfo)
-            ODRecordType = self._toODRecordTypes[recordType]
-
-        elif recordType == DirectoryService.recordType_groups:
-            attrs.append(dsattributes.kDSNAttrGroupMembers)
-            attrs.append(dsattributes.kDSNAttrNestedGroups)
-            ODRecordType = dsattributes.kDSStdRecordTypeGroups
-
-        self.log.debug("Querying OD for all %s records" % (recordType,))
-        results = self.odModule.listAllRecordsWithAttributes_list(
-            self.directory, ODRecordType, attrs)
-        self.log.debug("Retrieved %d %s records" % (len(results), recordType,))
-
-        for key, value in results:
-            recordGUID = value.get(dsattributes.kDS1AttrGeneratedUID)
-            if not recordGUID:
-                self.log.warn("Ignoring record missing GUID: %s %s" %
-                    (key, value,))
-                continue
-
-            # Skip if group restriction is in place and guid is not
-            # a member (but don't skip any groups)
-            if (recordType != self.recordType_groups and
-                self.restrictedGUIDs is not None):
-                if str(recordGUID) not in self.restrictedGUIDs:
-                    continue
-
-            recordShortNames = self._uniqueTupleFromAttribute(
-                value.get(dsattributes.kDSNAttrRecordName))
-            recordFullName = value.get(
-                dsattributes.kDS1AttrDistinguishedName)
-
-            proxyGUIDs = ()
-            readOnlyProxyGUIDs = ()
-
-            if recordType in (
-                DirectoryService.recordType_resources,
-                DirectoryService.recordType_locations,
-            ):
-                resourceInfo = value.get(dsattributes.kDSNAttrResourceInfo)
-                if resourceInfo is not None:
-                    if type(resourceInfo) is not str:
-                        resourceInfo = resourceInfo[0]
-                    try:
-                        (
-                            _ignore_autoSchedule,
-                            proxy,
-                            readOnlyProxy
-                        ) = self.parseResourceInfo(
-                            resourceInfo,
-                            recordGUID,
-                            recordType,
-                            recordShortNames[0]
-                        )
-                    except ValueError:
-                        continue
-                    if proxy:
-                        proxyGUIDs = (proxy,)
-                    if readOnlyProxy:
-                        readOnlyProxyGUIDs = (readOnlyProxy,)
-
-            # Special case for groups, which have members.
-            if recordType == self.recordType_groups:
-                memberGUIDs = value.get(dsattributes.kDSNAttrGroupMembers)
-                if memberGUIDs is None:
-                    memberGUIDs = ()
-                elif type(memberGUIDs) is str:
-                    memberGUIDs = (memberGUIDs,)
-                nestedGUIDs = value.get(dsattributes.kDSNAttrNestedGroups)
-                if nestedGUIDs:
-                    if type(nestedGUIDs) is str:
-                        nestedGUIDs = (nestedGUIDs,)
-                    memberGUIDs += tuple(nestedGUIDs)
-                else:
-                    nestedGUIDs = ()
-            else:
-                memberGUIDs = ()
-                nestedGUIDs = ()
-
-            record = OpenDirectoryRecord(
-                service=self,
-                recordType=recordType,
-                guid=recordGUID,
-                nodeName="",
-                shortNames=recordShortNames,
-                authIDs=(),
-                fullName=recordFullName,
-                firstName="",
-                lastName="",
-                emailAddresses="",
-                memberGUIDs=memberGUIDs,
-                nestedGUIDs=nestedGUIDs,
-                extProxies=proxyGUIDs,
-                extReadOnlyProxies=readOnlyProxyGUIDs,
-            )
-
-            # (Copied from below)
-            # Look up augment information
-            # TODO: this needs to be deferred but for now we hard code
-            # the deferred result because we know it is completing
-            # immediately.
-            if self.augmentService is not None:
-                d = self.augmentService.getAugmentRecord(record.guid,
-                    recordType)
-                d.addCallback(lambda x: record.addAugmentInformation(x))
-            records.append(record)
-
-        self.log.debug("ListRecords returning %d %s records" % (len(records),
-            recordType))
-
-        return records
-
-
-    def groupsForGUID(self, guid):
-
-        attrs = [
-            dsattributes.kDS1AttrGeneratedUID,
-        ]
-
-        recordType = dsattributes.kDSStdRecordTypeGroups
-
-        guids = set()
-
-        self.log.debug("Looking up which groups %s is a member of" % (guid,))
-        try:
-            self.log.debug("opendirectory.queryRecordsWithAttribute_list(%r,%r,%r,%r,%r,%r,%r)" % (
-                self.directory,
-                dsattributes.kDSNAttrGroupMembers,
-                guid,
-                dsattributes.eDSExact,
-                False,
-                recordType,
-                attrs,
-            ))
-            results = self.odModule.queryRecordsWithAttribute_list(
-                self.directory,
-                dsattributes.kDSNAttrGroupMembers,
-                guid,
-                dsattributes.eDSExact,
-                False,
-                recordType,
-                attrs,
-            )
-        except self.odModule.ODError, ex:
-            self.log.error("OpenDirectory (node=%s) error: %s" % (self.realmName, str(ex)))
-            raise
-
-        for (_ignore_recordShortName, value) in results:
-
-            # Now get useful record info.
-            recordGUID = value.get(dsattributes.kDS1AttrGeneratedUID)
-            if recordGUID:
-                guids.add(recordGUID)
-
-        try:
-            self.log.debug("opendirectory.queryRecordsWithAttribute_list(%r,%r,%r,%r,%r,%r,%r)" % (
-                self.directory,
-                dsattributes.kDSNAttrNestedGroups,
-                guid,
-                dsattributes.eDSExact,
-                False,
-                recordType,
-                attrs,
-            ))
-            results = self.odModule.queryRecordsWithAttribute_list(
-                self.directory,
-                dsattributes.kDSNAttrNestedGroups,
-                guid,
-                dsattributes.eDSExact,
-                False,
-                recordType,
-                attrs,
-            )
-        except self.odModule.ODError, ex:
-            self.log.error("OpenDirectory (node=%s) error: %s" % (self.realmName, str(ex)))
-            raise
-
-        for (_ignore_recordShortName, value) in results:
-
-            # Now get useful record info.
-            recordGUID = value.get(dsattributes.kDS1AttrGeneratedUID)
-            if recordGUID:
-                guids.add(recordGUID)
-
-        self.log.debug("%s is a member of %d groups" % (guid, len(guids)))
-
-        return guids
-
-    _ODFields = {
-        'fullName' : {
-            'odField' : dsattributes.kDS1AttrDistinguishedName,
-            'appliesTo' : set([
-                dsattributes.kDSStdRecordTypeUsers,
-                dsattributes.kDSStdRecordTypeGroups,
-                dsattributes.kDSStdRecordTypeResources,
-                dsattributes.kDSStdRecordTypePlaces,
-            ]),
-        },
-        'firstName' : {
-            'odField' : dsattributes.kDS1AttrFirstName,
-            'appliesTo' : set([
-                dsattributes.kDSStdRecordTypeUsers,
-            ]),
-        },
-        'lastName' : {
-            'odField' : dsattributes.kDS1AttrLastName,
-            'appliesTo' : set([
-                dsattributes.kDSStdRecordTypeUsers,
-            ]),
-        },
-        'emailAddresses' : {
-            'odField' : dsattributes.kDSNAttrEMailAddress,
-            'appliesTo' : set([
-                dsattributes.kDSStdRecordTypeUsers,
-                dsattributes.kDSStdRecordTypeGroups,
-            ]),
-        },
-        'recordName' : {
-            'odField' : dsattributes.kDSNAttrRecordName,
-            'appliesTo' : set([
-                dsattributes.kDSStdRecordTypeUsers,
-                dsattributes.kDSStdRecordTypeGroups,
-                dsattributes.kDSStdRecordTypeResources,
-                dsattributes.kDSStdRecordTypePlaces,
-            ]),
-        },
-        'guid' : {
-            'odField' : dsattributes.kDS1AttrGeneratedUID,
-            'appliesTo' : set([
-                dsattributes.kDSStdRecordTypeUsers,
-                dsattributes.kDSStdRecordTypeGroups,
-                dsattributes.kDSStdRecordTypeResources,
-                dsattributes.kDSStdRecordTypePlaces,
-            ]),
-        },
-    }
-
-    _toODRecordTypes = {
-        DirectoryService.recordType_users :
-            dsattributes.kDSStdRecordTypeUsers,
-        DirectoryService.recordType_groups :
-            dsattributes.kDSStdRecordTypeGroups,
-        DirectoryService.recordType_resources :
-            dsattributes.kDSStdRecordTypeResources,
-        DirectoryService.recordType_locations :
-            dsattributes.kDSStdRecordTypePlaces,
-    }
-
-    _fromODRecordTypes = dict([(b, a) for a, b in _toODRecordTypes.iteritems()])
-
-    def _uniqueTupleFromAttribute(self, attribute):
-        if attribute:
-            if isinstance(attribute, str):
-                return (attribute,)
-            else:
-                s = set()
-                return tuple([(s.add(x), x)[1] for x in attribute if x not in s])
-        else:
-            return ()
-
-
-    def _setFromAttribute(self, attribute, lower=False):
-        if attribute:
-            if isinstance(attribute, str):
-                return set((attribute.lower() if lower else attribute,))
-            else:
-                return set([item.lower() if lower else item for item in attribute])
-        else:
-            return set()
-
-
-    def recordsMatchingTokens(self, tokens, context=None, lookupMethod=None):
-        """
-        @param tokens: The tokens to search on
-        @type tokens: C{list} of C{str} (utf-8 bytes)
-        @param context: An indication of what the end user is searching
-            for; "attendee", "location", or None
-        @type context: C{str}
-        @return: a deferred sequence of L{IDirectoryRecord}s which
-            match the given tokens and optional context.
-
-        Each token is searched for within each record's full name and
-        email address; if each token is found within a record, that
-        record is returned in the results.
-
-        If context is None, all record types are considered.  If
-        context is "location", only locations are considered.  If
-        context is "attendee", only users, groups, and resources
-        are considered.
-        """
-
-        if lookupMethod is None:
-            lookupMethod = self.odModule.queryRecordsWithAttributes_list
-
-        def collectResults(results):
-            self.log.debug("Got back %d records from OD" % (len(results),))
-            for _ignore_key, value in results:
-                # self.log.debug("OD result: {key} {value}", key=key, value=value)
-                try:
-                    recordNodeName = value.get(
-                        dsattributes.kDSNAttrMetaNodeLocation)
-                    recordShortNames = self._uniqueTupleFromAttribute(
-                        value.get(dsattributes.kDSNAttrRecordName))
-
-                    recordGUID = value.get(dsattributes.kDS1AttrGeneratedUID)
-
-                    recordType = value.get(dsattributes.kDSNAttrRecordType)
-                    if isinstance(recordType, list):
-                        recordType = recordType[0]
-                    if not recordType:
-                        continue
-                    recordType = self._fromODRecordTypes[recordType]
-
-                    # Skip if group restriction is in place and guid is not
-                    # a member (but don't skip any groups)
-                    if (recordType != self.recordType_groups and
-                        self.restrictedGUIDs is not None):
-                        if str(recordGUID) not in self.restrictedGUIDs:
-                            continue
-
-                    recordAuthIDs = self._setFromAttribute(
-                        value.get(dsattributes.kDSNAttrAltSecurityIdentities))
-                    recordFullName = value.get(
-                        dsattributes.kDS1AttrDistinguishedName)
-                    recordFirstName = value.get(dsattributes.kDS1AttrFirstName)
-                    recordLastName = value.get(dsattributes.kDS1AttrLastName)
-                    recordEmailAddresses = self._setFromAttribute(
-                        value.get(dsattributes.kDSNAttrEMailAddress),
-                        lower=True)
-
-                    # Special case for groups, which have members.
-                    if recordType == self.recordType_groups:
-                        memberGUIDs = value.get(dsattributes.kDSNAttrGroupMembers)
-                        if memberGUIDs is None:
-                            memberGUIDs = ()
-                        elif type(memberGUIDs) is str:
-                            memberGUIDs = (memberGUIDs,)
-                        nestedGUIDs = value.get(dsattributes.kDSNAttrNestedGroups)
-                        if nestedGUIDs:
-                            if type(nestedGUIDs) is str:
-                                nestedGUIDs = (nestedGUIDs,)
-                            memberGUIDs += tuple(nestedGUIDs)
-                        else:
-                            nestedGUIDs = ()
-                    else:
-                        nestedGUIDs = ()
-                        memberGUIDs = ()
-
-                    # Create records but don't store them in our index or
-                    # send them to memcached, because these are transient,
-                    # existing only so we can create principal resource
-                    # objects that are used to generate the REPORT result.
-
-                    record = OpenDirectoryRecord(
-                        service=self,
-                        recordType=recordType,
-                        guid=recordGUID,
-                        nodeName=recordNodeName,
-                        shortNames=recordShortNames,
-                        authIDs=recordAuthIDs,
-                        fullName=recordFullName,
-                        firstName=recordFirstName,
-                        lastName=recordLastName,
-                        emailAddresses=recordEmailAddresses,
-                        memberGUIDs=memberGUIDs,
-                        nestedGUIDs=nestedGUIDs,
-                        extProxies=(),
-                        extReadOnlyProxies=(),
-                    )
-
-                    # (Copied from below)
-                    # Look up augment information
-                    # TODO: this needs to be deferred but for now we hard code
-                    # the deferred result because we know it is completing
-                    # immediately.
-                    if self.augmentService is not None:
-                        d = self.augmentService.getAugmentRecord(record.guid,
-                            recordType)
-                        d.addCallback(lambda x: record.addAugmentInformation(x))
-
-                    yield record
-
-                except KeyError:
-                    pass
-
-
-        def multiQuery(directory, queries, recordTypes, attrs):
-            byGUID = {}
-            sets = []
-
-            caseInsensitive = True
-            for compound in queries:
-                compound = compound.generate()
-
-                try:
-                    startTime = time.time()
-                    queryResults = lookupMethod(
-                        directory,
-                        compound,
-                        caseInsensitive,
-                        recordTypes,
-                        attrs,
-                    )
-                    totalTime = time.time() - startTime
-
-                    newSet = set()
-                    for recordName, data in queryResults:
-                        guid = data.get(dsattributes.kDS1AttrGeneratedUID, None)
-                        if guid:
-                            byGUID[guid] = (recordName, data)
-                            newSet.add(guid)
-
-                    self.log.debug("Attendee OD query: Types %s, Query %s, %.2f sec, %d results" %
-                        (recordTypes, compound, totalTime, len(queryResults)))
-                    sets.append(newSet)
-
-                except self.odModule.ODError, e:
-                    self.log.error("Ignoring OD Error: %d %s" %
-                        (e.message[1], e.message[0]))
-                    continue
-
-            if not sets:
-                return []
-            results = []
-            for guid in set.intersection(*sets):
-                recordName, data = byGUID.get(guid, None)
-                if data is not None:
-                    results.append((data[dsattributes.kDSNAttrRecordName], data))
-            return results
-
-        localQueries = buildLocalQueriesFromTokens(tokens, self._ODFields)
-        nestedQuery = buildNestedQueryFromTokens(tokens, self._ODFields)
-
-        # Starting with the record types corresponding to the context...
-        recordTypes = self.recordTypesForSearchContext(context)
-        # ...limit to the types this service supports...
-        recordTypes = [r for r in recordTypes if r in self.recordTypes()]
-        # ...and map those to OD representations...
-        recordTypes = [self._toODRecordTypes[r] for r in recordTypes]
-
-        if recordTypes:
-            # Perform the complex/nested query.  If there was more than one
-            # token, this won't match anything in /Local, therefore we run
-            # the un-nested queries below and AND the results ourselves in
-            # multiQuery.
-            results = multiQuery(
-                self.directory,
-                [nestedQuery],
-                recordTypes,
-                [
-                    dsattributes.kDS1AttrGeneratedUID,
-                    dsattributes.kDSNAttrRecordName,
-                    dsattributes.kDSNAttrAltSecurityIdentities,
-                    dsattributes.kDSNAttrRecordType,
-                    dsattributes.kDS1AttrDistinguishedName,
-                    dsattributes.kDS1AttrFirstName,
-                    dsattributes.kDS1AttrLastName,
-                    dsattributes.kDSNAttrEMailAddress,
-                    dsattributes.kDSNAttrMetaNodeLocation,
-                    dsattributes.kDSNAttrGroupMembers,
-                    dsattributes.kDSNAttrNestedGroups,
-                ]
-            )
-            if self.localNode is not None and len(tokens) > 1:
-                # /Local is in our search path and the complex query above
-                # would not have matched anything in /Local.  So now run
-                # the un-nested queries.
-                results.extend(
-                    multiQuery(
-                        self.localNode,
-                        localQueries,
-                        recordTypes,
-                        [
-                            dsattributes.kDS1AttrGeneratedUID,
-                            dsattributes.kDSNAttrRecordName,
-                            dsattributes.kDSNAttrAltSecurityIdentities,
-                            dsattributes.kDSNAttrRecordType,
-                            dsattributes.kDS1AttrDistinguishedName,
-                            dsattributes.kDS1AttrFirstName,
-                            dsattributes.kDS1AttrLastName,
-                            dsattributes.kDSNAttrEMailAddress,
-                            dsattributes.kDSNAttrMetaNodeLocation,
-                            dsattributes.kDSNAttrGroupMembers,
-                            dsattributes.kDSNAttrNestedGroups,
-                        ]
-                    )
-                )
-            return succeed(collectResults(results))
-        else:
-            return succeed([])
-
-
-    def recordsMatchingFields(self, fields, operand="or", recordType=None,
-        lookupMethod=None):
-
-        if lookupMethod is None:
-            lookupMethod = self.odModule.queryRecordsWithAttribute_list
-
-        # Note that OD applies case-sensitivity globally across the entire
-        # query, not per expression, so the current code uses whatever is
-        # specified in the last field in the fields list
-
-        def collectResults(results):
-            self.log.debug("Got back %d records from OD" % (len(results),))
-            for _ignore_key, value in results:
-                # self.log.debug("OD result: {key} {value}", key=key, value=value)
-                try:
-                    recordNodeName = value.get(
-                        dsattributes.kDSNAttrMetaNodeLocation)
-                    recordShortNames = self._uniqueTupleFromAttribute(
-                        value.get(dsattributes.kDSNAttrRecordName))
-
-                    recordGUID = value.get(dsattributes.kDS1AttrGeneratedUID)
-
-                    recordType = value.get(dsattributes.kDSNAttrRecordType)
-                    if isinstance(recordType, list):
-                        recordType = recordType[0]
-                    if not recordType:
-                        continue
-                    recordType = self._fromODRecordTypes[recordType]
-
-                    # Skip if group restriction is in place and guid is not
-                    # a member (but don't skip any groups)
-                    if (recordType != self.recordType_groups and
-                        self.restrictedGUIDs is not None):
-                        if str(recordGUID) not in self.restrictedGUIDs:
-                            continue
-
-                    recordAuthIDs = self._setFromAttribute(
-                        value.get(dsattributes.kDSNAttrAltSecurityIdentities))
-                    recordFullName = value.get(
-                        dsattributes.kDS1AttrDistinguishedName)
-                    recordFirstName = value.get(dsattributes.kDS1AttrFirstName)
-                    recordLastName = value.get(dsattributes.kDS1AttrLastName)
-                    recordEmailAddresses = self._setFromAttribute(
-                        value.get(dsattributes.kDSNAttrEMailAddress),
-                        lower=True)
-
-                    # Special case for groups, which have members.
-                    if recordType == self.recordType_groups:
-                        memberGUIDs = value.get(dsattributes.kDSNAttrGroupMembers)
-                        if memberGUIDs is None:
-                            memberGUIDs = ()
-                        elif type(memberGUIDs) is str:
-                            memberGUIDs = (memberGUIDs,)
-                        nestedGUIDs = value.get(dsattributes.kDSNAttrNestedGroups)
-                        if nestedGUIDs:
-                            if type(nestedGUIDs) is str:
-                                nestedGUIDs = (nestedGUIDs,)
-                            memberGUIDs += tuple(nestedGUIDs)
-                        else:
-                            nestedGUIDs = ()
-                    else:
-                        nestedGUIDs = ()
-                        memberGUIDs = ()
-
-                    # Create records but don't store them in our index or
-                    # send them to memcached, because these are transient,
-                    # existing only so we can create principal resource
-                    # objects that are used to generate the REPORT result.
-
-                    record = OpenDirectoryRecord(
-                        service=self,
-                        recordType=recordType,
-                        guid=recordGUID,
-                        nodeName=recordNodeName,
-                        shortNames=recordShortNames,
-                        authIDs=recordAuthIDs,
-                        fullName=recordFullName,
-                        firstName=recordFirstName,
-                        lastName=recordLastName,
-                        emailAddresses=recordEmailAddresses,
-                        memberGUIDs=memberGUIDs,
-                        nestedGUIDs=nestedGUIDs,
-                        extProxies=(),
-                        extReadOnlyProxies=(),
-                    )
-
-                    # (Copied from below)
-                    # Look up augment information
-                    # TODO: this needs to be deferred but for now we hard code
-                    # the deferred result because we know it is completing
-                    # immediately.
-                    if self.augmentService is not None:
-                        d = self.augmentService.getAugmentRecord(record.guid,
-                            recordType)
-                        d.addCallback(lambda x: record.addAugmentInformation(x))
-
-                    yield record
-
-                except KeyError:
-                    pass
-
-
-        def multiQuery(directory, queries, attrs, operand):
-            byGUID = {}
-            sets = []
-
-            for query, recordTypes in queries.iteritems():
-                ODField, value, caseless, matchType = query
-                if matchType == "starts-with":
-                    comparison = dsattributes.eDSStartsWith
-                elif matchType == "contains":
-                    comparison = dsattributes.eDSContains
-                else:
-                    comparison = dsattributes.eDSExact
-
-                self.log.debug("Calling OD: Types %s, Field %s, Value %s, Match %s, Caseless %s" %
-                    (recordTypes, ODField, value, matchType, caseless))
-
-                try:
-                    queryResults = lookupMethod(
-                        directory,
-                        ODField,
-                        value,
-                        comparison,
-                        caseless,
-                        recordTypes,
-                        attrs,
-                    )
-
-                    if operand == dsquery.expression.OR:
-                        for recordName, data in queryResults:
-                            guid = data.get(dsattributes.kDS1AttrGeneratedUID, None)
-                            if guid:
-                                byGUID[guid] = (recordName, data)
-                    else: # AND
-                        newSet = set()
-                        for recordName, data in queryResults:
-                            guid = data.get(dsattributes.kDS1AttrGeneratedUID, None)
-                            if guid:
-                                byGUID[guid] = (recordName, data)
-                                newSet.add(guid)
-
-                        sets.append(newSet)
-
-                except self.odModule.ODError, e:
-                    self.log.error("Ignoring OD Error: %d %s" %
-                        (e.message[1], e.message[0]))
-                    continue
-
-            if operand == dsquery.expression.OR:
-                return byGUID.values()
-
-            else:
-                if not sets:
-                    return []
-                results = []
-                for guid in set.intersection(*sets):
-                    recordName, data = byGUID.get(guid, None)
-                    if data is not None:
-                        results.append((data[dsattributes.kDSNAttrRecordName], data))
-                return results
-
-        operand = (dsquery.expression.OR if operand == "or"
-            else dsquery.expression.AND)
-
-        if recordType is None:
-            # The client is looking for records in any of the four types
-            recordTypes = set(self._toODRecordTypes.values())
-        else:
-            # The client is after only one recordType
-            recordTypes = [self._toODRecordTypes[recordType]]
-
-        queries = buildQueries(recordTypes, fields, self._ODFields)
-
-        results = multiQuery(
-            self.directory,
-            queries,
-            [
-                dsattributes.kDS1AttrGeneratedUID,
-                dsattributes.kDSNAttrRecordName,
-                dsattributes.kDSNAttrAltSecurityIdentities,
-                dsattributes.kDSNAttrRecordType,
-                dsattributes.kDS1AttrDistinguishedName,
-                dsattributes.kDS1AttrFirstName,
-                dsattributes.kDS1AttrLastName,
-                dsattributes.kDSNAttrEMailAddress,
-                dsattributes.kDSNAttrMetaNodeLocation,
-                dsattributes.kDSNAttrGroupMembers,
-                dsattributes.kDSNAttrNestedGroups,
-            ],
-            operand
-        )
-        return succeed(collectResults(results))
-
-
-    def queryDirectory(self, recordTypes, indexType, indexKey,
-        lookupMethod=None):
-
-        if lookupMethod is None:
-            lookupMethod = self.odModule.queryRecordsWithAttribute_list
-
-        origIndexKey = indexKey
-        if indexType == self.INDEX_TYPE_CUA:
-            # The directory doesn't contain CUAs, so we need to convert
-            # the CUA to the appropriate field name and value:
-            queryattr, indexKey = cuAddressConverter(indexKey)
-            # queryattr will be one of:
-            # guid, emailAddresses, or recordName
-            # ...which will need to be mapped to DS
-            queryattr = self._ODFields[queryattr]['odField']
-
-        else:
-            queryattr = {
-                self.INDEX_TYPE_SHORTNAME : dsattributes.kDSNAttrRecordName,
-                self.INDEX_TYPE_GUID      : dsattributes.kDS1AttrGeneratedUID,
-                self.INDEX_TYPE_AUTHID    : dsattributes.kDSNAttrAltSecurityIdentities,
-            }.get(indexType)
-            assert queryattr is not None, "Invalid type for record faulting query"
-        # Make all OD queries case insensitive
-        caseInsensitive = True
-
-        results = []
-        for recordType in recordTypes:
-
-            attrs = [
-                dsattributes.kDS1AttrGeneratedUID,
-                dsattributes.kDSNAttrRecordName,
-                dsattributes.kDSNAttrAltSecurityIdentities,
-                dsattributes.kDSNAttrRecordType,
-                dsattributes.kDS1AttrDistinguishedName,
-                dsattributes.kDS1AttrFirstName,
-                dsattributes.kDS1AttrLastName,
-                dsattributes.kDSNAttrEMailAddress,
-                dsattributes.kDSNAttrMetaNodeLocation,
-            ]
-
-            if recordType == DirectoryService.recordType_users:
-                listRecordTypes = [self._toODRecordTypes[recordType]]
-
-            elif recordType in (
-                DirectoryService.recordType_resources,
-                DirectoryService.recordType_locations,
-            ):
-                if queryattr == dsattributes.kDSNAttrEMailAddress:
-                    continue
-
-                listRecordTypes = [self._toODRecordTypes[recordType]]
-
-            elif recordType == DirectoryService.recordType_groups:
-
-                if queryattr == dsattributes.kDSNAttrEMailAddress:
-                    continue
-
-                listRecordTypes = [dsattributes.kDSStdRecordTypeGroups]
-                attrs.append(dsattributes.kDSNAttrGroupMembers)
-                attrs.append(dsattributes.kDSNAttrNestedGroups)
-
-            else:
-                raise UnknownRecordTypeError("Unknown OpenDirectory record type: %s" % (recordType))
-
-            # Because we're getting transient OD error -14987, try 3 times:
-            for _ignore in xrange(3):
-                try:
-                    self.log.debug("opendirectory.queryRecordsWithAttribute_list(%r,%r,%r,%r,%r,%r,%r)" % (
-                        self.directory,
-                        queryattr,
-                        indexKey,
-                        dsattributes.eDSExact,
-                        caseInsensitive,
-                        listRecordTypes,
-                        attrs,
-                    ))
-                    lookedUp = lookupMethod(
-                            self.directory,
-                            queryattr,
-                            indexKey,
-                            dsattributes.eDSExact,
-                            caseInsensitive,
-                            listRecordTypes,
-                            attrs,
-                        )
-                    results.extend(lookedUp)
-
-                except self.odModule.ODError, ex:
-                    if ex.message[1] == -14987:
-                        # Fall through and retry
-                        self.log.error("OpenDirectory (node=%s) error: %s" % (self.realmName, str(ex)))
-                    elif ex.message[1] == -14140 or ex.message[1] == -14200:
-                        # Unsupported attribute on record - don't fail
-                        return
-                    else:
-                        self.log.error("OpenDirectory (node=%s) error: %s" % (self.realmName, str(ex)))
-                        raise
-                else:
-                    # Success, so break the retry loop
-                    break
-
-        self.log.debug("opendirectory.queryRecordsWithAttribute_list matched records: %s" % (len(results),))
-
-        enabledRecords = []
-        disabledRecords = []
-
-        for (recordShortName, value) in results:
-
-            # Now get useful record info.
-            recordGUID = value.get(dsattributes.kDS1AttrGeneratedUID)
-            recordShortNames = self._uniqueTupleFromAttribute(value.get(dsattributes.kDSNAttrRecordName))
-            recordType = value.get(dsattributes.kDSNAttrRecordType)
-            if isinstance(recordType, list):
-                recordType = recordType[0]
-            recordAuthIDs = self._setFromAttribute(value.get(dsattributes.kDSNAttrAltSecurityIdentities))
-            recordFullName = value.get(dsattributes.kDS1AttrDistinguishedName)
-            recordFirstName = value.get(dsattributes.kDS1AttrFirstName)
-            recordLastName = value.get(dsattributes.kDS1AttrLastName)
-            recordEmailAddresses = self._setFromAttribute(value.get(dsattributes.kDSNAttrEMailAddress), lower=True)
-            recordNodeName = value.get(dsattributes.kDSNAttrMetaNodeLocation)
-
-            if not recordType:
-                self.log.debug("Record (unknown)%s in node %s has no recordType; ignoring."
-                               % (recordShortName, recordNodeName))
-                continue
-
-            recordType = self._fromODRecordTypes[recordType]
-
-            if not recordGUID:
-                self.log.debug("Record (%s)%s in node %s has no GUID; ignoring."
-                               % (recordType, recordShortName, recordNodeName))
-                continue
-
-            if recordGUID.lower().startswith("ffffeeee-dddd-cccc-bbbb-aaaa"):
-                self.log.debug("Ignoring system record (%s)%s in node %s."
-                               % (recordType, recordShortName, recordNodeName))
-                continue
-
-            # If restrictToGroup is in effect, all guids which are not a member
-            # of that group are disabled (overriding the augments db).
-            if (self.restrictedGUIDs is not None):
-                unrestricted = recordGUID in self.restrictedGUIDs
-            else:
-                unrestricted = True
-
-            # Special case for groups, which have members.
-            if recordType == self.recordType_groups:
-                memberGUIDs = value.get(dsattributes.kDSNAttrGroupMembers)
-                if memberGUIDs is None:
-                    memberGUIDs = ()
-                elif type(memberGUIDs) is str:
-                    memberGUIDs = (memberGUIDs,)
-                nestedGUIDs = value.get(dsattributes.kDSNAttrNestedGroups)
-                if nestedGUIDs:
-                    if type(nestedGUIDs) is str:
-                        nestedGUIDs = (nestedGUIDs,)
-                    memberGUIDs += tuple(nestedGUIDs)
-                else:
-                    nestedGUIDs = ()
-            else:
-                memberGUIDs = ()
-                nestedGUIDs = ()
-
-            # Special case for resources and locations
-            autoSchedule = False
-            proxyGUIDs = ()
-            readOnlyProxyGUIDs = ()
-            if recordType in (DirectoryService.recordType_resources, DirectoryService.recordType_locations):
-                resourceInfo = value.get(dsattributes.kDSNAttrResourceInfo)
-                if resourceInfo is not None:
-                    if type(resourceInfo) is not str:
-                        resourceInfo = resourceInfo[0]
-                    try:
-                        autoSchedule, proxy, read_only_proxy = self.parseResourceInfo(resourceInfo, recordGUID, recordType, recordShortName)
-                    except ValueError:
-                        continue
-                    if proxy:
-                        proxyGUIDs = (proxy,)
-                    if read_only_proxy:
-                        readOnlyProxyGUIDs = (read_only_proxy,)
-
-            record = OpenDirectoryRecord(
-                service=self,
-                recordType=recordType,
-                guid=recordGUID,
-                nodeName=recordNodeName,
-                shortNames=recordShortNames,
-                authIDs=recordAuthIDs,
-                fullName=recordFullName,
-                firstName=recordFirstName,
-                lastName=recordLastName,
-                emailAddresses=recordEmailAddresses,
-                memberGUIDs=memberGUIDs,
-                nestedGUIDs=nestedGUIDs,
-                extProxies=proxyGUIDs,
-                extReadOnlyProxies=readOnlyProxyGUIDs,
-            )
-
-            # Look up augment information
-            # TODO: this needs to be deferred but for now we hard code the deferred result because
-            # we know it is completing immediately.
-            if self.augmentService is not None:
-                d = self.augmentService.getAugmentRecord(record.guid,
-                    recordType)
-                d.addCallback(lambda x: record.addAugmentInformation(x))
-
-            # Override based on ResourceInfo
-            if autoSchedule:
-                record.autoSchedule = True
-
-            if not unrestricted:
-                self.log.debug("%s is not enabled because it's not a member of group: %s" % (recordGUID, self.restrictToGroup))
-                record.enabledForCalendaring = False
-                record.enabledForAddressBooks = False
-
-            record.applySACLs()
-
-            if record.enabledForCalendaring:
-                enabledRecords.append(record)
-            else:
-                disabledRecords.append(record)
-
-        record = None
-        if len(enabledRecords) == 1:
-            record = enabledRecords[0]
-        elif len(enabledRecords) == 0 and len(disabledRecords) == 1:
-            record = disabledRecords[0]
-        elif indexType == self.INDEX_TYPE_GUID and len(enabledRecords) > 1:
-            self.log.error("Duplicate records found for GUID %s:" % (indexKey,))
-            for duplicateRecord in enabledRecords:
-                self.log.error("Duplicate: %s" % (", ".join(duplicateRecord.shortNames)))
-
-        if record:
-            if isinstance(origIndexKey, unicode):
-                origIndexKey = origIndexKey.encode("utf-8")
-            self.log.debug("Storing (%s %s) %s in internal cache" % (indexType, origIndexKey, record))
-
-            self.recordCacheForType(recordType).addRecord(record, indexType, origIndexKey)
-
-
-    def getResourceInfo(self):
-        """
-        Resource information including proxy assignments for resources and
-        locations, as well as auto-schedule settings, used to live in the
-        directory.  This method fetches old resource info for migration
-        purposes.
-        """
-        attrs = [
-            dsattributes.kDS1AttrGeneratedUID,
-            dsattributes.kDSNAttrResourceInfo,
-        ]
-
-        for recordType in (dsattributes.kDSStdRecordTypePlaces, dsattributes.kDSStdRecordTypeResources):
-            try:
-                self.log.debug("opendirectory.listAllRecordsWithAttributes_list(%r,%r,%r)" % (
-                    self.directory,
-                    recordType,
-                    attrs,
-                ))
-                results = self.odModule.listAllRecordsWithAttributes_list(
-                    self.directory,
-                    recordType,
-                    attrs,
-                )
-            except self.odModule.ODError, ex:
-                self.log.error("OpenDirectory (node=%s) error: %s" % (self.realmName, str(ex)))
-                raise
-
-            for (recordShortName, value) in results:
-                recordGUID = value.get(dsattributes.kDS1AttrGeneratedUID)
-                resourceInfo = value.get(dsattributes.kDSNAttrResourceInfo)
-                if resourceInfo is not None:
-                    if type(resourceInfo) is not str:
-                        resourceInfo = resourceInfo[0]
-                    try:
-                        autoSchedule, proxy, readOnlyProxy = self.parseResourceInfo(resourceInfo,
-                            recordGUID, recordType, recordShortName)
-                    except ValueError:
-                        continue
-                    yield recordGUID, autoSchedule, proxy, readOnlyProxy
-
-
-    def isAvailable(self):
-        """
-        Returns True if all configured directory nodes are accessible, False otherwise
-        """
-
-        if self.node == "/Search":
-            result = self.odModule.getNodeAttributes(self.directory, "/Search",
-                (dsattributes.kDS1AttrSearchPath,))
-            nodes = result[dsattributes.kDS1AttrSearchPath]
-        else:
-            nodes = [self.node]
-
-        try:
-            for node in nodes:
-                self.odModule.getNodeAttributes(self.directory, node, [dsattributes.kDSNAttrNodePath])
-        except self.odModule.ODError:
-            self.log.warn("OpenDirectory Node %s not available" % (node,))
-            return False
-
-        return True
-
-
-    @inlineCallbacks
-    def getGroups(self, guids):
-        """
-        Returns the group records for the list of guids passed in.  For
-        any group that also contains subgroups, those subgroups' records are
-        also returned, and so on.
-        """
-
-        recordsByGUID = {}
-        valuesToFetch = guids
-
-        loop = 1
-        while valuesToFetch:
-            self.log.debug("getGroups loop %d" % (loop,))
-
-            results = []
-
-            for batch in splitIntoBatches(valuesToFetch, self.batchSize):
-                fields = []
-                for value in batch:
-                    fields.append(["guid", value, False, "equals"])
-                self.log.debug("getGroups fetching batch of %d" %
-                    (len(fields),))
-                result = list((yield self.recordsMatchingFields(fields,
-                    recordType=self.recordType_groups)))
-                results.extend(result)
-                self.log.debug("getGroups got back batch of %d for subtotal of %d" %
-                    (len(result), len(results)))
-
-            # Reset values for next iteration
-            valuesToFetch = set()
-
-            for record in results:
-                guid = record.guid
-                if guid not in recordsByGUID:
-                    recordsByGUID[guid] = record
-
-                # record.nestedGUIDs() contains the sub groups of this group
-                for memberGUID in record.nestedGUIDs():
-                    if memberGUID not in recordsByGUID:
-                        self.log.debug("getGroups group %s contains group %s" %
-                            (record.guid, memberGUID))
-                        valuesToFetch.add(memberGUID)
-
-            loop += 1
-
-        returnValue(recordsByGUID.values())
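# A minimal, self-contained sketch of the nested-group expansion performed by
# getGroups() above.  The GROUPS dict is a hypothetical stand-in for the
# directory, mapping a group GUID to the GUIDs of its nested sub-groups.
GROUPS = {
    "top":   set(["mid-a", "mid-b"]),
    "mid-a": set(["leaf"]),
    "mid-b": set(),
    "leaf":  set(),
}

def expandGroups(guids):
    recordsByGUID = {}
    valuesToFetch = set(guids)
    while valuesToFetch:
        nextRound = set()
        for guid in valuesToFetch:
            if guid in recordsByGUID:
                continue
            nested = GROUPS.get(guid, set())
            recordsByGUID[guid] = nested
            # Any sub-group not seen yet is fetched on the next pass,
            # mirroring the while-loop above.
            nextRound |= nested - set(recordsByGUID)
        valuesToFetch = nextRound
    return recordsByGUID

# expandGroups(["top"]) visits "top", then "mid-a"/"mid-b", then "leaf".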
-
-
-
-def buildQueries(recordTypes, fields, mapping):
-    """
-    Determine how many queries need to be performed in order to work around opendirectory
-    quirks, where searching on fields that don't apply to a given recordType returns incorrect
-    results (either none, or all records).
-    """
-
-    queries = {}
-    for recordType in recordTypes:
-        for field, value, caseless, matchType in fields:
-            if field in mapping:
-                if recordType in mapping[field]['appliesTo']:
-                    ODField = mapping[field]['odField']
-                    key = (ODField, value, caseless, matchType)
-                    queries.setdefault(key, []).append(recordType)
-
-    return queries
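# A minimal sketch of the inputs buildQueries() expects and the grouping it
# produces.  The field names, values, and OD attribute strings here are
# illustrative placeholders, not taken from a real configuration.
recordTypes = ["users", "groups"]
fields = [
    ("fullName", "mor", True, "starts-with"),
    ("firstName", "mor", True, "starts-with"),
]
mapping = {
    "fullName":  {"odField": "dsAttrTypeStandard:RealName",
                  "appliesTo": set(["users", "groups"])},
    "firstName": {"odField": "dsAttrTypeStandard:FirstName",
                  "appliesTo": set(["users"])},
}

# buildQueries(recordTypes, fields, mapping) would group the work as:
#
#   {("dsAttrTypeStandard:RealName", "mor", True, "starts-with"):  ["users", "groups"],
#    ("dsAttrTypeStandard:FirstName", "mor", True, "starts-with"): ["users"]}
#
# i.e. one OD query per (attribute, value, caseless, matchType) key, run only
# against the record types that attribute actually applies to.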
-
-
-
-def buildLocalQueriesFromTokens(tokens, mapping):
-    """
-    OD /Local doesn't support nested complex queries, so create a list of
-    complex queries that will be ANDed together in recordsMatchingTokens()
-
-    @param tokens: The tokens to search on
-    @type tokens: C{list} of C{str}
-    @param mapping: The mapping of DirectoryRecord attributes to OD attributes
-    @type mapping: C{dict}
-    @return: A list of expression objects
-    @rtype: C{list}
-    """
-
-    if len(tokens) == 0:
-        return None
-
-    fields = [
-        ("fullName", dsattributes.eDSContains),
-        ("emailAddresses", dsattributes.eDSStartsWith),
-    ]
-
-    results = []
-    for token in tokens:
-        queries = []
-        for field, comparison in fields:
-            ODField = mapping[field]['odField']
-            query = dsquery.match(ODField, token, comparison)
-            queries.append(query)
-        results.append(dsquery.expression(dsquery.expression.OR, queries))
-    return results
-
-
-
-def buildNestedQueryFromTokens(tokens, mapping):
-    """
-    Build a DS query expression such that all the tokens must appear in either
-    the fullName (anywhere), emailAddresses (at the beginning) or record name
-    (at the beginning).
-
-    @param tokens: The tokens to search on
-    @type tokens: C{list} of C{str}
-    @param mapping: The mapping of DirectoryRecord attributes to OD attributes
-    @type mapping: C{dict}
-    @return: The nested expression object
-    @rtype: dsquery.expression
-    """
-
-    if len(tokens) == 0:
-        return None
-
-    fields = [
-        ("fullName", dsattributes.eDSContains),
-        ("emailAddresses", dsattributes.eDSStartsWith),
-        ("recordName", dsattributes.eDSStartsWith),
-    ]
-
-    outer = []
-    for token in tokens:
-        inner = []
-        for field, comparison in fields:
-            ODField = mapping[field]['odField']
-            query = dsquery.match(ODField, token, comparison)
-            inner.append(query)
-        outer.append(dsquery.expression(dsquery.expression.OR, inner))
-    return dsquery.expression(dsquery.expression.AND, outer)
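# A minimal sketch of the expression shape buildNestedQueryFromTokens()
# returns, using plain tuples as hypothetical stand-ins for dsquery.match
# and dsquery.expression.
FIELDS = ["fullName", "emailAddresses", "recordName"]

def nestedQuerySketch(tokens):
    outer = []
    for token in tokens:
        # Each token may match any one of the three fields...
        inner = [("match", field, token) for field in FIELDS]
        outer.append(("OR", inner))
    # ...but every token must match somewhere.
    return ("AND", outer)

# nestedQuerySketch(["mor", "example"]) ->
#   ("AND", [("OR", [... matches for "mor" ...]),
#            ("OR", [... matches for "example" ...])])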
-
-
-
-class OpenDirectoryRecord(CachingDirectoryRecord):
-    """
-    OpenDirectory implementation of L{IDirectoryRecord}.
-    """
-    def __init__(
-        self, service, recordType, guid, nodeName, shortNames, authIDs,
-        fullName, firstName, lastName, emailAddresses, memberGUIDs, nestedGUIDs,
-        extProxies, extReadOnlyProxies,
-    ):
-        super(OpenDirectoryRecord, self).__init__(
-            service=service,
-            recordType=recordType,
-            guid=guid,
-            shortNames=shortNames,
-            authIDs=authIDs,
-            fullName=fullName,
-            firstName=firstName,
-            lastName=lastName,
-            emailAddresses=emailAddresses,
-            extProxies=extProxies,
-            extReadOnlyProxies=extReadOnlyProxies,
-        )
-        self.nodeName = nodeName
-
-        self._memberGUIDs = tuple(memberGUIDs)
-        self._nestedGUIDs = tuple(nestedGUIDs)
-        self._groupMembershipGUIDs = None
-
-
-    def __repr__(self):
-        if self.service.realmName == self.nodeName:
-            location = self.nodeName
-        else:
-            location = "%s->%s" % (self.service.realmName, self.nodeName)
-
-        return "<%s[%s@%s(%s)] %s(%s) %r>" % (
-            self.__class__.__name__,
-            self.recordType,
-            self.service.guid,
-            location,
-            self.guid,
-            ",".join(self.shortNames),
-            self.fullName
-        )
-
-
-    def members(self):
-        if self.recordType != self.service.recordType_groups:
-            return
-
-        for guid in self._memberGUIDs:
-            userRecord = self.service.recordWithGUID(guid)
-            if userRecord is not None:
-                yield userRecord
-
-
-    def groups(self):
-        if self._groupMembershipGUIDs is None:
-            self._groupMembershipGUIDs = self.service.groupsForGUID(self.guid)
-
-        for guid in self._groupMembershipGUIDs:
-            record = self.service.recordWithGUID(guid)
-            if record:
-                yield record
-
-
-    def memberGUIDs(self):
-        return set(self._memberGUIDs)
-
-
-    def nestedGUIDs(self):
-        return set(self._nestedGUIDs)
-
-
-    def verifyCredentials(self, credentials):
-        if isinstance(credentials, UsernamePassword):
-            # Check cached password
-            try:
-                if credentials.password == self.password:
-                    return True
-            except AttributeError:
-                pass
-
-            # Check with directory services
-            try:
-                if self.service.odModule.authenticateUserBasic(self.service.directory, self.nodeName, self.shortNames[0], credentials.password):
-                    # Cache the password to avoid future DS queries
-                    self.password = credentials.password
-                    return True
-            except self.service.odModule.ODError, e:
-                self.log.error("OpenDirectory (node=%s) error while performing basic authentication for user %s: %s"
-                            % (self.service.realmName, self.shortNames[0], e))
-
-            return False
-
-        elif isinstance(credentials, DigestedCredentials):
-            #
-            # We need a special format for the "challenge" and "response" strings passed into OpenDirectory, as it is
-            # picky about exactly what it receives.
-            #
-            try:
-                if "algorithm" not in credentials.fields:
-                    credentials.fields["algorithm"] = "md5"
-                challenge = 'Digest realm="%(realm)s", nonce="%(nonce)s", algorithm=%(algorithm)s' % credentials.fields
-                response = (
-                    'Digest username="%(username)s", '
-                    'realm="%(realm)s", '
-                    'nonce="%(nonce)s", '
-                    'uri="%(uri)s", '
-                    'response="%(response)s",'
-                    'algorithm=%(algorithm)s'
-                ) % credentials.fields
-            except KeyError, e:
-                self.log.error(
-                    "OpenDirectory (node=%s) error while performing digest authentication for user %s: "
-                    "missing digest response field: %s in: %s"
-                    % (self.service.realmName, self.shortNames[0], e, credentials.fields)
-                )
-                return False
-
-            try:
-                if self.digestcache[credentials.fields["uri"]] == response:
-                    return True
-            except (AttributeError, KeyError):
-                pass
-
-            try:
-                if self.service.odModule.authenticateUserDigest(
-                    self.service.directory,
-                    self.nodeName,
-                    self.shortNames[0],
-                    challenge,
-                    response,
-                    credentials.method
-                ):
-                    try:
-                        cache = self.digestcache
-                    except AttributeError:
-                        cache = self.digestcache = {}
-
-                    cache[credentials.fields["uri"]] = response
-
-                    return True
-                else:
-                    self.log.debug(
-"""OpenDirectory digest authentication failed with:
-    Nodename:  %s
-    Username:  %s
-    Challenge: %s
-    Response:  %s
-    Method:    %s
-""" % (self.nodeName, self.shortNames[0], challenge, response,
-       credentials.method))
-
-            except self.service.odModule.ODError, e:
-                self.log.error(
-                    "OpenDirectory (node=%s) error while performing digest authentication for user %s: %s"
-                    % (self.service.realmName, self.shortNames[0], e)
-                )
-                return False
-
-            return False
-
-        return super(OpenDirectoryRecord, self).verifyCredentials(credentials)
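# The digest "challenge" and "response" strings built in verifyCredentials()
# above, shown with a made-up fields dict (all values are placeholders, not
# real credentials).  The missing space before "algorithm" in the response
# mirrors the format strings used above.
fields = {
    "username": "user01",
    "realm": "Test Realm",
    "nonce": "128446648710842461101646794502",
    "uri": "/principals/",
    "response": "0d23f898eb7b73d38c24c65e3e75fc63",
    "algorithm": "md5",
}

challenge = 'Digest realm="%(realm)s", nonce="%(nonce)s", algorithm=%(algorithm)s' % fields
response = (
    'Digest username="%(username)s", '
    'realm="%(realm)s", '
    'nonce="%(nonce)s", '
    'uri="%(uri)s", '
    'response="%(response)s",'
    'algorithm=%(algorithm)s'
) % fields

# challenge -> 'Digest realm="Test Realm", nonce="1284...", algorithm=md5'
# response  -> 'Digest username="user01", realm="Test Realm", ..., algorithm=md5'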
-
-
-
-class OpenDirectoryInitError(DirectoryError):
-    """
-    OpenDirectory initialization error.
-    """

Deleted: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/cachingdirectory.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/cachingdirectory.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/cachingdirectory.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1,473 +0,0 @@
-# -*- test-case-name: twistedcaldav.directory.test.test_cachedirectory -*-
-##
-# Copyright (c) 2009-2014 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-"""
-Caching directory service implementation.
-"""
-
-__all__ = [
-    "CachingDirectoryService",
-    "CachingDirectoryRecord",
-    "DictRecordTypeCache",
-]
-
-
-import time
-
-import base64
-
-from twext.python.log import Logger
-
-from twistedcaldav.config import config
-from twistedcaldav.memcacheclient import ClientFactory, MemcacheError
-from twistedcaldav.directory.directory import DirectoryService, DirectoryRecord, DirectoryError, UnknownRecordTypeError
-from txdav.caldav.datastore.scheduling.cuaddress import normalizeCUAddr
-from twistedcaldav.directory.util import normalizeUUID
-
-
-class RecordTypeCache(object):
-    """
-    Abstract class for a record type cache. We will likely have dict and memcache implementations of this.
-    """
-
-    def __init__(self, directoryService, recordType):
-
-        self.directoryService = directoryService
-        self.recordType = recordType
-
-
-    def addRecord(self, record, indexType, indexKey, useMemcache=True,
-        neverExpire=False):
-        raise NotImplementedError()
-
-
-    def removeRecord(self, record):
-        raise NotImplementedError()
-
-
-    def findRecord(self, indexType, indexKey):
-        raise NotImplementedError()
-
-
-
-class DictRecordTypeCache(RecordTypeCache):
-    """
-    Cache implementation using a dict, and uses memcached to share records
-    with other instances.
-    """
-    log = Logger()
-
-    def __init__(self, directoryService, recordType):
-
-        super(DictRecordTypeCache, self).__init__(directoryService, recordType)
-        self.records = set()
-        self.recordsIndexedBy = {
-            CachingDirectoryService.INDEX_TYPE_GUID     : {},
-            CachingDirectoryService.INDEX_TYPE_SHORTNAME: {},
-            CachingDirectoryService.INDEX_TYPE_CUA    : {},
-            CachingDirectoryService.INDEX_TYPE_AUTHID   : {},
-        }
-        self.directoryService = directoryService
-        self.lastPurgedTime = time.time()
-
-
-    def addRecord(self, record, indexType, indexKey, useMemcache=True,
-        neverExpire=False):
-
-        useMemcache = useMemcache and config.Memcached.Pools.Default.ClientEnabled
-        if neverExpire:
-            record.neverExpire()
-
-        self.records.add(record)
-
-        # Also index/cache on guid
-        indexTypes = [(indexType, indexKey)]
-        if indexType != CachingDirectoryService.INDEX_TYPE_GUID:
-            indexTypes.append((CachingDirectoryService.INDEX_TYPE_GUID,
-                record.guid))
-
-        for indexType, indexKey in indexTypes:
-            self.recordsIndexedBy[indexType][indexKey] = record
-            if useMemcache:
-                key = self.directoryService.generateMemcacheKey(indexType, indexKey,
-                    record.recordType)
-                self.log.debug("Memcache: storing %s" % (key,))
-                try:
-                    self.directoryService.memcacheSet(key, record)
-                except DirectoryMemcacheError:
-                    self.log.error("Memcache: failed to store %s" % (key,))
-                    pass
-
-
-    def removeRecord(self, record):
-        if record in self.records:
-            self.records.remove(record)
-            self.log.debug("Removed record %s" % (record.guid,))
-            for indexType in self.directoryService.indexTypes():
-                try:
-                    indexData = getattr(record, CachingDirectoryService.indexTypeToRecordAttribute[indexType])
-                except AttributeError:
-                    continue
-                if isinstance(indexData, basestring):
-                    indexData = [indexData]
-                for item in indexData:
-                    try:
-                        del self.recordsIndexedBy[indexType][item]
-                    except KeyError:
-                        pass
-
-
-    def findRecord(self, indexType, indexKey):
-        self.purgeExpiredRecords()
-        return self.recordsIndexedBy[indexType].get(indexKey)
-
-
-    def purgeExpiredRecords(self):
-        """
-        Scan the cached records and remove any that have expired.
-        Does nothing if we've scanned within the past cacheTimeout seconds.
-        """
-        if time.time() - self.lastPurgedTime > self.directoryService.cacheTimeout:
-            for record in list(self.records):
-                if record.isExpired():
-                    self.removeRecord(record)
-            self.lastPurgedTime = time.time()
-
-
-
-class CachingDirectoryService(DirectoryService):
-    """
-    Caching Directory implementation of L{IDirectoryService}.
-
-    This class must be overridden to provide a concrete implementation.
-    """
-    log = Logger()
-
-    INDEX_TYPE_GUID = "guid"
-    INDEX_TYPE_SHORTNAME = "shortname"
-    INDEX_TYPE_CUA = "cua"
-    INDEX_TYPE_AUTHID = "authid"
-
-    indexTypeToRecordAttribute = {
-        "guid"     : "guid",
-        "shortname": "shortNames",
-        "cua"      : "calendarUserAddresses",
-        "authid"   : "authIDs",
-    }
-
-    def __init__(
-        self,
-        cacheTimeout=1,
-        negativeCaching=False,
-        cacheClass=DictRecordTypeCache,
-    ):
-        """
-        @param cacheTimeout: C{int} number of minutes before cache is invalidated.
-        """
-
-        self.cacheTimeout = cacheTimeout * 60
-        self.negativeCaching = negativeCaching
-
-        self.cacheClass = cacheClass
-        self._initCaches()
-
-        super(CachingDirectoryService, self).__init__()
-
-
-    def _getMemcacheClient(self, refresh=False):
-        if refresh or not hasattr(self, "memcacheClient"):
-            self.memcacheClient = ClientFactory.getClient(['%s:%s' %
-                (config.Memcached.Pools.Default.BindAddress, config.Memcached.Pools.Default.Port)],
-                debug=0, pickleProtocol=2)
-        return self.memcacheClient
-
-
-    def memcacheSet(self, key, record):
-
-        hideService = isinstance(record, DirectoryRecord)
-
-        try:
-            if hideService:
-                record.service = None # so we don't pickle service
-
-            key = base64.b64encode(key)
-            if not self._getMemcacheClient().set(key, record, time=self.cacheTimeout):
-                self.log.error("Could not write to memcache, retrying")
-                if not self._getMemcacheClient(refresh=True).set(
-                    key, record,
-                    time=self.cacheTimeout
-                ):
-                    self.log.error("Could not write to memcache again, giving up")
-                    del self.memcacheClient
-                    raise DirectoryMemcacheError("Failed to write to memcache")
-        finally:
-            if hideService:
-                record.service = self
-
-
-    def memcacheGet(self, key):
-
-        key = base64.b64encode(key)
-        try:
-            record = self._getMemcacheClient().get(key)
-            if record is not None and isinstance(record, DirectoryRecord):
-                record.service = self
-        except MemcacheError:
-            self.log.error("Could not read from memcache, retrying")
-            try:
-                record = self._getMemcacheClient(refresh=True).get(key)
-                if record is not None and isinstance(record, DirectoryRecord):
-                    record.service = self
-            except MemcacheError:
-                self.log.error("Could not read from memcache again, giving up")
-                del self.memcacheClient
-                raise DirectoryMemcacheError("Failed to read from memcache")
-        return record
-
-
-    def generateMemcacheKey(self, indexType, indexKey, recordType):
-        """
-        Return a key that can be used to store/retrieve a record in memcache.
-        If the indexType is short-name, the recordType will be encoded into the key.
-
-        @param indexType: one of the indexTypes( ) values
-        @type indexType: C{str}
-        @param indexKey: the value being indexed
-        @type indexKey: C{str}
-        @param recordType: the type of record being cached
-        @type recordType: C{str}
-        @return: a memcache key comprised of the passed-in values and the directory
-            service's baseGUID
-        @rtype: C{str}
-        """
-        keyVersion = 2
-        if indexType == CachingDirectoryService.INDEX_TYPE_SHORTNAME:
-            return "dir|v%d|%s|%s|%s|%s" % (keyVersion, self.baseGUID, recordType,
-                indexType, indexKey)
-        else:
-            return "dir|v%d|%s|%s|%s" % (keyVersion, self.baseGUID, indexType,
-                indexKey)
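# Example key layouts produced by generateMemcacheKey(); the GUIDs below are
# made-up placeholder values.
baseGUID = "3EAA4E9A-3CCA-4678-B27B-6B64C2A55D9D"

# Short-name lookups embed the record type in the key...
shortNameKey = "dir|v2|%s|users|shortname|user01" % (baseGUID,)

# ...while guid/cua/authid lookups do not:
guidKey = "dir|v2|%s|guid|6423F94A-6B76-4A3A-815B-D52CFD77935E" % (baseGUID,)

# Note that memcacheSet()/memcacheGet() base64-encode these keys before
# talking to memcached.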
-
-
-    def _initCaches(self):
-        self._recordCaches = dict([
-            (recordType, self.cacheClass(self, recordType))
-            for recordType in self.recordTypes()
-        ])
-
-        self._disabledKeys = dict([(indexType, dict()) for indexType in self.indexTypes()])
-
-
-    def indexTypes(self):
-
-        return (
-            CachingDirectoryService.INDEX_TYPE_GUID,
-            CachingDirectoryService.INDEX_TYPE_SHORTNAME,
-            CachingDirectoryService.INDEX_TYPE_CUA,
-            CachingDirectoryService.INDEX_TYPE_AUTHID,
-        )
-
-
-    def recordCacheForType(self, recordType):
-        try:
-            return self._recordCaches[recordType]
-        except KeyError:
-            raise UnknownRecordTypeError(recordType)
-
-
-    def listRecords(self, recordType):
-        return self.recordCacheForType(recordType).records
-
-
-    def recordWithShortName(self, recordType, shortName):
-        return self._lookupRecord((recordType,), CachingDirectoryService.INDEX_TYPE_SHORTNAME, shortName)
-
-
-    def recordWithCalendarUserAddress(self, address):
-        address = normalizeCUAddr(address)
-        record = None
-        if address.startswith("mailto:"):
-            record = self._lookupRecord(None, CachingDirectoryService.INDEX_TYPE_CUA, address)
-            return record if record and record.enabledForCalendaring else None
-        else:
-            return DirectoryService.recordWithCalendarUserAddress(self, address)
-
-
-    def recordWithAuthID(self, authID):
-        return self._lookupRecord(None, CachingDirectoryService.INDEX_TYPE_AUTHID, authID)
-
-
-    def recordWithGUID(self, guid):
-        guid = normalizeUUID(guid)
-        return self._lookupRecord(None, CachingDirectoryService.INDEX_TYPE_GUID, guid)
-
-    recordWithUID = recordWithGUID
-
-    def _lookupRecord(self, recordTypes, indexType, indexKey):
-
-        if recordTypes is None:
-            recordTypes = self.recordTypes()
-        else:
-            # Only use recordTypes this service supports:
-            supportedRecordTypes = self.recordTypes()
-            recordTypes = [t for t in recordTypes if t in supportedRecordTypes]
-            if not recordTypes:
-                return None
-
-        def lookup():
-            for recordType in recordTypes:
-                record = self.recordCacheForType(recordType).findRecord(indexType, indexKey)
-
-                if record:
-                    if record.isExpired():
-                        self.recordCacheForType(recordType).removeRecord(record)
-                        return None
-                    else:
-                        return record
-            else:
-                return None
-
-        record = lookup()
-        if record:
-            return record
-
-        if self.negativeCaching:
-
-            # Check negative cache (take cache entry timeout into account)
-            try:
-                disabledTime = self._disabledKeys[indexType][indexKey]
-                if time.time() - disabledTime < self.cacheTimeout:
-                    return None
-            except KeyError:
-                pass
-
-        # Check memcache
-        if config.Memcached.Pools.Default.ClientEnabled:
-
-            # The only time the recordType arg matters is when indexType is
-            # short-name, and in that case recordTypes will contain exactly
-            # one recordType, so using recordTypes[0] here is always safe:
-            key = self.generateMemcacheKey(indexType, indexKey, recordTypes[0])
-
-            self.log.debug("Memcache: checking %s" % (key,))
-
-            try:
-                record = self.memcacheGet(key)
-            except DirectoryMemcacheError:
-                self.log.error("Memcache: failed to get %s" % (key,))
-                record = None
-
-            if record is None:
-                self.log.debug("Memcache: miss %s" % (key,))
-            else:
-                self.log.debug("Memcache: hit %s" % (key,))
-                self.recordCacheForType(record.recordType).addRecord(record, indexType, indexKey, useMemcache=False)
-                return record
-
-            if self.negativeCaching:
-
-                # Check negative memcache
-                try:
-                    val = self.memcacheGet("-%s" % (key,))
-                except DirectoryMemcacheError:
-                    self.log.error("Memcache: failed to get -%s" % (key,))
-                    val = None
-                if val == 1:
-                    self.log.debug("Memcache: negative %s" % (key,))
-                    self._disabledKeys[indexType][indexKey] = time.time()
-                    return None
-
-        # Try query
-        self.log.debug("Faulting record for attribute '%s' with value '%s'" % (indexType, indexKey,))
-        self.queryDirectory(recordTypes, indexType, indexKey)
-
-        # Now try again from cache
-        record = lookup()
-        if record:
-            self.log.debug("Found record for attribute '%s' with value '%s'" % (indexType, indexKey,))
-            return record
-
-        if self.negativeCaching:
-
-            # Add to negative cache with timestamp
-            self.log.debug("Failed to fault record for attribute '%s' with value '%s'" % (indexType, indexKey,))
-            self._disabledKeys[indexType][indexKey] = time.time()
-
-            if config.Memcached.Pools.Default.ClientEnabled:
-                self.log.debug("Memcache: storing (negative) %s" % (key,))
-                try:
-                    self.memcacheSet("-%s" % (key,), 1)
-                except DirectoryMemcacheError:
-                    self.log.error("Memcache: failed to set -%s" % (key,))
-                    pass
-
-        return None
-
-
-    def queryDirectory(self, recordTypes, indexType, indexKey):
-        raise NotImplementedError()
-
-
-
-class CachingDirectoryRecord(DirectoryRecord):
-
-    def __init__(
-        self, service, recordType, guid,
-        shortNames=(), authIDs=set(),
-        fullName=None, firstName=None, lastName=None, emailAddresses=set(),
-        uid=None, **kwargs
-    ):
-        super(CachingDirectoryRecord, self).__init__(
-            service,
-            recordType,
-            guid,
-            shortNames=shortNames,
-            authIDs=authIDs,
-            fullName=fullName,
-            firstName=firstName,
-            lastName=lastName,
-            emailAddresses=emailAddresses,
-            uid=uid,
-            **kwargs
-        )
-
-        self.cachedTime = time.time()
-
-
-    def neverExpire(self):
-        self.cachedTime = 0
-
-
-    def isExpired(self):
-        """
-        Returns True if this record was created more than cacheTimeout
-        seconds ago
-        """
-        if (
-            self.cachedTime != 0 and
-            time.time() - self.cachedTime > self.service.cacheTimeout
-        ):
-            return True
-        else:
-            return False
-
-
-
-class DirectoryMemcacheError(DirectoryError):
-    """
-    Error communicating with memcached.
-    """

Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/calendar.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/calendar.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/calendar.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -35,7 +35,6 @@
 from twisted.internet.defer import succeed, inlineCallbacks, returnValue
 
 from twistedcaldav.config import config
-from twistedcaldav.directory.idirectory import IDirectoryService
 from twistedcaldav.directory.common import uidsResourceName, \
     CommonUIDProvisioningResource, CommonHomeTypeProvisioningResource
 
@@ -48,7 +47,10 @@
 
 log = Logger()
 
+
 # FIXME: copied from resource.py to avoid circular dependency
+
+
 class CalDAVComplianceMixIn(object):
     def davComplianceClasses(self):
         return (
@@ -102,9 +104,14 @@
         #
         # Create children
         #
-        # MOVE2WHO
-        for name, recordType in [(r.name + "s", r) for r in self.directory.recordTypes()]:
-            self.putChild(name, DirectoryCalendarHomeTypeProvisioningResource(self, name, recordType))
+        # ...just "users" though.  If we iterate all of the directory's
+        # recordTypes, we also get the proxy sub principal types.
+        for recordTypeName in [
+            self.directory.recordTypeToOldName(r) for r in [
+                self.directory.recordType.user
+            ]
+        ]:
+            self.putChild(recordTypeName, DirectoryCalendarHomeTypeProvisioningResource(self, recordTypeName, r))
 
         self.putChild(uidsResourceName, DirectoryCalendarHomeUIDProvisioningResource(self))
 
@@ -114,8 +121,7 @@
 
 
     def listChildren(self):
-        # MOVE2WHO
-        return [r.name + "s" for r in self.directory.recordTypes()]
+        return [self.directory.recordTypeToOldName(r) for r in self.directory.recordTypes()]
 
 
     def principalCollections(self):
@@ -153,9 +159,9 @@
 
 
 class DirectoryCalendarHomeTypeProvisioningResource(
-        CommonHomeTypeProvisioningResource,
-        DirectoryCalendarProvisioningResource
-    ):
+    CommonHomeTypeProvisioningResource,
+    DirectoryCalendarProvisioningResource
+):
     """
     Resource which provisions calendar home collections of a specific
     record type as needed.
@@ -181,16 +187,15 @@
         return joinURL(self._parent.url(), self.name)
 
 
+    @inlineCallbacks
     def listChildren(self):
         if config.EnablePrincipalListings:
-
-            def _recordShortnameExpand():
-                for record in self.directory.listRecords(self.recordType):
-                    if record.enabledForCalendaring:
-                        for shortName in record.shortNames:
-                            yield shortName
-
-            return _recordShortnameExpand()
+            children = []
+            for record in (yield self.directory.listRecords(self.recordType)):
+                if record.enabledForCalendaring:
+                    for shortName in record.shortNames:
+                        children.append(shortName)
+            returnValue(children)
         else:
             # Not a listable collection
             raise HTTPError(responsecode.FORBIDDEN)
@@ -226,9 +231,9 @@
 
 
 class DirectoryCalendarHomeUIDProvisioningResource (
-        CommonUIDProvisioningResource,
-        DirectoryCalendarProvisioningResource
-    ):
+    CommonUIDProvisioningResource,
+    DirectoryCalendarProvisioningResource
+):
 
     homeResourceTypeName = 'calendars'
 

Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/calendaruserproxy.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/calendaruserproxy.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/calendaruserproxy.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -40,10 +40,11 @@
 from twistedcaldav.database import (
     AbstractADBAPIDatabase, ADBAPISqliteMixin, ADBAPIPostgreSQLMixin
 )
-from twistedcaldav.directory.principal import formatLink
-from twistedcaldav.directory.principal import formatLinks
-from twistedcaldav.directory.principal import formatPrincipals
 from twistedcaldav.directory.util import normalizeUUID
+from twistedcaldav.directory.util import (
+    formatLink, formatLinks, formatPrincipals
+)
+
 from twistedcaldav.extensions import (
     DAVPrincipalResource, DAVResourceWithChildrenMixin
 )

Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/common.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/common.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/common.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -68,7 +68,7 @@
         name = record.uid
 
         if record is None:
-            log.debug("No directory record with GUID %r" % (name,))
+            log.debug("No directory record with UID %r" % (name,))
             returnValue(None)
 
         # MOVE2WHO

Deleted: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/directory.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/directory.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/directory.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1,1510 +0,0 @@
-# -*- test-case-name: twistedcaldav.directory.test -*-
-##
-# Copyright (c) 2006-2014 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-
-"""
-Generic directory service classes.
-"""
-
-__all__ = [
-    "DirectoryService",
-    "DirectoryRecord",
-    "DirectoryError",
-    "DirectoryConfigurationError",
-    "UnknownRecordTypeError",
-    "GroupMembershipCacheUpdater",
-]
-
-from plistlib import readPlistFromString
-
-from twext.python.log import Logger
-from txweb2.dav.auth import IPrincipalCredentials
-from txweb2.dav.util import joinURL
-
-from twisted.cred.checkers import ICredentialsChecker
-from twisted.cred.error import UnauthorizedLogin
-from twisted.internet.defer import succeed, inlineCallbacks, returnValue
-from twisted.python.filepath import FilePath
-
-from twistedcaldav.config import config
-from twistedcaldav.directory.idirectory import IDirectoryService, IDirectoryRecord
-from twistedcaldav.directory.util import uuidFromName, normalizeUUID
-from twistedcaldav.memcacher import Memcacher
-from txdav.caldav.datastore.scheduling.cuaddress import normalizeCUAddr
-from txdav.caldav.datastore.scheduling.ischedule.localservers import Servers
-
-from txdav.caldav.icalendardirectoryservice import ICalendarStoreDirectoryService, \
-    ICalendarStoreDirectoryRecord
-
-from xml.parsers.expat import ExpatError
-
-from zope.interface import implements
-
-import cPickle as pickle
-import datetime
-import grp
-import itertools
-import os
-import pwd
-import sys
-import types
-from urllib import unquote
-
-log = Logger()
-
-
-class DirectoryService(object):
-    implements(IDirectoryService, ICalendarStoreDirectoryService, ICredentialsChecker)
-
-    log = Logger()
-
-    ##
-    # IDirectoryService
-    ##
-
-    realmName = None
-
-    recordType_users = "users"
-    recordType_people = "people"
-    recordType_groups = "groups"
-    recordType_locations = "locations"
-    recordType_resources = "resources"
-    recordType_addresses = "addresses"
-
-    searchContext_location = "location"
-    searchContext_resource = "resource"
-    searchContext_user = "user"
-    searchContext_group = "group"
-    searchContext_attendee = "attendee"
-
-    aggregateService = None
-
-    def _generatedGUID(self):
-        if not hasattr(self, "_guid"):
-            realmName = self.realmName
-
-            assert self.baseGUID, "Class %s must provide a baseGUID attribute" % (self.__class__.__name__,)
-
-            if realmName is None:
-                self.log.error("Directory service %s has no realm name or GUID; generated service GUID will not be unique." % (self,))
-                realmName = ""
-            else:
-                self.log.info("Directory service %s has no GUID; generating service GUID from realm name." % (self,))
-
-            self._guid = uuidFromName(self.baseGUID, realmName)
-
-        return self._guid
-
-    baseGUID = None
-    guid = property(_generatedGUID)
-
-    # Needed by twistedcaldav.directorybackedaddressbook
-    liveQuery = False
-
-    def setRealm(self, realmName):
-        self.realmName = realmName
-
-
-    def available(self):
-        """
-        By default, the directory is available.  This may return a boolean or a
-        Deferred which fires a boolean.
-
-        A return value of "False" means that the directory is currently
-        unavailable due to the service starting up.
-        """
-        return True
-    # end directorybackedaddressbook requirements
-
-    ##
-    # ICredentialsChecker
-    ##
-
-    # For ICredentialsChecker
-    credentialInterfaces = (IPrincipalCredentials,)
-
-    def requestAvatarId(self, credentials):
-        credentials = IPrincipalCredentials(credentials)
-
-        # FIXME: ?
-        # We were checking if principal is enabled; seems unnecessary in current
-        # implementation because you shouldn't have a principal object for a
-        # disabled directory principal.
-
-        if credentials.authnPrincipal is None:
-            raise UnauthorizedLogin("No such user: %s" % (credentials.credentials.username,))
-
-        # See if record is enabledForLogin
-        if not credentials.authnPrincipal.record.isLoginEnabled():
-            raise UnauthorizedLogin("User not allowed to log in: %s" %
-                (credentials.credentials.username,))
-
-        # Handle Kerberos as a separate behavior
-        try:
-            from twistedcaldav.authkerb import NegotiateCredentials
-        except ImportError:
-            NegotiateCredentials = None
-
-        if NegotiateCredentials and isinstance(credentials.credentials,
-                                               NegotiateCredentials):
-            # If we get here with Kerberos, then authentication has already succeeded
-            return (
-                credentials.authnPrincipal.principalURL(),
-                credentials.authzPrincipal.principalURL(),
-                credentials.authnPrincipal,
-                credentials.authzPrincipal,
-            )
-        else:
-            if credentials.authnPrincipal.record.verifyCredentials(credentials.credentials):
-                return (
-                    credentials.authnPrincipal.principalURL(),
-                    credentials.authzPrincipal.principalURL(),
-                    credentials.authnPrincipal,
-                    credentials.authzPrincipal,
-                )
-            else:
-                raise UnauthorizedLogin("Incorrect credentials for %s" % (credentials.credentials.username,))
-
-
-    def recordTypes(self):
-        raise NotImplementedError("Subclass must implement recordTypes()")
-
-
-    def listRecords(self, recordType):
-        raise NotImplementedError("Subclass must implement listRecords()")
-
-
-    def recordWithShortName(self, recordType, shortName):
-        for record in self.listRecords(recordType):
-            if shortName in record.shortNames:
-                return record
-        return None
-
-
-    def recordWithUID(self, uid):
-        uid = normalizeUUID(uid)
-        for record in self.allRecords():
-            if record.uid == uid:
-                return record
-        return None
-
-
-    def recordWithGUID(self, guid):
-        guid = normalizeUUID(guid)
-        for record in self.allRecords():
-            if record.guid == guid:
-                return record
-        return None
-
-
-    def recordWithAuthID(self, authID):
-        for record in self.allRecords():
-            if authID in record.authIDs:
-                return record
-        return None
-
-
-    def recordWithCalendarUserAddress(self, address):
-        address = normalizeCUAddr(address)
-        record = None
-        if address.startswith("urn:uuid:"):
-            guid = address[9:]
-            record = self.recordWithGUID(guid)
-        elif address.startswith("mailto:"):
-            for record in self.allRecords():
-                if address[7:] in record.emailAddresses:
-                    break
-            else:
-                return None
-        elif address.startswith("/principals/"):
-            parts = map(unquote, address.split("/"))
-            if len(parts) == 4:
-                if parts[2] == "__uids__":
-                    guid = parts[3]
-                    record = self.recordWithGUID(guid)
-                else:
-                    record = self.recordWithShortName(parts[2], parts[3])
-
-        return record if record and record.hasCalendars else None
-
-
-    def recordWithCachedGroupsAlias(self, recordType, alias):
-        """
-        @param recordType: the type of the record to look up.
-        @param alias: the cached-groups alias of the record to look up.
-        @type alias: C{str}
-
-        @return: a deferred L{IDirectoryRecord} with the given cached-groups
-            alias, or C{None} if no such record is found.
-        """
-        # The default implementation uses guid
-        return succeed(self.recordWithGUID(alias))
-
-
-    def allRecords(self):
-        for recordType in self.recordTypes():
-            for record in self.listRecords(recordType):
-                yield record
-
-
-    def recordsMatchingFieldsWithCUType(self, fields, operand="or",
-        cuType=None):
-        if cuType:
-            recordType = DirectoryRecord.fromCUType(cuType)
-        else:
-            recordType = None
-
-        return self.recordsMatchingFields(fields, operand=operand,
-            recordType=recordType)
-
-
-    def recordTypesForSearchContext(self, context):
-        """
-        Map calendarserver-principal-search REPORT context value to applicable record types
-
-        @param context: The context value to map
-        @type context: C{str}
-        @returns: The list of record types the context maps to
-        @rtype: C{list} of C{str}
-        """
-        if context == self.searchContext_location:
-            recordTypes = [self.recordType_locations]
-        elif context == self.searchContext_resource:
-            recordTypes = [self.recordType_resources]
-        elif context == self.searchContext_user:
-            recordTypes = [self.recordType_users]
-        elif context == self.searchContext_group:
-            recordTypes = [self.recordType_groups]
-        elif context == self.searchContext_attendee:
-            recordTypes = [self.recordType_users, self.recordType_groups,
-                self.recordType_resources]
-        else:
-            recordTypes = list(self.recordTypes())
-        return recordTypes
-
-
-    def recordsMatchingTokens(self, tokens, context=None):
-        """
-        @param tokens: The tokens to search on
-        @type tokens: C{list} of C{str} (utf-8 bytes)
-        @param context: An indication of what the end user is searching
-            for; "attendee", "location", or None
-        @type context: C{str}
-        @return: a deferred sequence of L{IDirectoryRecord}s which
-            match the given tokens and optional context.
-
-        Each token is searched for within each record's full name and
-        email address; if each token is found within a record that
-        record is returned in the results.
-
-        If context is None, all record types are considered.  If
-        context is "location", only locations are considered.  If
-        context is "attendee", only users, groups, and resources
-        are considered.
-        """
-
-        # Default, bruteforce method; override with one optimized for each
-        # service
-
-        def fieldMatches(fieldValue, value):
-            if fieldValue is None:
-                return False
-            elif type(fieldValue) in types.StringTypes:
-                fieldValue = (fieldValue,)
-
-            for testValue in fieldValue:
-                testValue = testValue.lower()
-                value = value.lower()
-
-                try:
-                    testValue.index(value)
-                    return True
-                except ValueError:
-                    pass
-
-            return False
-
-        def recordMatches(record):
-            for token in tokens:
-                for fieldName in ["fullName", "emailAddresses"]:
-                    try:
-                        fieldValue = getattr(record, fieldName)
-                        if fieldMatches(fieldValue, token):
-                            break
-                    except AttributeError:
-                        # No value
-                        pass
-                else:
-                    return False
-            return True
-
-
-        def yieldMatches(recordTypes):
-            try:
-                for recordType in [r for r in recordTypes if r in self.recordTypes()]:
-                    for record in self.listRecords(recordType):
-                        if recordMatches(record):
-                            yield record
-
-            except UnknownRecordTypeError:
-                # Skip this service since it doesn't understand this record type
-                pass
-
-        recordTypes = self.recordTypesForSearchContext(context)
-        return succeed(yieldMatches(recordTypes))
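# A minimal sketch of the token-matching rule used by the brute-force
# recordsMatchingTokens() above: every token must be a case-insensitive
# substring of the record's fullName or one of its emailAddresses.  The
# record here is a plain dict stand-in, not a real IDirectoryRecord.
record = {
    "fullName": "User Zero One",
    "emailAddresses": ["user01@example.com"],
}

def matchesAllTokens(record, tokens):
    haystacks = [record["fullName"]] + list(record["emailAddresses"])
    for token in tokens:
        token = token.lower()
        if not any(token in value.lower() for value in haystacks):
            return False
    return True

# matchesAllTokens(record, ["user", "example"]) -> True
# matchesAllTokens(record, ["user", "nosuch"])  -> False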
-
-
-    def recordsMatchingFields(self, fields, operand="or", recordType=None):
-        # Default, bruteforce method; override with one optimized for each
-        # service
-
-        def fieldMatches(fieldValue, value, caseless, matchType):
-            if fieldValue is None:
-                return False
-            elif type(fieldValue) in types.StringTypes:
-                fieldValue = (fieldValue,)
-
-            for testValue in fieldValue:
-                if caseless:
-                    testValue = testValue.lower()
-                    value = value.lower()
-
-                if matchType == 'starts-with':
-                    if testValue.startswith(value):
-                        return True
-                elif matchType == 'contains':
-                    try:
-                        testValue.index(value)
-                        return True
-                    except ValueError:
-                        pass
-                else: # exact
-                    if testValue == value:
-                        return True
-
-            return False
-
-        def recordMatches(record):
-            if operand == "and":
-                for fieldName, value, caseless, matchType in fields:
-                    try:
-                        fieldValue = getattr(record, fieldName)
-                        if not fieldMatches(fieldValue, value, caseless,
-                            matchType):
-                            return False
-                    except AttributeError:
-                        # No property => no match
-                        return False
-                # we hit on every property
-                return True
-            else: # "or"
-                for fieldName, value, caseless, matchType in fields:
-                    try:
-                        fieldValue = getattr(record, fieldName)
-                        if fieldMatches(fieldValue, value, caseless,
-                            matchType):
-                            return True
-                    except AttributeError:
-                        # No value
-                        pass
-                # we didn't hit any
-                return False
-
-        def yieldMatches(recordType):
-            try:
-                if recordType is None:
-                    recordTypes = list(self.recordTypes())
-                else:
-                    recordTypes = (recordType,)
-
-                for recordType in recordTypes:
-                    for record in self.listRecords(recordType):
-                        if recordMatches(record):
-                            yield record
-
-            except UnknownRecordTypeError:
-                # Skip this service since it doesn't understand this record type
-                pass
-
-        return succeed(yieldMatches(recordType))
-
-
-    def getGroups(self, guids):
-        """
-        This implementation returns all groups, not just the ones specified
-        by guids
-        """
-        return succeed(self.listRecords(self.recordType_groups))
-
-
-    def getResourceInfo(self):
-        return ()
-
-
-    def isAvailable(self):
-        return True
-
-
-    def getParams(self, params, defaults, ignore=None):
-        """ Checks configuration parameters for unexpected/ignored keys, and
-            applies default values. """
-
-        keys = set(params.keys())
-
-        result = {}
-        for key in defaults.iterkeys():
-            if key in params:
-                result[key] = params[key]
-                keys.remove(key)
-            else:
-                result[key] = defaults[key]
-
-        if ignore:
-            for key in ignore:
-                if key in params:
-                    self.log.warn("Ignoring obsolete directory service parameter: %s" % (key,))
-                    keys.remove(key)
-
-        if keys:
-            raise DirectoryConfigurationError("Invalid directory service parameter(s): %s" % (", ".join(list(keys)),))
-        return result
-
-
-    def parseResourceInfo(self, plist, guid, recordType, shortname):
-        """
-        Parse ResourceInfo plist and extract information that the server needs.
-
-        @param plist: the plist that is the attribute value.
-        @type plist: str
-        @param guid: the directory GUID of the record being parsed.
-        @type guid: str
-        @param recordType: the type of the record being parsed.
-        @type recordType: str
-        @param shortname: the record shortname of the record being parsed.
-        @type shortname: str
-        @return: a C{tuple} of C{bool} for auto-accept, C{str} for proxy GUID,
-            C{str} for read-only proxy GUID, and C{str} for the auto-accept group GUID.
-        """
-        try:
-            plist = readPlistFromString(plist)
-            wpframework = plist.get("com.apple.WhitePagesFramework", {})
-            autoaccept = wpframework.get("AutoAcceptsInvitation", False)
-            proxy = wpframework.get("CalendaringDelegate", None)
-            read_only_proxy = wpframework.get("ReadOnlyCalendaringDelegate", None)
-            autoAcceptGroup = wpframework.get("AutoAcceptGroup", "")
-        except (ExpatError, AttributeError), e:
-            self.log.error(
-                "Failed to parse ResourceInfo attribute of record (%s)%s (guid=%s): %s\n%s" %
-                (recordType, shortname, guid, e, plist,)
-            )
-            raise ValueError("Invalid ResourceInfo")
-
-        return (autoaccept, proxy, read_only_proxy, autoAcceptGroup)
-
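For reference, a sketch of a ResourceInfo value in the shape parseResourceInfo()
expects (the delegate GUID strings, record identifiers, and "service" instance
are placeholders):

    plistText = """<?xml version="1.0" encoding="UTF-8"?>
    <plist version="1.0"><dict>
      <key>com.apple.WhitePagesFramework</key>
      <dict>
        <key>AutoAcceptsInvitation</key><true/>
        <key>CalendaringDelegate</key><string>write-delegate-guid</string>
        <key>ReadOnlyCalendaringDelegate</key><string>read-delegate-guid</string>
        <key>AutoAcceptGroup</key><string>auto-accept-group-guid</string>
      </dict>
    </dict></plist>"""
    autoaccept, proxy, readOnly, autoAcceptGroup = service.parseResourceInfo(
        plistText, "resource-guid", "resources", "projector1")
    # autoaccept is True; the other three values are the placeholder strings above.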
-
-    def getExternalProxyAssignments(self):
-        """
-        Retrieve proxy assignments for locations and resources from the
-        directory and return a list of (principalUID, [memberUIDs]) tuples,
-        suitable for passing to proxyDB.setGroupMembers().
-
-        This generic implementation fetches all locations and resources.
-        More specialized implementations can perform whatever operation is
-        most efficient for their particular directory service.
-        """
-        assignments = []
-
-        resources = itertools.chain(
-            self.listRecords(self.recordType_locations),
-            self.listRecords(self.recordType_resources)
-        )
-        for record in resources:
-            guid = record.guid
-            if record.hasCalendars:
-                assignments.append(("%s#calendar-proxy-write" % (guid,),
-                                   record.externalProxies()))
-                assignments.append(("%s#calendar-proxy-read" % (guid,),
-                                   record.externalReadOnlyProxies()))
-
-        return assignments
-
-
-    def createRecord(self, recordType, guid=None, shortNames=(), authIDs=set(),
-        fullName=None, firstName=None, lastName=None, emailAddresses=set(),
-        uid=None, password=None, **kwargs):
-        """
-        Create/persist a directory record based on the given values
-        """
-        raise NotImplementedError("Subclass must implement createRecord")
-
-
-    def updateRecord(self, recordType, guid=None, shortNames=(), authIDs=set(),
-        fullName=None, firstName=None, lastName=None, emailAddresses=set(),
-        uid=None, password=None, **kwargs):
-        """
-        Update/persist a directory record based on the given values
-        """
-        raise NotImplementedError("Subclass must implement updateRecord")
-
-
-    def destroyRecord(self, recordType, guid=None):
-        """
-        Remove a directory record from the directory
-        """
-        raise NotImplementedError("Subclass must implement destroyRecord")
-
-
-    def createRecords(self, data):
-        """
-        Create directory records in bulk
-        """
-        raise NotImplementedError("Subclass must implement createRecords")
-
-
-    def setPrincipalCollection(self, principalCollection):
-        """
-        Set the principal collection that the directory relies on for doing proxy tests.
-
-        @param principalCollection: the principal collection.
-        @type principalCollection: L{DirectoryProvisioningResource}
-        """
-        self.principalCollection = principalCollection
-
-
-    def isProxyFor(self, test, other):
-        """
-        Test whether one record is a calendar user proxy for the specified record.
-
-        @param test: record to test
-        @type test: L{DirectoryRecord}
-        @param other: record to check against
-        @type other: L{DirectoryRecord}
-
-        @return: C{True} if test is a proxy of other.
-        @rtype: C{bool}
-        """
-        return self.principalCollection.isProxyFor(test, other)
-
-
-
-class GroupMembershipCache(Memcacher):
-    """
-    Caches group membership information
-
-    This cache is periodically updated by a sidecar so that worker processes
-    never have to ask the directory service directly for group membership
-    information.
-
-    Keys in this cache are:
-
-    "groups-for:<GUID>" : comma-separated list of groups that GUID is a member
-    of.  Note that when using LDAP, the key for this is an LDAP DN.
-
-    "group-cacher-populated" : contains a datestamp indicating the most recent
-    population.
-    """
-    log = Logger()
-
-    def __init__(self, namespace, pickle=True, no_invalidation=False,
-        key_normalization=True, expireSeconds=0, lockSeconds=60):
-
-        super(GroupMembershipCache, self).__init__(namespace, pickle=pickle,
-            no_invalidation=no_invalidation,
-            key_normalization=key_normalization)
-
-        self.expireSeconds = expireSeconds
-        self.lockSeconds = lockSeconds
-
-
-    def setGroupsFor(self, guid, memberships):
-        self.log.debug("set groups-for %s : %s" % (guid, memberships))
-        return self.set("groups-for:%s" %
-            (str(guid)), memberships,
-            expireTime=self.expireSeconds)
-
-
-    def getGroupsFor(self, guid):
-        self.log.debug("get groups-for %s" % (guid,))
-        def _value(value):
-            if value:
-                return value
-            else:
-                return set()
-        d = self.get("groups-for:%s" % (str(guid),))
-        d.addCallback(_value)
-        return d
-
-
-    def deleteGroupsFor(self, guid):
-        self.log.debug("delete groups-for %s" % (guid,))
-        return self.delete("groups-for:%s" % (str(guid),))
-
-
-    def setPopulatedMarker(self):
-        self.log.debug("set group-cacher-populated")
-        return self.set("group-cacher-populated", str(datetime.datetime.now()))
-
-
-    @inlineCallbacks
-    def isPopulated(self):
-        self.log.debug("is group-cacher-populated")
-        value = (yield self.get("group-cacher-populated"))
-        returnValue(value is not None)
-
-
-    def acquireLock(self):
-        """
-        Acquire a memcached lock named group-cacher-lock.
-
-        @return: Deferred firing True if successful, False if someone already
-            has the lock.
-        """
-        self.log.debug("add group-cacher-lock")
-        return self.add("group-cacher-lock", "1", expireTime=self.lockSeconds)
-
-
-    def extendLock(self):
-        """
-        Update the expiration time of the memcached lock.
-        @return: Deferred firing True if successful, False otherwise.
-        """
-        self.log.debug("extend group-cacher-lock")
-        return self.set("group-cacher-lock", "1", expireTime=self.lockSeconds)
-
-
-    def releaseLock(self):
-        """
-        Release the memcached lock.
-        @return: Deferred firing True if successful, False otherwise.
-        """
-        self.log.debug("delete group-cacher-lock")
-        return self.delete("group-cacher-lock")
-
-
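A brief sketch of how the cache primitives above fit together in an updater
process (the namespace and identifiers are illustrative; inlineCallbacks and
returnValue are the module's existing twisted imports):

    cache = GroupMembershipCache("ExampleNamespace", expireSeconds=0, lockSeconds=60)

    @inlineCallbacks
    def refreshMembership(guid, groupGUIDs):
        if (yield cache.acquireLock()):        # only one writer at a time
            try:
                yield cache.setGroupsFor(guid, set(groupGUIDs))
                yield cache.setPopulatedMarker()
            finally:
                yield cache.releaseLock()
        memberships = yield cache.getGroupsFor(guid)   # readers just look up
        returnValue(memberships)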
-
-class GroupMembershipCacheUpdater(object):
-    """
-    Responsible for updating memcached with group memberships.  This will run
-    in a sidecar.  There are two sources of proxy data to pull from: the local
-    proxy database, and the location/resource info in the directory system.
-    """
-    log = Logger()
-
-    def __init__(self, proxyDB, directory, updateSeconds, expireSeconds,
-        lockSeconds, cache=None, namespace=None, useExternalProxies=False,
-        externalProxiesSource=None):
-        self.proxyDB = proxyDB
-        self.directory = directory
-        self.updateSeconds = updateSeconds
-        self.useExternalProxies = useExternalProxies
-        if useExternalProxies and externalProxiesSource is None:
-            externalProxiesSource = self.directory.getExternalProxyAssignments
-        self.externalProxiesSource = externalProxiesSource
-
-        if cache is None:
-            assert namespace is not None, "namespace must be specified if GroupMembershipCache is not provided"
-            cache = GroupMembershipCache(namespace, expireSeconds=expireSeconds,
-                lockSeconds=lockSeconds)
-        self.cache = cache
-
-
-    @inlineCallbacks
-    def getGroups(self, guids=None):
-        """
-        Retrieve all groups and their member info (but don't actually fault in
-        the records of the members), and return two dictionaries.  The first
-        maps each group to its members; the keys for this dictionary are the
-        identifiers used by the directory service to specify members.  In OpenDirectory
-        these would be guids, but in LDAP these could be DNs, or some other
-        attribute.  This attribute can be retrieved from a record using
-        record.cachedGroupsAlias().
-        The second dictionary returned maps each group's guid to that member
-        attribute.  These dictionaries are used to reverse-index the
-        groups that users are in by expandedMembers().
-
-        @param guids: if provided, retrieve only the groups corresponding to
-            these guids (including their sub groups)
-        @type guids: list of guid strings
-        """
-        groups = {}
-        aliases = {}
-
-        if guids is None: # get all group guids
-            records = self.directory.listRecords(self.directory.recordType_groups)
-        else: # get only the ones we know have been delegated to
-            records = (yield self.directory.getGroups(guids))
-
-        for record in records:
-            alias = record.cachedGroupsAlias()
-            groups[alias] = record.memberGUIDs()
-            aliases[record.guid] = alias
-
-        returnValue((groups, aliases))
-
-
-    def expandedMembers(self, groups, guid, members=None, seen=None):
-        """
-        Return the complete, flattened set of members of a group, including
-        all sub-groups, based on the group hierarchy described in the
-        groups dictionary.
-        """
-        if members is None:
-            members = set()
-        if seen is None:
-            seen = set()
-
-        if guid not in seen:
-            seen.add(guid)
-            for member in groups[guid]:
-                members.add(member)
-                if member in groups: # it's a group then
-                    self.expandedMembers(groups, member, members=members,
-                                         seen=seen)
-        return members
-
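A small worked example of the expansion above ("updater" is an instance of this
class; the alias strings are invented):

    groups = {
        "grp-top": set(["grp-sub", "user-a"]),
        "grp-sub": set(["user-b", "user-c"]),
    }
    updater.expandedMembers(groups, "grp-top")
    # -> set(["grp-sub", "user-a", "user-b", "user-c"])
    # Aliases that are also keys in "groups" are recursed into as sub-groups,
    # but they remain in the returned member set themselves.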
-
-    @inlineCallbacks
-    def updateCache(self, fast=False):
-        """
-        Iterate the proxy database to retrieve all the principals who have been
-        delegated to.  Fault these principals in.  For any of these principals
-        that are groups, expand the members of that group and store those in
-        the cache.
-
-        If fast=True, we're in quick-start mode, used only by the master process
-        to start servicing requests as soon as possible.  In this mode we look
-        for DataRoot/memberships_cache which is a pickle of a dictionary whose
-        keys are guids (except when using LDAP where the keys will be DNs), and
-        the values are lists of group guids.  If the cache file does not exist
-        we switch to fast=False.
-
-        The return value is mainly used for unit tests; it's a tuple containing
-        the (possibly modified) value for fast, and the number of members loaded
-        into the cache (which can be zero if fast=True and isPopulated(), or
-        fast=False and the cache is locked by someone else).
-
-        The pickled snapshot file is a dict whose keys represent a record and
-        the values are the guids of the groups that record is a member of.  The
-        keys are normally guids except in the case of a directory system like LDAP
-        where there can be a different attribute used for referring to members,
-        such as a DN.
-        """
-
-        # TODO: add memcached eviction protection
-
-        useLock = True
-
-        # See if anyone has completely populated the group membership cache
-        isPopulated = (yield self.cache.isPopulated())
-
-        if fast:
-            # We're in quick-start mode.  Check first to see if someone has
-            # populated the membership cache, and if so, return immediately
-            if isPopulated:
-                self.log.info("Group membership cache is already populated")
-                returnValue((fast, 0, 0))
-
-            # We don't care what others are doing right now, we need to update
-            useLock = False
-
-        self.log.info("Updating group membership cache")
-
-        dataRoot = FilePath(config.DataRoot)
-        membershipsCacheFile = dataRoot.child("memberships_cache")
-        extProxyCacheFile = dataRoot.child("external_proxy_cache")
-
-        if not membershipsCacheFile.exists():
-            self.log.info("Group membership snapshot file does not yet exist")
-            fast = False
-            previousMembers = {}
-            callGroupsChanged = False
-        else:
-            self.log.info("Group membership snapshot file exists: %s" %
-                (membershipsCacheFile.path,))
-            callGroupsChanged = True
-            try:
-                previousMembers = pickle.loads(membershipsCacheFile.getContent())
-            except:
-                self.log.warn("Could not parse snapshot; will regenerate cache")
-                fast = False
-                previousMembers = {}
-                callGroupsChanged = False
-
-        if useLock:
-            self.log.info("Attempting to acquire group membership cache lock")
-            acquiredLock = (yield self.cache.acquireLock())
-            if not acquiredLock:
-                self.log.info("Group membership cache lock held by another process")
-                returnValue((fast, 0, 0))
-            self.log.info("Acquired lock")
-
-        if not fast and self.useExternalProxies:
-
-            # Load in cached copy of external proxies so we can diff against them
-            previousAssignments = []
-            if extProxyCacheFile.exists():
-                self.log.info("External proxies snapshot file exists: %s" %
-                    (extProxyCacheFile.path,))
-                try:
-                    previousAssignments = pickle.loads(extProxyCacheFile.getContent())
-                except:
-                    self.log.warn("Could not parse external proxies snapshot")
-                    previousAssignments = []
-
-            if useLock:
-                yield self.cache.extendLock()
-
-            self.log.info("Retrieving proxy assignments from directory")
-            assignments = self.externalProxiesSource()
-            self.log.info("%d proxy assignments retrieved from directory" %
-                (len(assignments),))
-
-            if useLock:
-                yield self.cache.extendLock()
-
-            changed, removed = diffAssignments(previousAssignments, assignments)
-            # changed is the list of proxy assignments (either new or updates).
-            # removed is the list of principals who used to have an external
-            #   delegate but don't anymore.
-
-            # populate proxy DB from external resource info
-            if changed:
-                self.log.info("Updating proxy assignments")
-                assignmentCount = 0
-                totalNumAssignments = len(changed)
-                currentAssignmentNum = 0
-                for principalUID, members in changed:
-                    currentAssignmentNum += 1
-                    if currentAssignmentNum % 1000 == 0:
-                        self.log.info("...proxy assignment %d of %d" % (currentAssignmentNum,
-                            totalNumAssignments))
-                    try:
-                        current = (yield self.proxyDB.getMembers(principalUID))
-                        if members != current:
-                            assignmentCount += 1
-                            yield self.proxyDB.setGroupMembers(principalUID, members)
-                    except Exception, e:
-                        self.log.error("Unable to update proxy assignment: principal=%s, members=%s, error=%s" % (principalUID, members, e))
-                self.log.info("Updated %d assignment%s in proxy database" %
-                    (assignmentCount, "" if assignmentCount == 1 else "s"))
-
-            if removed:
-                self.log.info("Deleting proxy assignments")
-                assignmentCount = 0
-                totalNumAssignments = len(removed)
-                currentAssignmentNum = 0
-                for principalUID in removed:
-                    currentAssignmentNum += 1
-                    if currentAssignmentNum % 1000 == 0:
-                        self.log.info("...proxy assignment %d of %d" % (currentAssignmentNum,
-                            totalNumAssignments))
-                    try:
-                        assignmentCount += 1
-                        yield self.proxyDB.setGroupMembers(principalUID, [])
-                    except Exception, e:
-                        self.log.error("Unable to remove proxy assignment: principal=%s, error=%s" % (principalUID, e))
-                self.log.info("Removed %d assignment%s from proxy database" %
-                    (assignmentCount, "" if assignmentCount == 1 else "s"))
-
-            # Store external proxy snapshot
-            self.log.info("Taking snapshot of external proxies to %s" %
-                (extProxyCacheFile.path,))
-            extProxyCacheFile.setContent(pickle.dumps(assignments))
-
-        if fast:
-            # If there is an on-disk snapshot of the membership information,
-            # load that and put into memcached, bypassing the faulting in of
-            # any records, so that the server can start up quickly.
-
-            self.log.info("Loading group memberships from snapshot")
-            members = pickle.loads(membershipsCacheFile.getContent())
-
-        else:
-            # Fetch the group hierarchy from the directory, fetch the list
-            # of delegated-to guids, intersect those and build a dictionary
-            # containing which delegated-to groups a user is a member of
-
-            self.log.info("Retrieving list of all proxies")
-            # This is always a set of guids:
-            # MOVE2WHO
-            delegatedGUIDs = set() # set((yield self.proxyDB.getAllMembers()))
-            self.log.info("There are %d proxies" % (len(delegatedGUIDs),))
-            self.log.info("Retrieving group hierarchy from directory")
-
-            # "groups" maps a group to its members; the keys and values consist
-            # of whatever directory attribute is used to refer to members.  The
-            # attribute value comes from record.cachedGroupsAlias().
-            # "aliases" maps each group's guid to its record.cachedGroupsAlias()
-            # value.
-            groups, aliases = (yield self.getGroups(guids=delegatedGUIDs))
-            groupGUIDs = set(aliases.keys())
-            self.log.info("%d groups retrieved from the directory" %
-                (len(groupGUIDs),))
-
-            delegatedGUIDs = delegatedGUIDs.intersection(groupGUIDs)
-            self.log.info("%d groups are proxies" % (len(delegatedGUIDs),))
-
-            # Reverse index the group membership from cache
-            members = {}
-            for groupGUID in delegatedGUIDs:
-                groupMembers = self.expandedMembers(groups, aliases[groupGUID])
-                # groupMembers is in cachedGroupsAlias() format
-                for member in groupMembers:
-                    memberships = members.setdefault(member, set())
-                    memberships.add(groupGUID)
-
-            self.log.info("There are %d users delegated-to via groups" %
-                (len(members),))
-
-            # Store snapshot
-            self.log.info("Taking snapshot of group memberships to %s" %
-                (membershipsCacheFile.path,))
-            membershipsCacheFile.setContent(pickle.dumps(members))
-
-            # Update ownership
-            uid = gid = -1
-            if config.UserName:
-                uid = pwd.getpwnam(config.UserName).pw_uid
-            if config.GroupName:
-                gid = grp.getgrnam(config.GroupName).gr_gid
-            os.chown(membershipsCacheFile.path, uid, gid)
-            if extProxyCacheFile.exists():
-                os.chown(extProxyCacheFile.path, uid, gid)
-
-        self.log.info("Storing %d group memberships in memcached" %
-                       (len(members),))
-        changedMembers = set()
-        totalNumMembers = len(members)
-        currentMemberNum = 0
-        for member, groups in members.iteritems():
-            currentMemberNum += 1
-            if currentMemberNum % 1000 == 0:
-                self.log.info("...membership %d of %d" % (currentMemberNum,
-                    totalNumMembers))
-            # self.log.debug("%s is in %s" % (member, groups))
-            yield self.cache.setGroupsFor(member, groups)
-            if groups != previousMembers.get(member, None):
-                # This principal has had a change in group membership
-                # so invalidate the PROPFIND response cache
-                changedMembers.add(member)
-            try:
-                # Remove from previousMembers; anything still left in
-                # previousMembers when this loop is done will be
-                # deleted from cache (since only members that were
-                # previously in delegated-to groups but are no longer
-                # would still be in previousMembers)
-                del previousMembers[member]
-            except KeyError:
-                pass
-
-        # Remove entries for principals that no longer are in delegated-to
-        # groups
-        for member, groups in previousMembers.iteritems():
-            yield self.cache.deleteGroupsFor(member)
-            changedMembers.add(member)
-
-        # For principals whose group membership has changed, call groupsChanged()
-        if callGroupsChanged and not fast and hasattr(self.directory, "principalCollection"):
-            for member in changedMembers:
-                record = yield self.directory.recordWithCachedGroupsAlias(
-                    self.directory.recordType_users, member)
-                if record is not None:
-                    principal = self.directory.principalCollection.principalForRecord(record)
-                    if principal is not None:
-                        self.log.debug("Group membership changed for %s (%s)" %
-                            (record.shortNames[0], record.guid,))
-                        if hasattr(principal, "groupsChanged"):
-                            yield principal.groupsChanged()
-
-        yield self.cache.setPopulatedMarker()
-
-        if useLock:
-            self.log.info("Releasing lock")
-            yield self.cache.releaseLock()
-
-        self.log.info("Group memberships cache updated")
-
-        returnValue((fast, len(members), len(changedMembers)))
-
-
-
-def diffAssignments(old, new):
-    """
-    Compare two proxy assignment lists and return their differences in the form of
-    two lists -- one for added/updated assignments, and one for removed assignments.
-    @param old: list of (group, set(members)) tuples
-    @type old: C{list}
-    @param new: list of (group, set(members)) tuples
-    @type new: C{list}
-    @return: Tuple of two lists; the first list contains tuples of (proxy-principal,
-        set(members)), and represents all the new or updated assignments.  The
-        second list contains all the proxy-principals which used to have a delegate
-        but don't anymore.
-    """
-    old = dict(old)
-    new = dict(new)
-    changed = []
-    removed = []
-    for key in old.iterkeys():
-        if key not in new:
-            removed.append(key)
-        else:
-            if old[key] != new[key]:
-                changed.append((key, new[key]))
-    for key in new.iterkeys():
-        if key not in old:
-            changed.append((key, new[key]))
-    return changed, removed
-
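A short worked example of diffAssignments() (the principal UIDs and member sets
are invented):

    old = [("A#calendar-proxy-write", set(["x"])),
           ("B#calendar-proxy-write", set(["y"]))]
    new = [("A#calendar-proxy-write", set(["x", "z"])),
           ("C#calendar-proxy-write", set(["w"]))]
    changed, removed = diffAssignments(old, new)
    # changed contains ("A#calendar-proxy-write", set(["x", "z"])) because its
    # members differ, and ("C#calendar-proxy-write", set(["w"])) because it is
    # new; removed == ["B#calendar-proxy-write"].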
-
-
-class DirectoryRecord(object):
-    log = Logger()
-
-    implements(IDirectoryRecord, ICalendarStoreDirectoryRecord)
-
-    def __repr__(self):
-        return "<%s[%s@%s(%s)] %s(%s) %r @ %s>" % (
-            self.__class__.__name__,
-            self.recordType,
-            self.service.guid,
-            self.service.realmName,
-            self.guid,
-            ",".join(self.shortNames),
-            self.fullName,
-            self.serverURI(),
-        )
-
-
-    def __init__(
-        self, service, recordType, guid=None,
-        shortNames=(), authIDs=set(), fullName=None,
-        firstName=None, lastName=None, emailAddresses=set(),
-        calendarUserAddresses=set(),
-        autoSchedule=False, autoScheduleMode=None,
-        autoAcceptGroup="",
-        enabledForCalendaring=None,
-        enabledForAddressBooks=None,
-        uid=None,
-        enabledForLogin=True,
-        extProxies=(), extReadOnlyProxies=(),
-        **kwargs
-    ):
-        assert service.realmName is not None
-        assert recordType
-        assert shortNames and isinstance(shortNames, tuple)
-
-        guid = normalizeUUID(guid)
-
-        if uid is None:
-            uid = guid
-
-        if fullName is None:
-            fullName = ""
-
-        self.service = service
-        self.recordType = recordType
-        self.guid = guid
-        self.uid = uid
-        self.enabled = False
-        self.serverID = ""
-        self.shortNames = shortNames
-        self.authIDs = authIDs
-        self.fullName = fullName
-        self.firstName = firstName
-        self.lastName = lastName
-        self.emailAddresses = emailAddresses
-        self.enabledForCalendaring = enabledForCalendaring
-        self.autoSchedule = autoSchedule
-        self.autoScheduleMode = autoScheduleMode
-        self.autoAcceptGroup = autoAcceptGroup
-        self.enabledForAddressBooks = enabledForAddressBooks
-        self.enabledForLogin = enabledForLogin
-        self.extProxies = extProxies
-        self.extReadOnlyProxies = extReadOnlyProxies
-        self.extras = kwargs
-
-
-    def get_calendarUserAddresses(self):
-        """
-        Dynamically construct a calendarUserAddresses attribute which describes
-        this L{DirectoryRecord}.
-
-        @see: L{IDirectoryRecord.calendarUserAddresses}.
-        """
-        if not self.enabledForCalendaring:
-            return frozenset()
-        cuas = set(
-            ["mailto:%s" % (emailAddress,)
-             for emailAddress in self.emailAddresses]
-        )
-        if self.guid:
-            cuas.add("urn:uuid:%s" % (self.guid,))
-            cuas.add(joinURL("/principals", "__uids__", self.guid) + "/")
-        for shortName in self.shortNames:
-            cuas.add(joinURL("/principals", self.recordType, shortName,) + "/")
-
-        return frozenset(cuas)
-
-    calendarUserAddresses = property(get_calendarUserAddresses)
-
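To illustrate the property above for a hypothetical calendar-enabled user record
(the guid, short name, and email address are invented):

    # With guid "01234567-89AB-CDEF-0123-456789ABCDEF", recordType "users",
    # shortNames ("exampleuser",) and emailAddresses set(["user@example.com"]),
    # record.calendarUserAddresses is the frozenset of:
    #   "urn:uuid:01234567-89AB-CDEF-0123-456789ABCDEF"
    #   "/principals/__uids__/01234567-89AB-CDEF-0123-456789ABCDEF/"
    #   "/principals/users/exampleuser/"
    #   "mailto:user@example.com"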
-    def __cmp__(self, other):
-        if not isinstance(other, DirectoryRecord):
-            return NotImplemented
-
-        for attr in ("service", "recordType", "shortNames", "guid"):
-            diff = cmp(getattr(self, attr), getattr(other, attr))
-            if diff != 0:
-                return diff
-        return 0
-
-
-    def __hash__(self):
-        h = hash(self.__class__.__name__)
-        for attr in ("service", "recordType", "shortNames", "guid",
-                     "enabled", "enabledForCalendaring"):
-            h = (h + hash(getattr(self, attr))) & sys.maxint
-
-        return h
-
-
-    def cacheToken(self):
-        """
-        Generate a token that can be uniquely used to identify the state of this record for use
-        in a cache.
-        """
-        return hash((
-            self.__class__.__name__,
-            self.service.realmName,
-            self.recordType,
-            self.shortNames,
-            self.guid,
-            self.enabled,
-            self.enabledForCalendaring,
-        ))
-
-
-    def addAugmentInformation(self, augment):
-
-        if augment:
-            self.enabled = augment.enabled
-            self.serverID = augment.serverID
-            self.enabledForCalendaring = augment.enabledForCalendaring
-            self.enabledForAddressBooks = augment.enabledForAddressBooks
-            self.autoSchedule = augment.autoSchedule
-            self.autoScheduleMode = augment.autoScheduleMode
-            self.autoAcceptGroup = augment.autoAcceptGroup
-            self.enabledForLogin = augment.enabledForLogin
-
-            if (self.enabledForCalendaring or self.enabledForAddressBooks) and self.recordType == self.service.recordType_groups:
-                self.enabledForCalendaring = False
-                self.enabledForAddressBooks = False
-
-                # For augment records cloned from the Default augment record,
-                # don't emit this message:
-                if not augment.clonedFromDefault:
-                    self.log.error("Group '%s(%s)' cannot be enabled for calendaring or address books" % (self.guid, self.shortNames[0],))
-
-        else:
-            # Groups are by default always enabled
-            self.enabled = (self.recordType == self.service.recordType_groups)
-            self.serverID = ""
-            self.enabledForCalendaring = False
-            self.enabledForAddressBooks = False
-            self.enabledForLogin = False
-
-
-    def applySACLs(self):
-        """
-        Disable calendaring and addressbooks as dictated by SACLs
-        """
-
-        if config.EnableSACLs and self.CheckSACL:
-            username = self.shortNames[0]
-            if self.CheckSACL(username, "calendar") != 0:
-                self.log.debug("%s is not enabled for calendaring due to SACL"
-                               % (username,))
-                self.enabledForCalendaring = False
-            if self.CheckSACL(username, "addressbook") != 0:
-                self.log.debug("%s is not enabled for addressbooks due to SACL"
-                               % (username,))
-                self.enabledForAddressBooks = False
-
-
-    def displayName(self):
-        return self.fullName if self.fullName else self.shortNames[0]
-
-
-    def isLoginEnabled(self):
-        """
-        Returns True if the user should be allowed to log in, based on the
-        enabledForLogin attribute, which is currently controlled by the
-        DirectoryService implementation.
-        """
-        return self.enabledForLogin
-
-
-    def members(self):
-        return ()
-
-
-    def expandedMembers(self, members=None, seen=None):
-        """
-        Return the complete, flattened set of members of a group, including
-        all sub-groups.
-        """
-        if members is None:
-            members = set()
-        if seen is None:
-            seen = set()
-
-        if self not in seen:
-            seen.add(self)
-            for member in self.members():
-                members.add(member)
-                if member.recordType == self.service.recordType_groups:
-                    member.expandedMembers(members=members, seen=seen)
-
-        return members
-
-
-    def groups(self):
-        return ()
-
-
-    def cachedGroups(self):
-        """
-        Return the set of groups (guids) this record is a member of, based on
-        the data cached by cacheGroupMembership().
-        """
-        return self.service.groupMembershipCache.getGroupsFor(self.cachedGroupsAlias())
-
-
-    def cachedGroupsAlias(self):
-        """
-        The GroupMembershipCache uses keys based on this value.  Normally it's
-        a record's guid but in a directory system like LDAP which can use a
-        different attribute to refer to group members, we need to be able to
-        look up an entry in the GroupMembershipCache by that attribute.
-        Subclasses which don't use record.guid to look up group membership
-        should override this method.
-        """
-        return self.guid
-
-
-    def externalProxies(self):
-        """
-        Return the set of proxies defined in the directory service, as opposed
-        to assignments in the proxy DB itself.
-        """
-        return set(self.extProxies)
-
-
-    def externalReadOnlyProxies(self):
-        """
-        Return the set of read-only proxies defined in the directory service,
-        as opposed to assignments in the proxy DB itself.
-        """
-        return set(self.extReadOnlyProxies)
-
-
-    def memberGUIDs(self):
-        """
-        Return the set of GUIDs that are members of this group
-        """
-        return set()
-
-
-    def verifyCredentials(self, credentials):
-        return False
-
-
-    def calendarsEnabled(self):
-        return config.EnableCalDAV and self.enabledForCalendaring
-
-
-    def canonicalCalendarUserAddress(self):
-        """
-            Return a CUA for this principal, preferring in this order:
-            urn:uuid: form
-            mailto: form
-            first in calendarUserAddresses list
-        """
-
-        cua = ""
-        for candidate in self.calendarUserAddresses:
-            # Pick the first one, but urn:uuid: and mailto: can override
-            if not cua:
-                cua = candidate
-            # But always immediately choose the urn:uuid: form
-            if candidate.startswith("urn:uuid:"):
-                cua = candidate
-                break
-            # Prefer mailto: if no urn:uuid:
-            elif candidate.startswith("mailto:"):
-                cua = candidate
-        return cua
-
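A short illustration of the preference order implemented above (the addresses
are invented):

    # Given calendarUserAddresses containing:
    #   "/principals/users/exampleuser/", "mailto:user@example.com",
    #   "urn:uuid:01234567-89AB-CDEF-0123-456789ABCDEF"
    # canonicalCalendarUserAddress() returns the urn:uuid: form; without one,
    # a mailto: form wins; failing both, the first address iterated is kept.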
-
-    def enabledAsOrganizer(self):
-        if self.recordType == DirectoryService.recordType_users:
-            return True
-        elif self.recordType == DirectoryService.recordType_groups:
-            return config.Scheduling.Options.AllowGroupAsOrganizer
-        elif self.recordType == DirectoryService.recordType_locations:
-            return config.Scheduling.Options.AllowLocationAsOrganizer
-        elif self.recordType == DirectoryService.recordType_resources:
-            return config.Scheduling.Options.AllowResourceAsOrganizer
-        else:
-            return False
-
-    # Mapping from directory record.recordType to RFC2445 CUTYPE values
-    _cuTypes = {
-        'users' : 'INDIVIDUAL',
-        'groups' : 'GROUP',
-        'resources' : 'RESOURCE',
-        'locations' : 'ROOM',
-    }
-
-    def getCUType(self):
-        return self._cuTypes.get(self.recordType, "UNKNOWN")
-
-
-    @classmethod
-    def fromCUType(cls, cuType):
-        for key, val in cls._cuTypes.iteritems():
-            if val == cuType:
-                return key
-        return None
-
-
-    def canAutoSchedule(self, organizer):
-        if config.Scheduling.Options.AutoSchedule.Enabled:
-            if (config.Scheduling.Options.AutoSchedule.Always or
-                self.autoSchedule or
-                self.autoAcceptFromOrganizer(organizer)):
-                if (self.getCUType() != "INDIVIDUAL" or
-                    config.Scheduling.Options.AutoSchedule.AllowUsers):
-                    return True
-        return False
-
-
-    def getAutoScheduleMode(self, organizer):
-        autoScheduleMode = self.autoScheduleMode
-        if self.autoAcceptFromOrganizer(organizer):
-            autoScheduleMode = "automatic"
-        return autoScheduleMode
-
-
-    def autoAcceptFromOrganizer(self, organizer):
-        if organizer is not None and self.autoAcceptGroup is not None:
-            service = self.service.aggregateService or self.service
-            organizerRecord = service.recordWithCalendarUserAddress(organizer)
-            if organizerRecord is not None:
-                if organizerRecord.guid in self.autoAcceptMembers():
-                    return True
-        return False
-
-
-    def serverURI(self):
-        """
-        URL of the server hosting this record. Return None if hosted on this server.
-        """
-        if config.Servers.Enabled and self.serverID:
-            return Servers.getServerURIById(self.serverID)
-        else:
-            return None
-
-
-    def server(self):
-        """
-        Server hosting this record. Return None if hosted on this server.
-        """
-        if config.Servers.Enabled and self.serverID:
-            return Servers.getServerById(self.serverID)
-        else:
-            return None
-
-
-    def thisServer(self):
-        s = self.server()
-        return s.thisServer if s is not None else True
-
-
-    def autoAcceptMembers(self):
-        """
-        Return the list of GUIDs from which this record will automatically accept
-        invites (assuming no conflicts).  This list is based on the group
-        assigned to record.autoAcceptGroup.  Cache the expanded group membership
-        within the record.
-
-        @return: the list of members of the autoAcceptGroup, or an empty list if
-            not assigned
-        @rtype: C{list} of GUID C{str}
-        """
-        if not hasattr(self, "_cachedAutoAcceptMembers"):
-            self._cachedAutoAcceptMembers = []
-            if self.autoAcceptGroup:
-                service = self.service.aggregateService or self.service
-                groupRecord = service.recordWithGUID(self.autoAcceptGroup)
-                if groupRecord is not None:
-                    self._cachedAutoAcceptMembers = [m.guid for m in groupRecord.expandedMembers()]
-
-        return self._cachedAutoAcceptMembers
-
-
-    def isProxyFor(self, other):
-        """
-        Test whether the record is a calendar user proxy for the specified record.
-
-        @param other: record to test
-        @type other: L{DirectoryRecord}
-
-        @return: C{True} if it is a proxy.
-        @rtype: C{bool}
-        """
-        return self.service.isProxyFor(self, other)
-
-
-
-class DirectoryError(RuntimeError):
-    """
-    Generic directory error.
-    """
-
-
-
-class DirectoryConfigurationError(DirectoryError):
-    """
-    Invalid directory configuration.
-    """
-
-
-
-class UnknownRecordTypeError(DirectoryError):
-    """
-    Unknown directory record type.
-    """
-    def __init__(self, recordType):
-        DirectoryError.__init__(self, "Invalid record type: %s" % (recordType,))
-        self.recordType = recordType
-
-
-# So CheckSACL will be parameterized
-# We do this after DirectoryRecord is defined
-try:
-    from calendarserver.platform.darwin._sacl import CheckSACL
-    DirectoryRecord.CheckSACL = CheckSACL
-except ImportError:
-    DirectoryRecord.CheckSACL = None

Deleted: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/ldapdirectory.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/ldapdirectory.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/ldapdirectory.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1,2034 +0,0 @@
-##
-# Copyright (c) 2008-2009 Aymeric Augustin. All rights reserved.
-# Copyright (c) 2006-2014 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-"""
-LDAP directory service implementation.  Supports principal-property-search
-and restrictToGroup features.
-
-The following attributes from standard schemas are used:
-* Core (RFC 4519):
-    . cn | commonName
-    . givenName
-    . member (if not using NIS groups)
-    . ou
-    . sn | surname
-    . uid | userid (if using NIS groups)
-* COSINE (RFC 4524):
-    . mail
-* InetOrgPerson (RFC 2798):
-    . displayName (if cn is unavailable)
-* NIS (RFC):
-    . gecos (if cn is unavailable)
-    . memberUid (if using NIS groups)
-"""
-
-__all__ = [
-    "LdapDirectoryService",
-]
-
-import ldap.async
-from ldap.filter import escape_filter_chars as ldapEsc
-
-try:
-    # Note: PAM support is currently untested
-    import PAM
-    pamAvailable = True
-except ImportError:
-    pamAvailable = False
-
-import time
-from twisted.cred.credentials import UsernamePassword
-from twistedcaldav.directory.cachingdirectory import (
-    CachingDirectoryService, CachingDirectoryRecord
-)
-from twistedcaldav.directory.directory import DirectoryConfigurationError
-from twistedcaldav.directory.augment import AugmentRecord
-from twistedcaldav.directory.util import splitIntoBatches, normalizeUUID
-from twisted.internet.defer import succeed, inlineCallbacks, returnValue
-from twisted.internet.threads import deferToThread
-from twext.python.log import Logger
-from txweb2.http import HTTPError, StatusResponse
-from txweb2 import responsecode
-
-
-
-class LdapDirectoryService(CachingDirectoryService):
-    """
-    LDAP based implementation of L{IDirectoryService}.
-    """
-    log = Logger()
-
-    baseGUID = "5A871574-0C86-44EE-B11B-B9440C3DC4DD"
-
-    def __repr__(self):
-        return "<%s %r: %r>" % (
-            self.__class__.__name__, self.realmName, self.uri
-        )
-
-
-    def __init__(self, params):
-        """
-        @param params: a dictionary containing the following keys:
-            cacheTimeout, realmName, uri, tls, tlsCACertFile, tlsCACertDir,
-            tlsRequireCert, credentials, rdnSchema, groupSchema, resourceSchema,
-            poddingSchema
-        """
-
-        defaults = {
-            "augmentService": None,
-            "groupMembershipCache": None,
-            "cacheTimeout": 1,  # Minutes
-            "negativeCaching": False,
-            "warningThresholdSeconds": 3,
-            "batchSize": 500,  # for splitting up large queries
-            "requestTimeoutSeconds": 10,
-            "requestResultsLimit": 200,
-            "optimizeMultiName": False,
-            "queryLocationsImplicitly": True,
-            "restrictEnabledRecords": False,
-            "restrictToGroup": "",
-            "recordTypes": ("users", "groups"),
-            "uri": "ldap://localhost/",
-            "tls": False,
-            "tlsCACertFile": None,
-            "tlsCACertDir": None,
-            "tlsRequireCert": None,  # never, allow, try, demand, hard
-            "credentials": {
-                "dn": None,
-                "password": None,
-            },
-            "authMethod": "LDAP",
-            "rdnSchema": {
-                "base": "dc=example,dc=com",
-                "guidAttr": "entryUUID",
-                "users": {
-                    "rdn": "ou=People",
-                    "filter": None,  # additional filter for this type
-                    "loginEnabledAttr": "",  # attribute controlling login
-                    "loginEnabledValue": "yes",  # "True" value of above attribute
-                    "calendarEnabledAttr": "",  # attribute controlling enabledForCalendaring
-                    "calendarEnabledValue": "yes",  # "True" value of above attribute
-                    "mapping": {  # maps internal record names to LDAP
-                        "recordName": "uid",
-                        "fullName": "cn",
-                        "emailAddresses": ["mail"],  # multiple LDAP fields supported
-                        "firstName": "givenName",
-                        "lastName": "sn",
-                    },
-                },
-                "groups": {
-                    "rdn": "ou=Group",
-                    "filter": None,  # additional filter for this type
-                    "mapping": {  # maps internal record names to LDAP
-                        "recordName": "cn",
-                        "fullName": "cn",
-                        "emailAddresses": ["mail"],  # multiple LDAP fields supported
-                        "firstName": "givenName",
-                        "lastName": "sn",
-                    },
-                },
-                "locations": {
-                    "rdn": "ou=Places",
-                    "filter": None,  # additional filter for this type
-                    "calendarEnabledAttr": "",  # attribute controlling enabledForCalendaring
-                    "calendarEnabledValue": "yes",  # "True" value of above attribute
-                    "associatedAddressAttr": "",
-                    "mapping": {  # maps internal record names to LDAP
-                        "recordName": "cn",
-                        "fullName": "cn",
-                        "emailAddresses": ["mail"],  # multiple LDAP fields supported
-                    },
-                },
-                "resources": {
-                    "rdn": "ou=Resources",
-                    "filter": None,  # additional filter for this type
-                    "calendarEnabledAttr": "",  # attribute controlling enabledForCalendaring
-                    "calendarEnabledValue": "yes",  # "True" value of above attribute
-                    "mapping": {  # maps internal record names to LDAP
-                        "recordName": "cn",
-                        "fullName": "cn",
-                        "emailAddresses": ["mail"],  # multiple LDAP fields supported
-                    },
-                },
-                "addresses": {
-                    "rdn": "ou=Buildings",
-                    "filter": None,  # additional filter for this type
-                    "streetAddressAttr": "",
-                    "geoAttr": "",
-                    "mapping": {  # maps internal record names to LDAP
-                        "recordName": "cn",
-                        "fullName": "cn",
-                    },
-                },
-            },
-            "groupSchema": {
-                "membersAttr": "member",  # how members are specified
-                "nestedGroupsAttr": None,  # how nested groups are specified
-                "memberIdAttr": None,  # which attribute the above refer to (None means use DN)
-            },
-            "resourceSchema": {
-                # Either set this attribute to retrieve the plist version
-                # of resource-info, as in a Leopard OD server, or...
-                "resourceInfoAttr": None,
-                # ...set the above to None and instead specify these
-                # individually:
-                "autoScheduleAttr": None,
-                "autoScheduleEnabledValue": "yes",
-                "proxyAttr": None,  # list of GUIDs
-                "readOnlyProxyAttr": None,  # list of GUIDs
-                "autoAcceptGroupAttr": None,  # single group GUID
-            },
-            "poddingSchema": {
-                "serverIdAttr": None,  # maps to augments server-id
-            },
-        }
-        ignored = None
-        params = self.getParams(params, defaults, ignored)
-
-        self._recordTypes = params["recordTypes"]
-
-        super(LdapDirectoryService, self).__init__(params["cacheTimeout"],
-                                                   params["negativeCaching"])
-
-        self.warningThresholdSeconds = params["warningThresholdSeconds"]
-        self.batchSize = params["batchSize"]
-        self.requestTimeoutSeconds = params["requestTimeoutSeconds"]
-        self.requestResultsLimit = params["requestResultsLimit"]
-        self.optimizeMultiName = params["optimizeMultiName"]
-        if self.batchSize > self.requestResultsLimit:
-            self.batchSize = self.requestResultsLimit
-        self.queryLocationsImplicitly = params["queryLocationsImplicitly"]
-        self.augmentService = params["augmentService"]
-        self.groupMembershipCache = params["groupMembershipCache"]
-        self.realmName = params["uri"]
-        self.uri = params["uri"]
-        self.tls = params["tls"]
-        self.tlsCACertFile = params["tlsCACertFile"]
-        self.tlsCACertDir = params["tlsCACertDir"]
-        self.tlsRequireCert = params["tlsRequireCert"]
-        self.credentials = params["credentials"]
-        self.authMethod = params["authMethod"]
-        self.rdnSchema = params["rdnSchema"]
-        self.groupSchema = params["groupSchema"]
-        self.resourceSchema = params["resourceSchema"]
-        self.poddingSchema = params["poddingSchema"]
-
-        self.base = ldap.dn.str2dn(self.rdnSchema["base"])
-
-        # Certain attributes (such as entryUUID) may be hidden and not
-        # returned by default when queried for all attributes. Therefore it is
-        # necessary to explicitly pass all the possible attributes list
-        # for ldap searches.  Dynamically build the attribute list based on
-        # config.
-        attrSet = set()
-
-        if self.rdnSchema["guidAttr"]:
-            attrSet.add(self.rdnSchema["guidAttr"])
-        for recordType in self.recordTypes():
-            if self.rdnSchema[recordType]["attr"]:
-                attrSet.add(self.rdnSchema[recordType]["attr"])
-            for n in ("calendarEnabledAttr", "associatedAddressAttr",
-                      "streetAddressAttr", "geoAttr"):
-                if self.rdnSchema[recordType].get(n, False):
-                    attrSet.add(self.rdnSchema[recordType][n])
-            for attrList in self.rdnSchema[recordType]["mapping"].values():
-                if attrList:
-                    # Since emailAddresses can map to multiple LDAP fields,
-                    # support either string or list
-                    if isinstance(attrList, str):
-                        attrList = [attrList]
-                    for attr in attrList:
-                        attrSet.add(attr)
-            # Also put the guidAttr attribute into the mappings for each type
-            # so recordsMatchingFields can query on guid
-            self.rdnSchema[recordType]["mapping"]["guid"] = self.rdnSchema["guidAttr"]
-            # Also put the memberIdAttr attribute into the mappings for each type
-            # so recordsMatchingFields can query on memberIdAttr
-            self.rdnSchema[recordType]["mapping"]["memberIdAttr"] = self.groupSchema["memberIdAttr"]
-        if self.groupSchema["membersAttr"]:
-            attrSet.add(self.groupSchema["membersAttr"])
-        if self.groupSchema["nestedGroupsAttr"]:
-            attrSet.add(self.groupSchema["nestedGroupsAttr"])
-        if self.groupSchema["memberIdAttr"]:
-            attrSet.add(self.groupSchema["memberIdAttr"])
-        if self.rdnSchema["users"]["loginEnabledAttr"]:
-            attrSet.add(self.rdnSchema["users"]["loginEnabledAttr"])
-        if self.resourceSchema["resourceInfoAttr"]:
-            attrSet.add(self.resourceSchema["resourceInfoAttr"])
-        if self.resourceSchema["autoScheduleAttr"]:
-            attrSet.add(self.resourceSchema["autoScheduleAttr"])
-        if self.resourceSchema["autoAcceptGroupAttr"]:
-            attrSet.add(self.resourceSchema["autoAcceptGroupAttr"])
-        if self.resourceSchema["proxyAttr"]:
-            attrSet.add(self.resourceSchema["proxyAttr"])
-        if self.resourceSchema["readOnlyProxyAttr"]:
-            attrSet.add(self.resourceSchema["readOnlyProxyAttr"])
-        if self.poddingSchema["serverIdAttr"]:
-            attrSet.add(self.poddingSchema["serverIdAttr"])
-        self.attrlist = list(attrSet)
-
-        self.typeDNs = {}
-        for recordType in self.recordTypes():
-            self.typeDNs[recordType] = ldap.dn.str2dn(
-                self.rdnSchema[recordType]["rdn"].lower()
-            ) + self.base
-
-        self.ldap = None
-
-        # Separate LDAP connection used solely for authenticating clients
-        self.authLDAP = None
-
-        # Restricting access by directory group
-        self.restrictEnabledRecords = params['restrictEnabledRecords']
-        self.restrictToGroup = params['restrictToGroup']
-        self.restrictedTimestamp = 0
-
-
-    def recordTypes(self):
-        return self._recordTypes
-
-
-    def listRecords(self, recordType):
-
-        # Build base for this record Type
-        base = self.typeDNs[recordType]
-
-        # Build filter
-        filterstr = "(!(objectClass=organizationalUnit))"
-        typeFilter = self.rdnSchema[recordType].get("filter", "")
-        if typeFilter:
-            filterstr = "(&%s%s)" % (filterstr, typeFilter)
-
-        # Query the LDAP server
-        self.log.debug(
-            "Querying ldap for records matching base {base} and "
-            "filter {filter} for attributes {attrs}.",
-            base=ldap.dn.dn2str(base), filter=filterstr,
-            attrs=self.attrlist
-        )
-
-        # This takes a while, so if you don't want to have a "long request"
-        # warning logged, use this instead of timedSearch:
-        # results = self.ldap.search_s(ldap.dn.dn2str(base),
-        #     ldap.SCOPE_SUBTREE, filterstr=filterstr, attrlist=self.attrlist)
-        results = self.timedSearch(
-            ldap.dn.dn2str(base), ldap.SCOPE_SUBTREE,
-            filterstr=filterstr, attrlist=self.attrlist
-        )
-
-        records = []
-        numMissingGuids = 0
-        guidAttr = self.rdnSchema["guidAttr"]
-        for dn, attrs in results:
-            dn = normalizeDNstr(dn)
-
-            unrestricted = self.isAllowedByRestrictToGroup(dn, attrs)
-
-            try:
-                record = self._ldapResultToRecord(dn, attrs, recordType)
-                # self.log.debug("Got LDAP record {record}", record=record)
-            except MissingGuidException:
-                numMissingGuids += 1
-                continue
-
-            if not unrestricted:
-                self.log.debug(
-                    "{dn} is not enabled because it's not a member of group: "
-                    "{group}", dn=dn, group=self.restrictToGroup
-                )
-                record.enabledForCalendaring = False
-                record.enabledForAddressBooks = False
-
-            records.append(record)
-
-        if numMissingGuids:
-            self.log.info(
-                "{num} {recordType} records are missing {attr}",
-                num=numMissingGuids, recordType=recordType, attr=guidAttr
-            )
-
-        return records
-
-
-    @inlineCallbacks
-    def recordWithCachedGroupsAlias(self, recordType, alias):
-        """
-        @param recordType: the type of the record to look up.
-        @param alias: the cached-groups alias of the record to look up.
-        @type alias: C{str}
-
-        @return: a deferred L{IDirectoryRecord} with the given cached-groups
-            alias, or C{None} if no such record is found.
-        """
-        memberIdAttr = self.groupSchema["memberIdAttr"]
-        attributeToSearch = "memberIdAttr" if memberIdAttr else "dn"
-
-        fields = [[attributeToSearch, alias, False, "equals"]]
-        results = yield self.recordsMatchingFields(
-            fields, recordType=recordType
-        )
-        if results:
-            returnValue(results[0])
-        else:
-            returnValue(None)
-
-
-    def getExternalProxyAssignments(self):
-        """
-        Retrieve proxy assignments for locations and resources from the
-        directory and return a list of (principalUID, [memberUIDs]) tuples,
-        suitable for passing to proxyDB.setGroupMembers().
-        """
-        assignments = []
-
-        guidAttr = self.rdnSchema["guidAttr"]
-        readAttr = self.resourceSchema["readOnlyProxyAttr"]
-        writeAttr = self.resourceSchema["proxyAttr"]
-        if not (guidAttr and readAttr and writeAttr):
-            self.log.error(
-                "LDAP configuration requires guidAttr, proxyAttr, and "
-                "readOnlyProxyAttr in order to use external proxy assignments "
-                "efficiently; falling back to slower method"
-            )
-            # Fall back to the less-specialized version
-            return super(
-                LdapDirectoryService, self
-            ).getExternalProxyAssignments()
-
-        # Build filter
-        filterstr = "(|(%s=*)(%s=*))" % (readAttr, writeAttr)
-        # ...taking into account only calendar-enabled records
-        enabledAttr = self.rdnSchema["locations"]["calendarEnabledAttr"]
-        enabledValue = self.rdnSchema["locations"]["calendarEnabledValue"]
-        if enabledAttr and enabledValue:
-            filterstr = "(&(%s=%s)%s)" % (enabledAttr, enabledValue, filterstr)
-
-        attrlist = [guidAttr, readAttr, writeAttr]
-
-        # Query the LDAP server
-        self.log.debug(
-            "Querying ldap for records matching base {base} and filter "
-            "{filter} for attributes {attrs}.",
-            base=ldap.dn.dn2str(self.base), filter=filterstr,
-            attrs=attrlist
-        )
-
-        results = self.timedSearch(ldap.dn.dn2str(self.base),
-                                   ldap.SCOPE_SUBTREE, filterstr=filterstr,
-                                   attrlist=attrlist)
-
-        for dn, attrs in results:
-            dn = normalizeDNstr(dn)
-            guid = self._getUniqueLdapAttribute(attrs, guidAttr)
-            if guid:
-                guid = normalizeUUID(guid)
-                readDelegate = self._getUniqueLdapAttribute(attrs, readAttr)
-                if readDelegate:
-                    readDelegate = normalizeUUID(readDelegate)
-                    assignments.append(
-                        ("%s#calendar-proxy-read" % (guid,), [readDelegate])
-                    )
-                writeDelegate = self._getUniqueLdapAttribute(attrs, writeAttr)
-                if writeDelegate:
-                    writeDelegate = normalizeUUID(writeDelegate)
-                    assignments.append(
-                        ("%s#calendar-proxy-write" % (guid,), [writeDelegate])
-                    )
-
-        return assignments
-
-
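For illustration, a minimal sketch of the shapes produced by getExternalProxyAssignments() above; the attribute names and principal UIDs are invented, and ldapEsc-style escaping is omitted:

    # Hypothetical attribute names standing in for guidAttr,
    # readOnlyProxyAttr and proxyAttr from the configured schema.
    guidAttr = "entryUUID"
    readAttr = "readOnlyProxy"
    writeAttr = "writeProxy"

    # Filter built above: any entry carrying either proxy attribute.
    filterstr = "(|(%s=*)(%s=*))" % (readAttr, writeAttr)
    # -> "(|(readOnlyProxy=*)(writeProxy=*))"

    # Each matching entry yields up to two assignment tuples, keyed by the
    # principal UID plus a proxy-type suffix, suitable for setGroupMembers():
    assignments = [
        ("location-guid-1#calendar-proxy-read", ["reader-guid-1"]),
        ("location-guid-1#calendar-proxy-write", ["writer-guid-1"]),
    ]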
-    def getLDAPConnection(self):
-        if self.ldap is None:
-            self.log.info("Connecting to LDAP {uri}", uri=repr(self.uri))
-            self.ldap = self.createLDAPConnection()
-            self.log.info(
-                "Connection established to LDAP {uri}", uri=repr(self.uri)
-            )
-            if self.credentials.get("dn", ""):
-                try:
-                    self.log.info(
-                        "Binding to LDAP {dn}",
-                        dn=repr(self.credentials.get("dn"))
-                    )
-                    self.ldap.simple_bind_s(
-                        self.credentials.get("dn"),
-                        self.credentials.get("password"),
-                    )
-                    self.log.info(
-                        "Successfully authenticated with LDAP as {dn}",
-                        dn=repr(self.credentials.get("dn"))
-                    )
-                except ldap.INVALID_CREDENTIALS:
-                    self.log.error(
-                        "Can't bind to LDAP {uri}: check credentials",
-                        uri=self.uri
-                    )
-                    raise DirectoryConfigurationError()
-
-        return self.ldap
-
-
-    def createLDAPConnection(self):
-        """
-        Create and configure LDAP connection
-        """
-        cxn = ldap.initialize(self.uri)
-
-        if self.tlsCACertFile:
-            cxn.set_option(ldap.OPT_X_TLS_CACERTFILE, self.tlsCACertFile)
-        if self.tlsCACertDir:
-            cxn.set_option(ldap.OPT_X_TLS_CACERTDIR, self.tlsCACertDir)
-
-        if self.tlsRequireCert == "never":
-            cxn.set_option(ldap.OPT_X_TLS, ldap.OPT_X_TLS_NEVER)
-        elif self.tlsRequireCert == "allow":
-            cxn.set_option(ldap.OPT_X_TLS, ldap.OPT_X_TLS_ALLOW)
-        elif self.tlsRequireCert == "try":
-            cxn.set_option(ldap.OPT_X_TLS, ldap.OPT_X_TLS_TRY)
-        elif self.tlsRequireCert == "demand":
-            cxn.set_option(ldap.OPT_X_TLS, ldap.OPT_X_TLS_DEMAND)
-        elif self.tlsRequireCert == "hard":
-            cxn.set_option(ldap.OPT_X_TLS, ldap.OPT_X_TLS_HARD)
-
-        if self.tls:
-            cxn.start_tls_s()
-
-        return cxn
-
-
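The elif chain in createLDAPConnection() maps the tlsRequireCert setting onto python-ldap constants. A table-driven sketch with the same behaviour (assumes the python-ldap package; the function and parameter names here are illustrative, not part of the service):

    import ldap

    # Same constants the method above uses, expressed as a lookup table.
    REQUIRE_CERT = {
        "never": ldap.OPT_X_TLS_NEVER,
        "allow": ldap.OPT_X_TLS_ALLOW,
        "try": ldap.OPT_X_TLS_TRY,
        "demand": ldap.OPT_X_TLS_DEMAND,
        "hard": ldap.OPT_X_TLS_HARD,
    }

    def createConnection(uri, caCertFile=None, requireCert=None, useTLS=False):
        cxn = ldap.initialize(uri)
        if caCertFile:
            cxn.set_option(ldap.OPT_X_TLS_CACERTFILE, caCertFile)
        if requireCert in REQUIRE_CERT:
            cxn.set_option(ldap.OPT_X_TLS, REQUIRE_CERT[requireCert])
        if useTLS:
            cxn.start_tls_s()   # upgrade the connection with STARTTLS
        return cxn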
-    def authenticate(self, dn, password):
-        """
-        Perform simple bind auth, raising ldap.INVALID_CREDENTIALS if
-        bad password
-        """
-        TRIES = 3
-
-        for _ignore_i in xrange(TRIES):
-            self.log.debug("Authenticating {dn}", dn=dn)
-
-            if self.authLDAP is None:
-                self.log.debug("Creating authentication connection to LDAP")
-                self.authLDAP = self.createLDAPConnection()
-
-            try:
-                startTime = time.time()
-                self.authLDAP.simple_bind_s(dn, password)
-                # Getting here means success, so break the retry loop
-                break
-
-            except ldap.INAPPROPRIATE_AUTH:
-                # Seen when using an empty password, treat as invalid creds
-                raise ldap.INVALID_CREDENTIALS()
-
-            except ldap.NO_SUCH_OBJECT:
-                self.log.error(
-                    "LDAP Authentication error for {dn}: NO_SUCH_OBJECT",
-                    dn=dn
-                )
-                # fall through to try again; could be transient
-
-            except ldap.INVALID_CREDENTIALS:
-                raise
-
-            except ldap.SERVER_DOWN:
-                self.log.error("Lost connection to LDAP server.")
-                self.authLDAP = None
-                # Fall through and retry if TRIES has been reached
-
-            except Exception, e:
-                self.log.error(
-                    "LDAP authentication failed with {e}.", e=e
-                )
-                raise
-
-            finally:
-                totalTime = time.time() - startTime
-                if totalTime > self.warningThresholdSeconds:
-                    self.log.error(
-                        "LDAP auth exceeded threshold: {time:.2f} seconds for "
-                        "{dn}", time=totalTime, dn=dn
-                    )
-
-        else:
-            self.log.error(
-                "Giving up on LDAP authentication after {count:d} tries.  "
-                "Responding with 503.", count=TRIES
-            )
-            raise HTTPError(StatusResponse(
-                responsecode.SERVICE_UNAVAILABLE, "LDAP server unavailable"
-            ))
-
-        self.log.debug("Authentication succeeded for {dn}", dn=dn)
-
-
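authenticate() relies on Python's for/else: the else block only runs when the loop finishes without hitting break, i.e. when no bind attempt succeeded. A minimal sketch of that control flow (Python 2, matching the surrounding module), with LDAP replaced by a caller-supplied callable:

    class TransientBindError(Exception):
        """Stands in for ldap.SERVER_DOWN / other retriable failures."""

    def bindWithRetry(bindOnce, tries=3):
        # Sketch of the retry pattern used by authenticate() above.
        for _attempt in xrange(tries):
            try:
                bindOnce()
                break              # success: the else clause is skipped
            except TransientBindError:
                continue           # transient failure: retry
        else:
            # Reached only if every attempt failed (no break executed).
            raise RuntimeError("giving up after %d tries" % (tries,))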
-    def timedSearch(
-        self, base, scope, filterstr="(objectClass=*)", attrlist=None,
-        timeoutSeconds=-1, resultLimit=0
-    ):
-        """
-        Execute an LDAP query, retrying up to 3 times in case the LDAP server
-        has gone down and we need to reconnect. If it takes longer than the
-        configured threshold, emit a log error.
-        The number of records requested is controlled by resultLimit (0=no
-        limit).
-        If timeoutSeconds is not -1, the query will abort after the specified
-        number of seconds and the results retrieved so far are returned.
-        """
-        TRIES = 3
-
-        for i in xrange(TRIES):
-            try:
-                s = ldap.async.List(self.getLDAPConnection())
-                s.startSearch(
-                    base, scope, filterstr, attrList=attrlist,
-                    timeout=timeoutSeconds, sizelimit=resultLimit
-                )
-                startTime = time.time()
-                s.processResults()
-            except ldap.NO_SUCH_OBJECT:
-                return []
-            except ldap.FILTER_ERROR, e:
-                self.log.error(
-                    "LDAP filter error: {e} {filter}", e=e, filter=filterstr
-                )
-                return []
-            except ldap.SIZELIMIT_EXCEEDED, e:
-                self.log.debug(
-                    "LDAP result limit exceeded: {limit:d}", limit=resultLimit
-                )
-            except ldap.TIMELIMIT_EXCEEDED, e:
-                self.log.warn(
-                    "LDAP timeout exceeded: {t:d} seconds", t=timeoutSeconds
-                )
-            except ldap.SERVER_DOWN:
-                self.ldap = None
-                self.log.error(
-                    "LDAP server unavailable (tried {count:d} times)",
-                    count=(i + 1)
-                )
-                continue
-
-            # Reformat the results, dropping each entry's resultType
-            result = [
-                resultItem for _ignore_resultType, resultItem in s.allResults
-            ]
-
-            totalTime = time.time() - startTime
-            if totalTime > self.warningThresholdSeconds:
-                if filterstr and len(filterstr) > 100:
-                    filterstr = "%s..." % (filterstr[:100],)
-                self.log.error(
-                    "LDAP query exceeded threshold: {time:.2f} seconds for "
-                    "{base} {filter} {attrs} (#results={count:d})",
-                    time=totalTime, base=base, filter=filterstr,
-                    attrs=attrlist, count=len(result),
-                )
-            return result
-
-        raise HTTPError(StatusResponse(
-            responsecode.SERVICE_UNAVAILABLE, "LDAP server unavailable"
-        ))
-
-
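A usage sketch for timedSearch() as defined above; the base DN, filter and attribute list are invented, and `service` is assumed to be an already-configured LdapDirectoryService:

    results = service.timedSearch(
        "ou=people,dc=example,dc=com",   # base DN string
        ldap.SCOPE_SUBTREE,
        filterstr="(uid=jdoe)",
        attrlist=["uid", "cn", "mail"],
        timeoutSeconds=10,               # abort after 10s, keep partial results
        resultLimit=50,                  # sizelimit handed to the async search
    )
    for dn, attrs in results:
        dn = normalizeDNstr(dn)
        # attrs maps each attribute name to a list of values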
-    def isAllowedByRestrictToGroup(self, dn, attrs):
-        """
-        Check to see if the principal with the given DN and LDAP attributes is
-        a member of the restrictToGroup.
-
-        @param dn: an LDAP dn
-        @type dn: C{str}
-        @param attrs: LDAP attributes
-        @type attrs: C{dict}
-        @return: True if the principal is in the group, or if the
-            restrictEnabledRecords setting is turned off.
-        @rtype: C{boolean}
-        """
-        if not self.restrictEnabledRecords:
-            return True
-        if self.groupSchema["memberIdAttr"]:
-            value = self._getUniqueLdapAttribute(
-                attrs, self.groupSchema["memberIdAttr"]
-            )
-        else:  # No memberIdAttr implies DN
-            value = dn
-        return value in self.restrictedPrincipals
-
-
-    @property
-    def restrictedPrincipals(self):
-        """
-        Look up (and cache) the set of guids that are members of the
-        restrictToGroup.  If restrictToGroup is not set, return None to
-        indicate there are no group restrictions.
-        """
-        if self.restrictEnabledRecords:
-
-            if time.time() - self.restrictedTimestamp > self.cacheTimeout:
-                # fault in the members of group of name self.restrictToGroup
-                recordType = self.recordType_groups
-                base = self.typeDNs[recordType]
-                # TODO: This shouldn't be hardcoded to cn
-                filterstr = "(cn=%s)" % (self.restrictToGroup,)
-                self.log.debug(
-                    "Retrieving ldap record with base {base} and filter "
-                    "{filter}.",
-                    base=ldap.dn.dn2str(base), filter=filterstr
-                )
-                result = self.timedSearch(
-                    ldap.dn.dn2str(base),
-                    ldap.SCOPE_SUBTREE,
-                    filterstr=filterstr,
-                    attrlist=self.attrlist
-                )
-
-                members = []
-                nestedGroups = []
-
-                if len(result) == 1:
-                    dn, attrs = result[0]
-                    dn = normalizeDNstr(dn)
-                    if self.groupSchema["membersAttr"]:
-                        members = self._getMultipleLdapAttributes(
-                            attrs,
-                            self.groupSchema["membersAttr"]
-                        )
-                        if not self.groupSchema["memberIdAttr"]:  # DNs
-                            members = [normalizeDNstr(m) for m in members]
-                        members = set(members)
-
-                    if self.groupSchema["nestedGroupsAttr"]:
-                        nestedGroups = self._getMultipleLdapAttributes(
-                            attrs,
-                            self.groupSchema["nestedGroupsAttr"]
-                        )
-                        if not self.groupSchema["memberIdAttr"]:  # DNs
-                            nestedGroups = [
-                                normalizeDNstr(g) for g in nestedGroups
-                            ]
-                        nestedGroups = set(nestedGroups)
-                    else:
-                        # Since all members are lumped into the same attribute,
-                        # treat them all as nestedGroups instead
-                        nestedGroups = members
-                        members = set()
-
-                self._cachedRestrictedPrincipals = set(
-                    self._expandGroupMembership(members, nestedGroups)
-                )
-                self.log.info(
-                    "Got {count} restricted group members",
-                    count=len(self._cachedRestrictedPrincipals)
-                )
-                self.restrictedTimestamp = time.time()
-            return self._cachedRestrictedPrincipals
-        else:
-            # No restrictions
-            return None
-
-
-    def _expandGroupMembership(self, members, nestedGroups, processedItems=None):
-        """
-        A generator which recursively yields principals which are included within nestedGroups
-
-        @param members:  If the LDAP service is configured to use different attributes to
-            indicate member users and member nested groups, members will include the non-groups.
-            Otherwise, members will be empty and only nestedGroups will be used.
-        @type members: C{set}
-        @param nestedGroups:  If the LDAP service is configured to use different attributes to
-            indicate member users and member nested groups, nestedGroups will include only
-            the groups; otherwise nestedGroups will include all members
-        @type nestedGroups: C{set}
-        @param processedItems: The set of members that have already been looked up in LDAP
-            so the code doesn't have to look up the same member twice or get stuck in a
-            membership loop.
-        @type processedItems: C{set}
-        @return: All members of the group, the values will correspond to memberIdAttr
-            if memberIdAttr is set in the group schema, or DNs otherwise.
-        @rtype: generator of C{str}
-        """
-
-        if processedItems is None:
-            processedItems = set()
-
-        if isinstance(members, str):
-            members = [members]
-
-        if isinstance(nestedGroups, str):
-            nestedGroups = [nestedGroups]
-
-        for member in members:
-            if member not in processedItems:
-                processedItems.add(member)
-                yield member
-
-        for group in nestedGroups:
-            if group in processedItems:
-                continue
-
-            recordType = self.recordType_groups
-            base = self.typeDNs[recordType]
-            if self.groupSchema["memberIdAttr"]:
-                scope = ldap.SCOPE_SUBTREE
-                base = self.typeDNs[recordType]
-                filterstr = "(%s=%s)" % (self.groupSchema["memberIdAttr"], group)
-            else:  # Use DN
-                scope = ldap.SCOPE_BASE
-                base = ldap.dn.str2dn(group)
-                filterstr = "(objectClass=*)"
-
-            self.log.debug(
-                "Retrieving ldap record with base {base} and filter {filter}.",
-                base=ldap.dn.dn2str(base), filter=filterstr
-            )
-            result = self.timedSearch(ldap.dn.dn2str(base),
-                                      scope,
-                                      filterstr=filterstr,
-                                      attrlist=self.attrlist)
-
-            if len(result) == 0:
-                continue
-
-            subMembers = set()
-            subNestedGroups = set()
-            if len(result) == 1:
-                dn, attrs = result[0]
-                dn = normalizeDNstr(dn)
-                if self.groupSchema["membersAttr"]:
-                    subMembers = self._getMultipleLdapAttributes(
-                        attrs,
-                        self.groupSchema["membersAttr"]
-                    )
-                    if not self.groupSchema["memberIdAttr"]:  # these are DNs
-                        subMembers = [normalizeDNstr(m) for m in subMembers]
-                    subMembers = set(subMembers)
-
-                if self.groupSchema["nestedGroupsAttr"]:
-                    subNestedGroups = self._getMultipleLdapAttributes(
-                        attrs,
-                        self.groupSchema["nestedGroupsAttr"]
-                    )
-                    if not self.groupSchema["memberIdAttr"]:  # these are DNs
-                        subNestedGroups = [normalizeDNstr(g) for g in subNestedGroups]
-                    subNestedGroups = set(subNestedGroups)
-
-            processedItems.add(group)
-            yield group
-
-            for item in self._expandGroupMembership(subMembers,
-                                                    subNestedGroups,
-                                                    processedItems):
-                yield item
-
-
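The recursion in _expandGroupMembership() is easier to see with the LDAP lookups replaced by an in-memory dict. A standalone sketch with made-up group data:

    def expandMembership(members, nestedGroups, lookup, processed=None):
        # lookup maps a group id to a (members, nestedGroups) pair and
        # stands in for the timedSearch() calls in the method above.
        if processed is None:
            processed = set()
        for member in members:
            if member not in processed:
                processed.add(member)
                yield member
        for group in nestedGroups:
            if group in processed:
                continue
            subMembers, subNested = lookup.get(group, (set(), set()))
            processed.add(group)
            yield group
            for item in expandMembership(subMembers, subNested, lookup, processed):
                yield item

    # "staff" nests "eng", which has two direct members.
    lookup = {
        "staff": (set(), set(["eng"])),
        "eng": (set(["alice", "bob"]), set()),
    }
    sorted(expandMembership(set(), set(["staff"]), lookup))
    # -> ['alice', 'bob', 'eng', 'staff']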
-    def _getUniqueLdapAttribute(self, attrs, *keys):
-        """
-        Get the first value for one or several attributes
-        Useful when attributes have aliases (e.g. sn vs. surname)
-        """
-        for key in keys:
-            values = attrs.get(key)
-            if values is not None:
-                return values[0]
-        return None
-
-
-    def _getMultipleLdapAttributes(self, attrs, *keys):
-        """
-        Get all values for one or several attributes
-        """
-        results = []
-        for key in keys:
-            if key:
-                values = attrs.get(key)
-                if values is not None:
-                    results += values
-        return results
-
-
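For clarity, how the two helpers above behave on a parsed entry; the attrs dict is invented and mirrors python-ldap's attribute-name-to-value-list shape:

    attrs = {
        "cn": ["Jane Doe"],
        "mail": ["jane@example.com", "jdoe@example.com"],
    }

    # _getUniqueLdapAttribute: first value of the first key present.
    #   _getUniqueLdapAttribute(attrs, "displayName", "cn")  -> "Jane Doe"

    # _getMultipleLdapAttributes: all values of every present key, concatenated.
    #   _getMultipleLdapAttributes(attrs, "mail", "mailAlias")
    #       -> ["jane@example.com", "jdoe@example.com"]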
-    def _ldapResultToRecord(self, dn, attrs, recordType):
-        """
-        Convert the attrs returned by a LDAP search into a LdapDirectoryRecord
-        object.
-
-        If guidAttr was specified in the config but is missing from attrs,
-        raises MissingGuidException
-        """
-
-        guid = None
-        authIDs = set()
-        fullName = None
-        firstName = ""
-        lastName = ""
-        emailAddresses = set()
-        enabledForCalendaring = None
-        enabledForAddressBooks = None
-        uid = None
-        enabledForLogin = True
-        extras = {}
-
-        shortNames = tuple(self._getMultipleLdapAttributes(attrs, self.rdnSchema[recordType]["mapping"]["recordName"]))
-        if not shortNames:
-            raise MissingRecordNameException()
-
-        # First check for and add guid
-        guidAttr = self.rdnSchema["guidAttr"]
-        if guidAttr:
-            guid = self._getUniqueLdapAttribute(attrs, guidAttr)
-            if not guid:
-                self.log.debug(
-                    "LDAP data for {shortNames} is missing guid attribute "
-                    "{attr}",
-                    shortNames=shortNames, attr=guidAttr
-                )
-                raise MissingGuidException()
-            guid = normalizeUUID(guid)
-
-        # Find or build email
-        # (The emailAddresses mapping is a list of ldap fields)
-        emailAddressesMappedTo = self.rdnSchema[recordType]["mapping"].get("emailAddresses", "")
-        # Supporting either string or list for emailAddresses:
-        if isinstance(emailAddressesMappedTo, str):
-            emailAddresses = set(self._getMultipleLdapAttributes(attrs, emailAddressesMappedTo))
-        else:
-            emailAddresses = set(self._getMultipleLdapAttributes(attrs, *self.rdnSchema[recordType]["mapping"]["emailAddresses"]))
-        emailSuffix = self.rdnSchema[recordType].get("emailSuffix", None)
-
-        if len(emailAddresses) == 0 and emailSuffix:
-            emailPrefix = self._getUniqueLdapAttribute(
-                attrs,
-                self.rdnSchema[recordType].get("attr", "cn")
-            )
-            emailAddresses.add(emailPrefix + emailSuffix)
-
-        proxyGUIDs = ()
-        readOnlyProxyGUIDs = ()
-        autoSchedule = False
-        autoAcceptGroup = ""
-        memberGUIDs = []
-
-        # LDAP attribute -> principal matchings
-        if recordType == self.recordType_users:
-            fullName = self._getUniqueLdapAttribute(attrs, self.rdnSchema[recordType]["mapping"]["fullName"])
-            firstName = self._getUniqueLdapAttribute(attrs, self.rdnSchema[recordType]["mapping"]["firstName"])
-            lastName = self._getUniqueLdapAttribute(attrs, self.rdnSchema[recordType]["mapping"]["lastName"])
-            enabledForCalendaring = True
-            enabledForAddressBooks = True
-
-        elif recordType == self.recordType_groups:
-            fullName = self._getUniqueLdapAttribute(attrs, self.rdnSchema[recordType]["mapping"]["fullName"])
-            enabledForCalendaring = False
-            enabledForAddressBooks = False
-            enabledForLogin = False
-
-            if self.groupSchema["membersAttr"]:
-                members = self._getMultipleLdapAttributes(attrs, self.groupSchema["membersAttr"])
-                memberGUIDs.extend(members)
-            if self.groupSchema["nestedGroupsAttr"]:
-                members = self._getMultipleLdapAttributes(attrs, self.groupSchema["nestedGroupsAttr"])
-                memberGUIDs.extend(members)
-
-            # Normalize members if they're in DN form
-            if not self.groupSchema["memberIdAttr"]:  # empty = dn
-                guids = list(memberGUIDs)
-                memberGUIDs = []
-                for dnStr in guids:
-                    try:
-                        dnStr = normalizeDNstr(dnStr)
-                        memberGUIDs.append(dnStr)
-                    except Exception, e:
-                        # LDAP returned an illegal DN value, log and ignore it
-                        self.log.warn("Bad LDAP DN: {dn!r}", dn=dnStr)
-
-        elif recordType in (self.recordType_resources,
-                            self.recordType_locations):
-            fullName = self._getUniqueLdapAttribute(attrs, self.rdnSchema[recordType]["mapping"]["fullName"])
-            enabledForCalendaring = True
-            enabledForAddressBooks = False
-            enabledForLogin = False
-            if self.resourceSchema["resourceInfoAttr"]:
-                resourceInfo = self._getUniqueLdapAttribute(
-                    attrs,
-                    self.resourceSchema["resourceInfoAttr"]
-                )
-                if resourceInfo:
-                    try:
-                        (
-                            autoSchedule,
-                            proxy,
-                            readOnlyProxy,
-                            autoAcceptGroup
-                        ) = self.parseResourceInfo(
-                            resourceInfo,
-                            guid,
-                            recordType,
-                            shortNames[0]
-                        )
-                        if proxy:
-                            proxyGUIDs = (proxy,)
-                        if readOnlyProxy:
-                            readOnlyProxyGUIDs = (readOnlyProxy,)
-                    except ValueError, e:
-                        self.log.error(
-                            "Unable to parse resource info: {e}", e=e
-                        )
-            else:  # the individual resource attributes might be specified
-                if self.resourceSchema["autoScheduleAttr"]:
-                    autoScheduleValue = self._getUniqueLdapAttribute(
-                        attrs,
-                        self.resourceSchema["autoScheduleAttr"]
-                    )
-                    autoSchedule = (
-                        autoScheduleValue == self.resourceSchema["autoScheduleEnabledValue"]
-                    )
-                if self.resourceSchema["proxyAttr"]:
-                    proxyGUIDs = set(
-                        self._getMultipleLdapAttributes(
-                            attrs,
-                            self.resourceSchema["proxyAttr"]
-                        )
-                    )
-                if self.resourceSchema["readOnlyProxyAttr"]:
-                    readOnlyProxyGUIDs = set(
-                        self._getMultipleLdapAttributes(
-                            attrs,
-                            self.resourceSchema["readOnlyProxyAttr"]
-                        )
-                    )
-                if self.resourceSchema["autoAcceptGroupAttr"]:
-                    autoAcceptGroup = self._getUniqueLdapAttribute(
-                        attrs,
-                        self.resourceSchema["autoAcceptGroupAttr"]
-                    )
-
-            if recordType == self.recordType_locations:
-                if self.rdnSchema[recordType].get("associatedAddressAttr", ""):
-                    associatedAddress = self._getUniqueLdapAttribute(
-                        attrs,
-                        self.rdnSchema[recordType]["associatedAddressAttr"]
-                    )
-                    if associatedAddress:
-                        extras["associatedAddress"] = associatedAddress
-
-        elif recordType == self.recordType_addresses:
-            if self.rdnSchema[recordType].get("geoAttr", ""):
-                geo = self._getUniqueLdapAttribute(
-                    attrs,
-                    self.rdnSchema[recordType]["geoAttr"]
-                )
-                if geo:
-                    extras["geo"] = geo
-            if self.rdnSchema[recordType].get("streetAddressAttr", ""):
-                street = self._getUniqueLdapAttribute(
-                    attrs,
-                    self.rdnSchema[recordType]["streetAddressAttr"]
-                )
-                if street:
-                    extras["streetAddress"] = street
-
-        serverID = None
-        if self.poddingSchema["serverIdAttr"]:
-            serverID = self._getUniqueLdapAttribute(
-                attrs,
-                self.poddingSchema["serverIdAttr"]
-            )
-
-        record = LdapDirectoryRecord(
-            service=self,
-            recordType=recordType,
-            guid=guid,
-            shortNames=shortNames,
-            authIDs=authIDs,
-            fullName=fullName,
-            firstName=firstName,
-            lastName=lastName,
-            emailAddresses=emailAddresses,
-            uid=uid,
-            dn=dn,
-            memberGUIDs=memberGUIDs,
-            extProxies=proxyGUIDs,
-            extReadOnlyProxies=readOnlyProxyGUIDs,
-            attrs=attrs,
-            **extras
-        )
-
-        if self.augmentService is not None:
-            # Look up augment information
-            # TODO: this needs to be deferred but for now we hard code
-            # the deferred result because we know it is completing
-            # immediately.
-            d = self.augmentService.getAugmentRecord(record.guid, recordType)
-            d.addCallback(lambda x: record.addAugmentInformation(x))
-
-        else:
-            # Generate augment record based on information retrieved from LDAP
-            augmentRecord = AugmentRecord(
-                guid,
-                enabled=True,
-                serverID=serverID,
-                enabledForCalendaring=enabledForCalendaring,
-                autoSchedule=autoSchedule,
-                autoAcceptGroup=autoAcceptGroup,
-                enabledForAddressBooks=enabledForAddressBooks,  # TODO: add to LDAP?
-                enabledForLogin=enabledForLogin,
-            )
-            record.addAugmentInformation(augmentRecord)
-
-        # Override with LDAP login control if attribute specified
-        if recordType == self.recordType_users:
-            loginEnabledAttr = self.rdnSchema[recordType]["loginEnabledAttr"]
-            if loginEnabledAttr:
-                loginEnabledValue = self.rdnSchema[recordType]["loginEnabledValue"]
-                record.enabledForLogin = self._getUniqueLdapAttribute(
-                    attrs, loginEnabledAttr
-                ) == loginEnabledValue
-
-        # Override with LDAP calendar-enabled control if attribute specified
-        calendarEnabledAttr = self.rdnSchema[recordType].get("calendarEnabledAttr", "")
-        if calendarEnabledAttr:
-            calendarEnabledValue = self.rdnSchema[recordType]["calendarEnabledValue"]
-            record.enabledForCalendaring = self._getUniqueLdapAttribute(
-                attrs,
-                calendarEnabledAttr
-            ) == calendarEnabledValue
-
-        return record
-
-
-    def queryDirectory(
-        self, recordTypes, indexType, indexKey, queryMethod=None
-    ):
-        """
-        Queries the LDAP directory for the record which has an attribute value
-        matching the indexType and indexKey parameters.
-
-        recordTypes is a list of record types to limit the search to.
-        indexType specifies one of the CachingDirectoryService constants
-            identifying which attribute to search on.
-        indexKey is the value to search for.
-
-        Nothing is returned -- the resulting record (if any) is placed in
-        the cache.
-        """
-
-        if queryMethod is None:
-            queryMethod = self.timedSearch
-
-        self.log.debug(
-            "LDAP query for types {types}, indexType {indexType} and "
-            "indexKey {indexKey}",
-            types=recordTypes, indexType=indexType, indexKey=indexKey
-        )
-
-        guidAttr = self.rdnSchema["guidAttr"]
-        for recordType in recordTypes:
-            # Build base for this record Type
-            base = self.typeDNs[recordType]
-
-            # Build filter
-            filterstr = "(!(objectClass=organizationalUnit))"
-            typeFilter = self.rdnSchema[recordType].get("filter", "")
-            if typeFilter:
-                filterstr = "(&%s%s)" % (filterstr, typeFilter)
-
-            if indexType == self.INDEX_TYPE_GUID:
-                # Query on guid only works if guid attribute has been defined.
-                # Support for query on guid even if it is auto-generated
-                # should be added.
-                if not guidAttr:
-                    return
-                filterstr = "(&%s(%s=%s))" % (filterstr, guidAttr, indexKey)
-
-            elif indexType == self.INDEX_TYPE_SHORTNAME:
-                filterstr = "(&%s(%s=%s))" % (
-                    filterstr,
-                    self.rdnSchema[recordType]["mapping"]["recordName"],
-                    ldapEsc(indexKey)
-                )
-
-            elif indexType == self.INDEX_TYPE_CUA:
-                # indexKey is of the form "mailto:test at example.net"
-                email = indexKey[7:]  # strip "mailto:"
-                emailSuffix = self.rdnSchema[recordType].get(
-                    "emailSuffix", None
-                )
-                if (
-                    emailSuffix is not None and
-                    email.partition("@")[2] == emailSuffix
-                ):
-                    filterstr = "(&%s(|(&(!(mail=*))(%s=%s))(mail=%s)))" % (
-                        filterstr,
-                        self.rdnSchema[recordType].get("attr", "cn"),
-                        email.partition("@")[0],
-                        ldapEsc(email)
-                    )
-                else:
-                    # emailAddresses can map to multiple LDAP fields
-                    ldapFields = self.rdnSchema[recordType]["mapping"].get(
-                        "emailAddresses", ""
-                    )
-                    if isinstance(ldapFields, str):
-                        if ldapFields:
-                            subfilter = (
-                                "(%s=%s)" % (ldapFields, ldapEsc(email))
-                            )
-                        else:
-                            # No LDAP attribute assigned for emailAddresses
-                            continue
-
-                    else:
-                        subfilter = []
-                        for ldapField in ldapFields:
-                            if ldapField:
-                                subfilter.append(
-                                    "(%s=%s)" % (ldapField, ldapEsc(email))
-                                )
-                        if not subfilter:
-                            # No LDAP attribute assigned for emailAddresses
-                            continue
-
-                        subfilter = "(|%s)" % ("".join(subfilter))
-                    filterstr = "(&%s%s)" % (filterstr, subfilter)
-
-            elif indexType == self.INDEX_TYPE_AUTHID:
-                return
-
-            # Query the LDAP server
-            self.log.debug(
-                "Retrieving ldap record with base {base} and filter {filter}.",
-                base=ldap.dn.dn2str(base), filter=filterstr,
-            )
-            result = queryMethod(
-                ldap.dn.dn2str(base),
-                ldap.SCOPE_SUBTREE,
-                filterstr=filterstr,
-                attrlist=self.attrlist,
-            )
-
-            if result:
-                dn, attrs = result.pop()
-                dn = normalizeDNstr(dn)
-
-                unrestricted = self.isAllowedByRestrictToGroup(dn, attrs)
-
-                try:
-                    record = self._ldapResultToRecord(dn, attrs, recordType)
-                    self.log.debug("Got LDAP record {rec}", rec=record)
-
-                    if not unrestricted:
-                        self.log.debug(
-                            "{dn} is not enabled because it's not a member of "
-                            "group {group!r}",
-                            dn=dn, group=self.restrictToGroup
-                        )
-                        record.enabledForCalendaring = False
-                        record.enabledForAddressBooks = False
-
-                    record.applySACLs()
-
-                    self.recordCacheForType(recordType).addRecord(
-                        record, indexType, indexKey
-                    )
-
-                    # We got a match, so don't bother checking other types
-                    break
-
-                except MissingRecordNameException:
-                    self.log.warn(
-                        "Ignoring record missing record name "
-                        "attribute: recordType {recordType}, indexType "
-                        "{indexType} and indexKey {indexKey}",
-                        recordType=recordType, indexType=indexType,
-                        indexKey=indexKey,
-                    )
-
-                except MissingGuidException:
-                    self.log.warn(
-                        "Ignoring record missing guid attribute: "
-                        "recordType {recordType}, indexType {indexType} and "
-                        "indexKey {indexKey}",
-                        recordType=recordType, indexType=indexType,
-                        indexKey=indexKey
-                    )
-
-
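A worked example of the INDEX_TYPE_CUA filter assembled in queryDirectory() above, for a hypothetical lookup of mailto:jdoe@example.com where the configured emailSuffix matches the address and the record-name attribute is cn (ldapEsc escaping is omitted here):

    filterstr = "(!(objectClass=organizationalUnit))"
    email = "jdoe@example.com"

    # Suffix matched: match the local part against cn for entries that have
    # no mail attribute at all, otherwise match the full mail value.
    filterstr = "(&%s(|(&(!(mail=*))(%s=%s))(mail=%s)))" % (
        filterstr, "cn", email.partition("@")[0], email,
    )
    # -> "(&(!(objectClass=organizationalUnit))
    #       (|(&(!(mail=*))(cn=jdoe))(mail=jdoe@example.com)))"
    #    (shown wrapped; the real string is a single line)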
-    def recordsMatchingTokens(self, tokens, context=None, limitResults=50, timeoutSeconds=10):
-        """
-        # TODO: hook up limitResults to the client limit in the query
-
-        @param tokens: The tokens to search on
-        @type tokens: C{list} of C{str} (utf-8 bytes)
-        @param context: An indication of what the end user is searching
-            for; "attendee", "location", or None
-        @type context: C{str}
-        @return: a deferred sequence of L{IDirectoryRecord}s which
-            match the given tokens and optional context.
-
-        Each token is searched for within each record's full name and
-        email address; if each token is found within a record that
-        record is returned in the results.
-
-        If context is None, all record types are considered.  If
-        context is "location", only locations are considered.  If
-        context is "attendee", only users, groups, and resources
-        are considered.
-        """
-        self.log.debug(
-            "Performing calendar user search for {tokens} ({context})",
-            tokens=tokens, context=context
-        )
-        startTime = time.time()
-        records = []
-        recordTypes = self.recordTypesForSearchContext(context)
-        recordTypes = [r for r in recordTypes if r in self.recordTypes()]
-
-        typeCounts = {}
-        for recordType in recordTypes:
-            if limitResults == 0:
-                self.log.debug("LDAP search aggregate limit reached")
-                break
-            typeCounts[recordType] = 0
-            base = self.typeDNs[recordType]
-            scope = ldap.SCOPE_SUBTREE
-            extraFilter = self.rdnSchema[recordType].get("filter", "")
-            filterstr = buildFilterFromTokens(
-                recordType,
-                self.rdnSchema[recordType]["mapping"],
-                tokens,
-                extra=extraFilter
-            )
-
-            if filterstr is not None:
-                # Query the LDAP server
-                self.log.debug(
-                    "LDAP search {base} {filter} (limit={limit:d})",
-                    base=ldap.dn.dn2str(base), filter=filterstr,
-                    limit=limitResults,
-                )
-                results = self.timedSearch(
-                    ldap.dn.dn2str(base),
-                    scope,
-                    filterstr=filterstr,
-                    attrlist=self.attrlist,
-                    timeoutSeconds=timeoutSeconds,
-                    resultLimit=limitResults
-                )
-                numMissingGuids = 0
-                numMissingRecordNames = 0
-                numNotEnabled = 0
-                for dn, attrs in results:
-                    dn = normalizeDNstr(dn)
-                    # Skip if group restriction is in place and guid is not
-                    # a member
-                    if (
-                            recordType != self.recordType_groups and
-                            not self.isAllowedByRestrictToGroup(dn, attrs)
-                    ):
-                        continue
-
-                    try:
-                        record = self._ldapResultToRecord(dn, attrs, recordType)
-
-                        # For non-group records, skip those not enabled for
-                        # calendaring so they do not appear in the search results
-                        if (recordType != self.recordType_groups):
-                            if not record.enabledForCalendaring:
-                                numNotEnabled += 1
-                                continue
-
-                        records.append(record)
-                        typeCounts[recordType] += 1
-                        limitResults -= 1
-
-                    except MissingGuidException:
-                        numMissingGuids += 1
-
-                    except MissingRecordNameException:
-                        numMissingRecordNames += 1
-
-                self.log.debug(
-                    "LDAP search returned {resultCount:d} results, "
-                    "{typeCount:d} usable",
-                    resultCount=len(results), typeCount=typeCounts[recordType]
-                )
-
-        typeCountsStr = ", ".join(
-            ["%s:%d" % (rt, ct) for (rt, ct) in typeCounts.iteritems()]
-        )
-        totalTime = time.time() - startTime
-        self.log.info(
-            "Calendar user search for {tokens} matched {recordCount:d} "
-            "records ({typeCount}) in {time:.2f} seconds",
-            tokens=tokens, recordCount=len(records),
-            typeCount=typeCountsStr, time=totalTime,
-        )
-        return succeed(records)
-
-
-    @inlineCallbacks
-    def recordsMatchingFields(self, fields, operand="or", recordType=None):
-        """
-        Carries out the work of a principal-property-search against LDAP.
-        Returns a deferred list of directory records.
-        """
-        records = []
-
-        self.log.debug(
-            "Performing principal property search for {fields}", fields=fields
-        )
-
-        if recordType is None:
-            # Make a copy since we're modifying it
-            recordTypes = list(self.recordTypes())
-
-            # principal-property-search syntax doesn't provide a way to ask
-            # for 3 of the 4 types (either all types or a single type).  This
-            # is wasteful in the case of iCal looking for event attendees
-            # since it always ignores the locations.  This config flag lets
-            # you skip querying for locations in this case:
-            if not self.queryLocationsImplicitly:
-                if self.recordType_locations in recordTypes:
-                    recordTypes.remove(self.recordType_locations)
-        else:
-            recordTypes = [recordType]
-
-        guidAttr = self.rdnSchema["guidAttr"]
-        for recordType in recordTypes:
-
-            base = self.typeDNs[recordType]
-
-            if fields[0][0] == "dn":
-                # DN's are not an attribute that can be searched on by filter
-                scope = ldap.SCOPE_BASE
-                filterstr = "(objectClass=*)"
-                base = ldap.dn.str2dn(fields[0][1])
-
-            else:
-                scope = ldap.SCOPE_SUBTREE
-                filterstr = buildFilter(
-                    recordType,
-                    self.rdnSchema[recordType]["mapping"],
-                    fields,
-                    operand=operand,
-                    optimizeMultiName=self.optimizeMultiName
-                )
-
-            if filterstr is not None:
-                # Query the LDAP server
-                self.log.debug(
-                    "LDAP search {base} {scope} {filter}",
-                    base=ldap.dn.dn2str(base), scope=scope, filter=filterstr
-                )
-                results = (yield deferToThread(
-                    self.timedSearch,
-                    ldap.dn.dn2str(base),
-                    scope,
-                    filterstr=filterstr,
-                    attrlist=self.attrlist,
-                    timeoutSeconds=self.requestTimeoutSeconds,
-                    resultLimit=self.requestResultsLimit)
-                )
-                self.log.debug(
-                    "LDAP search returned {count} results", count=len(results)
-                )
-                numMissingGuids = 0
-                numMissingRecordNames = 0
-                for dn, attrs in results:
-                    dn = normalizeDNstr(dn)
-                    # Skip if group restriction is in place and guid is not
-                    # a member
-                    if (
-                        recordType != self.recordType_groups and
-                        not self.isAllowedByRestrictToGroup(dn, attrs)
-                    ):
-                        continue
-
-                    try:
-                        record = self._ldapResultToRecord(dn, attrs, recordType)
-
-                        # For non-group records, if not enabled for calendaring do
-                        # not include in principal property search results
-                        if (recordType != self.recordType_groups):
-                            if not record.enabledForCalendaring:
-                                continue
-
-                        records.append(record)
-
-                    except MissingGuidException:
-                        numMissingGuids += 1
-
-                    except MissingRecordNameException:
-                        numMissingRecordNames += 1
-
-                if numMissingGuids:
-                    self.log.warn(
-                        "{count:d} {type} records are missing {attr}",
-                        count=numMissingGuids, type=recordType, attr=guidAttr
-                    )
-
-                if numMissingRecordNames:
-                    self.log.warn(
-                        "{count:d} {type} records are missing record name",
-                        count=numMissingRecordNames, type=recordType,
-                    )
-
-        self.log.debug(
-            "Principal property search matched {count} records",
-            count=len(records)
-        )
-        returnValue(records)
-
-
-    @inlineCallbacks
-    def getGroups(self, guids):
-        """
-        Returns a set of group records for the list of guids passed in.  For
-        any group that also contains subgroups, those subgroups' records are
-        also returned, and so on.
-        """
-
-        recordsByAlias = {}
-
-        groupsDN = self.typeDNs[self.recordType_groups]
-        memberIdAttr = self.groupSchema["memberIdAttr"]
-
-        # First time through the loop we search using the attribute
-        # corresponding to guid, since that is what the proxydb uses.
-        # Subsequent iterations fault in groups via the attribute
-        # used to identify members.
-        attributeToSearch = "guid"
-        valuesToFetch = guids
-
-        while valuesToFetch:
-            results = []
-
-            if attributeToSearch == "dn":
-                # Since DN can't be searched on in a filter we have to call
-                # recordsMatchingFields for *each* DN.
-                for value in valuesToFetch:
-                    fields = [["dn", value, False, "equals"]]
-                    result = (
-                        yield self.recordsMatchingFields(
-                            fields,
-                            recordType=self.recordType_groups
-                        )
-                    )
-                    results.extend(result)
-            else:
-                for batch in splitIntoBatches(valuesToFetch, self.batchSize):
-                    fields = []
-                    for value in batch:
-                        fields.append([attributeToSearch, value, False, "equals"])
-                    result = (
-                        yield self.recordsMatchingFields(
-                            fields,
-                            recordType=self.recordType_groups
-                        )
-                    )
-                    results.extend(result)
-
-            # Reset values for next iteration
-            valuesToFetch = set()
-
-            for record in results:
-                alias = record.cachedGroupsAlias()
-                if alias not in recordsByAlias:
-                    recordsByAlias[alias] = record
-
-                # record.memberGUIDs() contains the members of this group,
-                # but it might not be in guid form; it will be data from
-                # self.groupSchema["memberIdAttr"]
-                for memberAlias in record.memberGUIDs():
-                    if not memberIdAttr:
-                        # Members are identified by dn so we can take a short
-                        # cut:  we know we only need to examine groups, and
-                        # those will be children of the groups DN
-                        if not dnContainedIn(ldap.dn.str2dn(memberAlias),
-                                             groupsDN):
-                            continue
-                    if memberAlias not in recordsByAlias:
-                        valuesToFetch.add(memberAlias)
-
-            # Switch to the LDAP attribute used for identifying members
-            # for subsequent iterations.  If memberIdAttr is not specified
-            # in the config, we'll search using dn.
-            attributeToSearch = "memberIdAttr" if memberIdAttr else "dn"
-
-        returnValue(recordsByAlias.values())
-
-
-    def recordTypeForDN(self, dnStr):
-        """
-        Examine a DN to determine which recordType it belongs to
-        @param dnStr: DN to examine
-        @type dnStr: C{str}
-        @return: recordType string, or None if no match
-        """
-        dn = ldap.dn.str2dn(dnStr.lower())
-        for recordType in self.recordTypes():
-            base = self.typeDNs[recordType]  # already lowercase
-            if dnContainedIn(dn, base):
-                return recordType
-        return None
-
-
-
-def dnContainedIn(child, parent):
-    """
-    Return True if child dn is contained within parent dn, otherwise False.
-    """
-    return child[-len(parent):] == parent
-
-
-
-def normalizeDNstr(dnStr):
-    """
-    Convert to lowercase and remove extra whitespace
-    @param dnStr: dn
-    @type dnStr: C{str}
-    @return: normalized dn C{str}
-    """
-    return ' '.join(ldap.dn.dn2str(ldap.dn.str2dn(dnStr.lower())).split())
-
-
-
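Both helpers lean on python-ldap's DN parsing (ldap.dn.str2dn / dn2str). A quick sketch of the expected behaviour, assuming the two functions defined above are in scope:

    import ldap.dn

    normalizeDNstr("CN=Staff, OU=Groups, DC=Example, DC=com")
    # -> "cn=staff,ou=groups,dc=example,dc=com"

    parent = ldap.dn.str2dn("ou=groups,dc=example,dc=com")
    child = ldap.dn.str2dn(
        normalizeDNstr("cn=staff,ou=groups,dc=example,dc=com")
    )
    dnContainedIn(child, parent)
    # -> True: the child's trailing RDN components equal the parent's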
-def _convertValue(value, matchType):
-    if matchType == "starts-with":
-        value = "%s*" % (ldapEsc(value),)
-    elif matchType == "contains":
-        value = "*%s*" % (ldapEsc(value),)
-    # otherwise it's an exact match
-    else:
-        value = ldapEsc(value)
-    return value
-
-
-
-def buildFilter(recordType, mapping, fields, operand="or", optimizeMultiName=False):
-    """
-    Create an LDAP filter string from a list of tuples representing directory
-    attributes to search
-
-    mapping is a dict mapping internal directory attribute names to ldap names.
-    fields is a list of tuples...
-        (directory field name, value to search, caseless (ignored), matchType)
-    ...where matchType is one of "starts-with", "contains", "exact"
-    """
-
-    converted = []
-    combined = {}
-    for field, value, caseless, matchType in fields:
-        ldapField = mapping.get(field, None)
-        if ldapField:
-            combined.setdefault(field, []).append((value, caseless, matchType))
-            value = _convertValue(value, matchType)
-            if isinstance(ldapField, str):
-                converted.append("(%s=%s)" % (ldapField, value))
-            else:
-                subConverted = []
-                for lf in ldapField:
-                    subConverted.append("(%s=%s)" % (lf, value))
-                converted.append("(|%s)" % "".join(subConverted))
-
-    if len(converted) == 0:
-        return None
-
-    if optimizeMultiName and recordType in ("users", "groups"):
-        for field in [key for key in combined.keys() if key != "guid"]:
-            if len(combined.get(field, [])) > 1:
-                # Client is searching on more than one name -- interpret this as the user
-                # explicitly looking up a user by name (ignoring other record types), and
-                # try the various firstName/lastName permutations:
-                if recordType == "users":
-                    converted = []
-                    for firstName, _ignore_firstCaseless, firstMatchType in combined["firstName"]:
-                        for lastName, _ignore_lastCaseless, lastMatchType in combined["lastName"]:
-                            if firstName != lastName:
-                                firstValue = _convertValue(firstName, firstMatchType)
-                                lastValue = _convertValue(lastName, lastMatchType)
-                                converted.append(
-                                    "(&(%s=%s)(%s=%s))" %
-                                    (mapping["firstName"], firstValue,
-                                     mapping["lastName"], lastValue)
-                                )
-                else:
-                    return None
-
-    if len(converted) == 1:
-        filterstr = converted[0]
-    else:
-        operand = ("|" if operand == "or" else "&")
-        filterstr = "(%s%s)" % (operand, "".join(converted))
-
-    if filterstr:
-        # To reduce the amount of records returned, filter out the ones
-        # that don't have (possibly) required attribute values (record
-        # name, guid)
-        additional = []
-        for key in ("recordName", "guid"):
-            if key in mapping:
-                additional.append("(%s=*)" % (mapping.get(key),))
-        if additional:
-            filterstr = "(&%s%s)" % ("".join(additional), filterstr)
-
-    return filterstr
-
-
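A worked example of buildFilter() with an invented attribute mapping; the trailing comment shows the filter string that results:

    mapping = {
        "recordName": "uid",
        "guid": "entryUUID",
        "fullName": "cn",
        "emailAddresses": ["mail", "mailAlias"],
    }
    fields = [
        ("fullName", "mor", True, "starts-with"),
        ("emailAddresses", "mor", True, "starts-with"),
    ]

    buildFilter("users", mapping, fields, operand="or")
    # -> "(&(uid=*)(entryUUID=*)(|(cn=mor*)(|(mail=mor*)(mailAlias=mor*))))"
    #    i.e. the requested matches, restricted to entries that also carry
    #    the recordName and guid attributes.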
-
-def buildFilterFromTokens(recordType, mapping, tokens, extra=None):
-    """
-    Create an LDAP filter string from a list of query tokens.  Each token is
-    searched for in each LDAP attribute corresponding to "fullName" and
-    "emailAddresses" (could be multiple LDAP fields for either).
-
-    @param recordType: The recordType to use to customize the filter
-    @param mapping: A dict mapping internal directory attribute names to ldap names.
-    @type mapping: C{dict}
-    @param tokens: The list of tokens to search for
-    @type tokens: C{list}
-    @param extra: Extra filter to "and" into the final filter
-    @type extra: C{str} or None
-    @return: An LDAP filterstr
-    @rtype: C{str}
-    """
-
-    filterStr = None
-
-    # Eliminate any substring duplicates
-    tokenSet = set()
-    for token in tokens:
-        collision = False
-        for existing in tokenSet:
-            if token in existing:
-                collision = True
-                break
-            elif existing in token:
-                tokenSet.remove(existing)
-                break
-        if not collision:
-            tokenSet.add(token)
-
-    tokens = [ldapEsc(t) for t in tokenSet]
-    if len(tokens) == 0:
-        return None
-    tokens.sort()
-
-    attributes = [
-        ("fullName", "(%s=*%s*)"),
-        ("emailAddresses", "(%s=%s*)"),
-    ]
-
-    ldapFields = []
-    for attribute, template in attributes:
-        ldapField = mapping.get(attribute, None)
-        if ldapField:
-            if isinstance(ldapField, str):
-                ldapFields.append((ldapField, template))
-            else:
-                for lf in ldapField:
-                    ldapFields.append((lf, template))
-
-    if len(ldapFields) == 0:
-        return None
-
-    tokenFragments = []
-    if extra:
-        tokenFragments.append(extra)
-
-    for token in tokens:
-        fragments = []
-        for ldapField, template in ldapFields:
-            fragments.append(template % (ldapField, token))
-        if len(fragments) == 1:
-            tokenFragment = fragments[0]
-        else:
-            tokenFragment = "(|%s)" % ("".join(fragments),)
-        tokenFragments.append(tokenFragment)
-
-    if len(tokenFragments) == 1:
-        filterStr = tokenFragments[0]
-    else:
-        filterStr = "(&%s)" % ("".join(tokenFragments),)
-
-    return filterStr
-
-
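And a worked example of buildFilterFromTokens(), again with an invented mapping; each token has to match either the full name (substring) or one of the email attributes (prefix):

    mapping = {
        "fullName": "cn",
        "emailAddresses": ["mail", "mailAlias"],
    }

    buildFilterFromTokens("users", mapping, ["mor", "sag"])
    # -> "(&(|(cn=*mor*)(mail=mor*)(mailAlias=mor*))"
    #    "(|(cn=*sag*)(mail=sag*)(mailAlias=sag*)))"
    #    (shown wrapped; the real string is a single line)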
-
-class LdapDirectoryRecord(CachingDirectoryRecord):
-    """
-    LDAP implementation of L{IDirectoryRecord}.
-    """
-    def __init__(
-        self, service, recordType,
-        guid, shortNames, authIDs, fullName,
-        firstName, lastName, emailAddresses,
-        uid, dn, memberGUIDs, extProxies, extReadOnlyProxies,
-        attrs, **kwargs
-    ):
-        super(LdapDirectoryRecord, self).__init__(
-            service=service,
-            recordType=recordType,
-            guid=guid,
-            shortNames=shortNames,
-            authIDs=authIDs,
-            fullName=fullName,
-            firstName=firstName,
-            lastName=lastName,
-            emailAddresses=emailAddresses,
-            extProxies=extProxies,
-            extReadOnlyProxies=extReadOnlyProxies,
-            uid=uid,
-            **kwargs
-        )
-
-        # Save dn and attrs in case they are needed later
-        self.dn = dn
-        self.attrs = attrs
-
-        # Store copy of member guids
-        self._memberGUIDs = memberGUIDs
-
-        # Identifier of this record as a group member
-        memberIdAttr = self.service.groupSchema["memberIdAttr"]
-        if memberIdAttr:
-            self._memberId = self.service._getUniqueLdapAttribute(
-                attrs,
-                memberIdAttr
-            )
-        else:
-            self._memberId = normalizeDNstr(self.dn)
-
-
-    def members(self):
-        """ Return the records representing members of this group """
-
-        try:
-            return self._members_storage
-        except AttributeError:
-            self._members_storage = self._members()
-            return self._members_storage
-
-
-    def _members(self):
-        """ Fault in records for the members of this group """
-
-        memberIdAttr = self.service.groupSchema["memberIdAttr"]
-        results = []
-
-        for memberId in self._memberGUIDs:
-
-            if memberIdAttr:
-
-                base = self.service.base
-                filterstr = "(%s=%s)" % (memberIdAttr, ldapEsc(memberId))
-                self.log.debug(
-                    "Retrieving subtree of {base} with filter {filter}",
-                    base=ldap.dn.dn2str(base), filter=filterstr,
-                    system="LdapDirectoryService"
-                )
-                result = self.service.timedSearch(
-                    ldap.dn.dn2str(base),
-                    ldap.SCOPE_SUBTREE,
-                    filterstr=filterstr,
-                    attrlist=self.service.attrlist
-                )
-
-            else:  # using DN
-
-                self.log.debug(
-                    "Retrieving {id}.",
-                    id=memberId, system="LdapDirectoryService"
-                )
-                result = self.service.timedSearch(
-                    memberId,
-                    ldap.SCOPE_BASE, attrlist=self.service.attrlist
-                )
-
-            if result:
-
-                dn, attrs = result.pop()
-                dn = normalizeDNstr(dn)
-                self.log.debug("Retrieved: {dn} {attrs}", dn=dn, attrs=attrs)
-                recordType = self.service.recordTypeForDN(dn)
-                if recordType is None:
-                    self.log.error(
-                        "Unable to map {dn} to a record type", dn=dn
-                    )
-                    continue
-
-                shortName = self.service._getUniqueLdapAttribute(
-                    attrs,
-                    self.service.rdnSchema[recordType]["mapping"]["recordName"]
-                )
-
-                if shortName:
-                    record = self.service.recordWithShortName(
-                        recordType,
-                        shortName
-                    )
-                    if record:
-                        results.append(record)
-
-        return results
-
-
-    def groups(self):
-        """ Return the records representing groups this record is a member of """
-        try:
-            return self._groups_storage
-        except AttributeError:
-            self._groups_storage = self._groups()
-            return self._groups_storage
-
-
-    def _groups(self):
-        """ Fault in the groups of which this record is a member """
-
-        recordType = self.service.recordType_groups
-        base = self.service.typeDNs[recordType]
-
-        membersAttrs = []
-        if self.service.groupSchema["membersAttr"]:
-            membersAttrs.append(self.service.groupSchema["membersAttr"])
-        if self.service.groupSchema["nestedGroupsAttr"]:
-            membersAttrs.append(self.service.groupSchema["nestedGroupsAttr"])
-
-        if len(membersAttrs) == 1:
-            filterstr = "(%s=%s)" % (membersAttrs[0], self._memberId)
-        else:
-            filterstr = "(|%s)" % (
-                "".join(
-                    ["(%s=%s)" % (a, self._memberId) for a in membersAttrs]
-                ),
-            )
-        self.log.debug("Finding groups containing {id}", id=self._memberId)
-        groups = []
-
-        try:
-            results = self.service.timedSearch(
-                ldap.dn.dn2str(base),
-                ldap.SCOPE_SUBTREE,
-                filterstr=filterstr,
-                attrlist=self.service.attrlist
-            )
-
-            for dn, attrs in results:
-                dn = normalizeDNstr(dn)
-                shortName = self.service._getUniqueLdapAttribute(attrs, "cn")
-                self.log.debug(
-                    "{id} is a member of {shortName}",
-                    id=self._memberId, shortName=shortName
-                )
-                record = self.service.recordWithShortName(recordType, shortName)
-                if record is not None:
-                    groups.append(record)
-        except ldap.PROTOCOL_ERROR, e:
-            self.log.warn("{e}", e=e)
-
-        return groups
-
-
-    def cachedGroupsAlias(self):
-        """
-        See directory.py for full description
-
-        LDAP group members can be referred to by attributes other than guid.
-        _memberId is set to the appropriate value for looking up group membership.
-        """
-        return self._memberId
-
-
-    def memberGUIDs(self):
-        return set(self._memberGUIDs)
-
-
-    def verifyCredentials(self, credentials):
-        """ Supports PAM or simple LDAP bind for username+password """
-
-        if isinstance(credentials, UsernamePassword):
-
-            # TODO: investigate:
-            # Check that the username supplied matches one of the shortNames
-            # (The DCS might already enforce this constraint, not sure)
-            if credentials.username not in self.shortNames:
-                return False
-
-            # Check cached password
-            try:
-                if credentials.password == self.password:
-                    return True
-            except AttributeError:
-                pass
-
-            if self.service.authMethod.upper() == "PAM":
-                # Authenticate against PAM (UNTESTED)
-
-                if not pamAvailable:
-                    self.log.error("PAM module is not installed")
-                    raise DirectoryConfigurationError()
-
-                def pam_conv(auth, query_list, userData):
-                    return [(credentials.password, 0)]
-
-                auth = PAM.pam()
-                auth.start("caldav")
-                auth.set_item(PAM.PAM_USER, credentials.username)
-                auth.set_item(PAM.PAM_CONV, pam_conv)
-                try:
-                    auth.authenticate()
-                except PAM.error:
-                    return False
-                else:
-                    # Cache the password to avoid further LDAP queries
-                    self.password = credentials.password
-                    return True
-
-            elif self.service.authMethod.upper() == "LDAP":
-
-                # Authenticate against LDAP
-                try:
-                    self.service.authenticate(self.dn, credentials.password)
-                    # Cache the password to avoid further LDAP queries
-                    self.password = credentials.password
-                    return True
-
-                except ldap.INVALID_CREDENTIALS:
-                    self.log.info(
-                        "Invalid credentials for {dn}",
-                        dn=repr(self.dn), system="LdapDirectoryService"
-                    )
-                    return False
-
-            else:
-                self.log.error(
-                    "Unknown Authentication Method {method!r}",
-                    method=self.service.authMethod.upper()
-                )
-                raise DirectoryConfigurationError()
-
-        return super(LdapDirectoryRecord, self).verifyCredentials(credentials)
-
-
-
-class MissingRecordNameException(Exception):
-    """ Raised when LDAP record is missing recordName """
-    pass
-
-
-
-class MissingGuidException(Exception):
-    """ Raised when LDAP record is missing guidAttr and it's required """
-    pass

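For context on the LDAP code removed above: each search token is OR'd across the mapped LDAP attributes, and the per-token fragments are then AND'd together along with any extra clause. The following is a condensed sketch of that composition, assuming a simple string-valued mapping (the function name and the two attribute templates are illustrative, not the removed code verbatim):

def buildFilterFromTokens(mapping, tokens, extra=None):
    # Attribute templates as in the removed code; the "cn"/"mail" style
    # values come from the service's field mapping.
    attributes = [("fullName", "(%s=%s*)"), ("emailAddresses", "(%s=%s*)")]
    ldapFields = []
    for attribute, template in attributes:
        ldapField = mapping.get(attribute)
        if ldapField:
            ldapFields.append((ldapField, template))
    if not ldapFields:
        return None
    tokenFragments = [extra] if extra else []
    for token in tokens:
        # OR this token across every mapped attribute
        fragments = [template % (field, token) for field, template in ldapFields]
        if len(fragments) == 1:
            tokenFragments.append(fragments[0])
        else:
            tokenFragments.append("(|%s)" % ("".join(fragments),))
    # AND the per-token fragments together
    if len(tokenFragments) == 1:
        return tokenFragments[0]
    return "(&%s)" % ("".join(tokenFragments),)

# buildFilterFromTokens({"fullName": "cn", "emailAddresses": "mail"}, ["mor", "sag"])
# -> "(&(|(cn=mor*)(mail=mor*))(|(cn=sag*)(mail=sag*)))"
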
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/principal.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/principal.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/principal.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -28,49 +28,47 @@
     "DirectoryCalendarPrincipalResource",
 ]
 
-import uuid
 from urllib import unquote
 from urlparse import urlparse
+import uuid
 
+from twext.python.log import Logger
 from twisted.cred.credentials import UsernamePassword
-from twisted.python.failure import Failure
 from twisted.internet.defer import inlineCallbacks, returnValue
 from twisted.internet.defer import succeed
-from twisted.web.template import XMLFile, Element, renderer, tags
-from twistedcaldav.directory.util import NotFoundResource
-
-from txweb2.auth.digest import DigestedCredentials
-from txweb2 import responsecode
-from txweb2.http import HTTPError
-from txdav.xml import element as davxml
-from txweb2.dav.util import joinURL
-from txweb2.dav.noneprops import NonePropertyStore
-
-from twext.python.log import Logger
-
-
-try:
-    from twistedcaldav.authkerb import NegotiateCredentials
-    NegotiateCredentials # sigh, pyflakes
-except ImportError:
-    NegotiateCredentials = None
 from twisted.python.modules import getModule
-
+from twisted.web.template import XMLFile, Element, renderer
 from twistedcaldav import caldavxml, customxml
 from twistedcaldav.cache import DisabledCacheNotifier, PropfindCacheMixin
 from twistedcaldav.config import config
 from twistedcaldav.customxml import calendarserver_namespace
 from twistedcaldav.directory.augment import allowedAutoScheduleModes
 from twistedcaldav.directory.common import uidsResourceName
-from twistedcaldav.directory.directory import DirectoryService, DirectoryRecord
-from twistedcaldav.directory.idirectory import IDirectoryService
+from twistedcaldav.directory.util import NotFoundResource
+from twistedcaldav.directory.util import (
+    formatLink, formatLinks, formatPrincipals, formatList
+)
 from twistedcaldav.directory.wiki import getWikiACL
+from twistedcaldav.extensions import (
+    ReadOnlyResourceMixIn, DAVPrincipalResource, DAVResourceWithChildrenMixin
+)
 from twistedcaldav.extensions import DirectoryElement
-from twistedcaldav.extensions import ReadOnlyResourceMixIn, DAVPrincipalResource, \
-    DAVResourceWithChildrenMixin
 from twistedcaldav.resource import CalendarPrincipalCollectionResource, CalendarPrincipalResource
 from txdav.caldav.datastore.scheduling.cuaddress import normalizeCUAddr
+from txdav.who.directory import CalendarDirectoryRecordMixin
+from txdav.xml import element as davxml
+from txweb2 import responsecode
+from txweb2.auth.digest import DigestedCredentials
+from txweb2.dav.noneprops import NonePropertyStore
+from txweb2.dav.util import joinURL
+from txweb2.http import HTTPError
 
+try:
+    from twistedcaldav.authkerb import NegotiateCredentials
+    NegotiateCredentials  # sigh, pyflakes
+except ImportError:
+    NegotiateCredentials = None
+
 thisModule = getModule(__name__)
 log = Logger()
 
@@ -109,7 +107,7 @@
 def cuTypeConverter(cuType):
     """ Converts calendar user types to OD type names """
 
-    return "recordType", DirectoryRecord.fromCUType(cuType)
+    return "recordType", CalendarDirectoryRecordMixin.fromCUType(cuType)
 
 
 
@@ -127,7 +125,7 @@
     elif cua.startswith("/") or cua.startswith("http"):
         ignored, collection, id = cua.rsplit("/", 2)
         if collection == "__uids__":
-            return "guid", id
+            return "uid", id
         else:
             return "recordName", id
 
@@ -223,7 +221,7 @@
     _cs_ns = "http://calendarserver.org/ns/"
     _fieldMap = {
         ("DAV:" , "displayname") :
-            ("fullName", None, "Display Name", davxml.DisplayName),
+            ("fullNames", None, "Display Name", davxml.DisplayName),
         ("urn:ietf:params:xml:ns:caldav" , "calendar-user-type") :
             ("", cuTypeConverter, "Calendar User Type",
             caldavxml.CalendarUserType),
@@ -287,10 +285,16 @@
         #
         # Create children
         #
-        # MOVE2WHO - hack: appending "s" -- need mapping
-        for name, recordType in [(r.name + "s", r) for r in self.directory.recordTypes()]:
-            self.putChild(name, DirectoryPrincipalTypeProvisioningResource(self,
-                name, recordType))
+        for name, recordType in [
+            (self.directory.recordTypeToOldName(r), r)
+            for r in self.directory.recordTypes()
+        ]:
+            self.putChild(
+                name,
+                DirectoryPrincipalTypeProvisioningResource(
+                    self, name, recordType
+                )
+            )
 
         self.putChild(uidsResourceName, DirectoryPrincipalUIDProvisioningResource(self))
 
@@ -869,8 +873,12 @@
             #         returnValue(None)
 
             if name == "email-address-set":
+                try:
+                    emails = self.record.emailAddresses
+                except AttributeError:
+                    emails = []
                 returnValue(customxml.EmailAddressSet(
-                    *[customxml.EmailAddressProperty(addr) for addr in sorted(self.record.emailAddresses)]
+                    *[customxml.EmailAddressProperty(addr) for addr in sorted(emails)]
                 ))
 
         result = (yield super(DirectoryPrincipalResource, self).readProperty(property, request))
@@ -1463,71 +1471,3 @@
 
 
 
-def formatPrincipals(principals):
-    """
-    Format a list of principals into some twisted.web.template DOM objects.
-    """
-    def recordKey(principal):
-        try:
-            record = principal.record
-        except AttributeError:
-            try:
-                record = principal.parent.record
-            except:
-                return None
-        return (record.recordType, record.shortNames[0])
-
-
-    def describe(principal):
-        if hasattr(principal, "record"):
-            return " - %s" % (principal.record.displayName,)
-        else:
-            return ""
-
-    return formatList(
-        tags.a(href=principal.principalURL())(
-            str(principal), describe(principal)
-        )
-        for principal in sorted(principals, key=recordKey)
-    )
-
-
-
-def formatList(iterable):
-    """
-    Format a list of items as an iterable.
-    """
-    thereAreAny = False
-    try:
-        item = None
-        for item in iterable:
-            thereAreAny = True
-            yield " -> "
-            if item is None:
-                yield "None"
-            else:
-                yield item
-            yield "\n"
-    except Exception, e:
-        log.error("Exception while rendering: %s" % (e,))
-        Failure().printTraceback()
-        yield "  ** %s **: %s\n" % (e.__class__.__name__, e)
-    if not thereAreAny:
-        yield " '()\n"
-
-
-
-def formatLink(url):
-    """
-    Convert a URL string into some twisted.web.template DOM objects for
-    rendering as a link to itself.
-    """
-    return tags.a(href=url)(url)
-
-
-
-def formatLinks(urls):
-    """
-    Format a list of URL strings as a list of twisted.web.template DOM links.
-    """
-    return formatList(formatLink(link) for link in urls)

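The converter change above means calendar user addresses of the form /principals/__uids__/<id> now resolve against the record's uid field rather than its guid, while other principal collections still resolve by record name. A minimal illustration of that mapping (the helper name here is hypothetical, not part of this commit):

def cuAddressToField(cua):
    # Split off the last two path segments: the collection and the id.
    ignored, collection, id = cua.rsplit("/", 2)
    if collection == "__uids__":
        return "uid", id
    return "recordName", id

assert cuAddressToField("/principals/__uids__/user01") == ("uid", "user01")
assert cuAddressToField("/principals/users/user01") == ("recordName", "user01")
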
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/accounts.xml
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/accounts.xml	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/accounts.xml	2014-03-12 18:49:18 UTC (rev 12881)
@@ -135,8 +135,42 @@
     <short-name>delegategroup</short-name>
     <uid>00599DAF-3E75-42DD-9DB7-52617E79943F</uid>
     <full-name>Delegate Group</full-name>
-      <member-uid>delegateviagroup</member-uid>
+    <member-uid>delegateviagroup</member-uid>
   </record>
+
+  <record type="user">
+    <short-name>user01</short-name>
+    <uid>user01</uid>
+    <password>user01</password>
+    <full-name>User 01</full-name>
+    <email>user01@example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user02</short-name>
+    <uid>user02</uid>
+    <password>user02</password>
+    <full-name>User 02</full-name>
+    <email>user02@example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user03</short-name>
+    <uid>user03</uid>
+    <password>user03</password>
+    <full-name>User 03</full-name>
+    <email>user03@example.com</email>
+  </record>
+
+  <record type="user">
+    <short-name>user04</short-name>
+    <uid>user04</uid>
+    <password>user04</password>
+    <full-name>User 04</full-name>
+    <email>user04@example.com</email>
+  </record>
+
+  <!-- Repeat is not (yet?) supported in twext.who.xml
   <user repeat="100">
     <short-name>user%02d</short-name>
     <uid>user%02d</uid>
@@ -146,6 +180,8 @@
     <last-name>~9 User %02d</last-name>
     <email>~10@example.com</email>
   </user>
+  -->
+
   <record type="group">
     <short-name>managers</short-name>
     <uid>9FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1</uid>

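Because twext.who.xml does not (yet?) expand repeat blocks, the hundred templated users are replaced above by four explicit records. If more are needed later, equivalent <record> elements could be generated up front; a rough sketch assuming the same naming scheme (the helper below is not part of this commit):

def generateUserRecords(count=4):
    # Produce <record> elements matching the user01..userNN pattern above.
    records = []
    for i in range(1, count + 1):
        name = "user%02d" % (i,)
        records.append(
            "  <record type=\"user\">\n"
            "    <short-name>%s</short-name>\n"
            "    <uid>%s</uid>\n"
            "    <password>%s</password>\n"
            "    <full-name>User %02d</full-name>\n"
            "    <email>%s@example.com</email>\n"
            "  </record>\n" % (name, name, name, i, name)
        )
    return "".join(records)
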
Deleted: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_aggregate.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_aggregate.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_aggregate.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1,87 +0,0 @@
-##
-# Copyright (c) 2005-2014 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-from twistedcaldav.directory.xmlfile import XMLDirectoryService
-from twistedcaldav.directory.aggregate import AggregateDirectoryService
-
-from twistedcaldav.directory.test.test_xmlfile import xmlFile, augmentsFile
-
-import twistedcaldav.directory.test.util
-from twistedcaldav.directory import augment
-
-xml_prefix = "xml:"
-
-testServices = (
-    (xml_prefix   , twistedcaldav.directory.test.test_xmlfile.XMLFile),
-)
-
-class AggregatedDirectories (twistedcaldav.directory.test.util.DirectoryTestCase):
-    def _recordTypes(self):
-        recordTypes = set()
-        for prefix, testClass in testServices:
-            for recordType in testClass.recordTypes:
-                recordTypes.add(prefix + recordType)
-        return recordTypes
-
-
-    def _records(key): #@NoSelf
-        def get(self):
-            records = {}
-            for prefix, testClass in testServices:
-                for record, info in getattr(testClass, key).iteritems():
-                    info = dict(info)
-                    info["prefix"] = prefix
-                    info["members"] = tuple(
-                        (t, prefix + s) for t, s in info.get("members", {})
-                    )
-                    records[prefix + record] = info
-            return records
-        return get
-
-    recordTypes = property(_recordTypes)
-    users = property(_records("users"))
-    groups = property(_records("groups"))
-    locations = property(_records("locations"))
-    resources = property(_records("resources"))
-    addresses = property(_records("addresses"))
-
-    recordTypePrefixes = tuple(s[0] for s in testServices)
-
-
-    def service(self):
-        """
-        Returns an IDirectoryService.
-        """
-        xmlService = XMLDirectoryService(
-            {
-                'xmlFile' : xmlFile,
-                'augmentService' :
-                    augment.AugmentXMLDB(xmlFiles=(augmentsFile.path,)),
-            }
-        )
-        xmlService.recordTypePrefix = xml_prefix
-
-        return AggregateDirectoryService((xmlService,), None)
-
-
-    def test_setRealm(self):
-        """
-        setRealm gets propagated to nested services
-        """
-        aggregatedService = self.service()
-        aggregatedService.setRealm("foo.example.com")
-        for service in aggregatedService._recordTypes.values():
-            self.assertEquals("foo.example.com", service.realmName)

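The deleted test above exercised two aggregate behaviours: record types are exposed with a per-service prefix, and setRealm() is propagated to every nested service. A minimal sketch of that propagation, with illustrative names rather than the removed implementation:

class AggregateSketch(object):
    def __init__(self, services):
        # Map each record type to the nested service that provides it.
        self._recordTypes = {}
        for service in services:
            for recordType in service.recordTypes():
                self._recordTypes[recordType] = service

    def setRealm(self, realmName):
        # Push the realm down to every distinct nested service.
        for service in set(self._recordTypes.values()):
            service.setRealm(realmName)
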
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_augment.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_augment.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_augment.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -17,7 +17,6 @@
 from twistedcaldav.test.util import TestCase
 from twistedcaldav.directory.augment import AugmentXMLDB, AugmentSqliteDB, \
     AugmentPostgreSQLDB, AugmentRecord
-from twistedcaldav.directory.directory import DirectoryService
 from twisted.internet.defer import inlineCallbacks
 from twistedcaldav.directory.xmlaugmentsparser import XMLAugmentsParser
 import cStringIO
@@ -78,7 +77,7 @@
 class AugmentTests(TestCase):
 
     @inlineCallbacks
-    def _checkRecord(self, db, items, recordType=DirectoryService.recordType_users):
+    def _checkRecord(self, db, items, recordType="users"):
 
         record = (yield db.getAugmentRecord(items["uid"], recordType))
         self.assertTrue(record is not None, "Failed record uid: %s" % (items["uid"],))
@@ -88,7 +87,7 @@
 
 
     @inlineCallbacks
-    def _checkRecordExists(self, db, uid, recordType=DirectoryService.recordType_users):
+    def _checkRecordExists(self, db, uid, recordType="users"):
 
         record = (yield db.getAugmentRecord(uid, recordType))
         self.assertTrue(record is not None, "Failed record uid: %s" % (uid,))

Deleted: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_buildquery.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_buildquery.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_buildquery.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1,160 +0,0 @@
-##
-# Copyright (c) 2009-2014 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-try:
-    from calendarserver.platform.darwin.od import dsattributes
-except ImportError:
-    pass
-else:
-    from twistedcaldav.test.util import TestCase
-    from twistedcaldav.directory.appleopendirectory import (buildQueries,
-        buildLocalQueriesFromTokens, OpenDirectoryService, buildNestedQueryFromTokens)
-
-    class BuildQueryTests(TestCase):
-
-        def test_buildQuery(self):
-            self.assertEquals(
-                buildQueries(
-                    [dsattributes.kDSStdRecordTypeUsers],
-                    (
-                        ("firstName", "morgen", True, "starts-with"),
-                        ("lastName", "sagen", True, "starts-with"),
-                    ),
-                    OpenDirectoryService._ODFields
-                ),
-                {
-                    ('dsAttrTypeStandard:FirstName', 'morgen', True, 'starts-with') : [dsattributes.kDSStdRecordTypeUsers],
-                    ('dsAttrTypeStandard:LastName', 'sagen', True, 'starts-with') : [dsattributes.kDSStdRecordTypeUsers],
-                }
-            )
-            self.assertEquals(
-                buildQueries(
-                    [
-                        dsattributes.kDSStdRecordTypeUsers,
-                    ],
-                    (
-                        ("firstName", "morgen", True, "starts-with"),
-                        ("emailAddresses", "morgen", True, "contains"),
-                    ),
-                    OpenDirectoryService._ODFields
-                ),
-                {
-                    ('dsAttrTypeStandard:FirstName', 'morgen', True, 'starts-with') : [dsattributes.kDSStdRecordTypeUsers],
-                    ('dsAttrTypeStandard:EMailAddress', 'morgen', True, 'contains') : [dsattributes.kDSStdRecordTypeUsers],
-                }
-            )
-            self.assertEquals(
-                buildQueries(
-                    [
-                        dsattributes.kDSStdRecordTypeGroups,
-                    ],
-                    (
-                        ("firstName", "morgen", True, "starts-with"),
-                        ("lastName", "morgen", True, "starts-with"),
-                        ("fullName", "morgen", True, "starts-with"),
-                        ("emailAddresses", "morgen", True, "contains"),
-                    ),
-                    OpenDirectoryService._ODFields
-                ),
-                {
-                    ('dsAttrTypeStandard:RealName', 'morgen', True, 'starts-with') : [dsattributes.kDSStdRecordTypeGroups],
-                    ('dsAttrTypeStandard:EMailAddress', 'morgen', True, 'contains') : [dsattributes.kDSStdRecordTypeGroups],
-                }
-            )
-            self.assertEquals(
-                buildQueries(
-                    [
-                        dsattributes.kDSStdRecordTypeUsers,
-                        dsattributes.kDSStdRecordTypeGroups,
-                    ],
-                    (
-                        ("firstName", "morgen", True, "starts-with"),
-                        ("lastName", "morgen", True, "starts-with"),
-                        ("fullName", "morgen", True, "starts-with"),
-                        ("emailAddresses", "morgen", True, "contains"),
-                    ),
-                    OpenDirectoryService._ODFields
-                ),
-                {
-                    ('dsAttrTypeStandard:RealName', 'morgen', True, 'starts-with') : [dsattributes.kDSStdRecordTypeUsers, dsattributes.kDSStdRecordTypeGroups],
-                    ('dsAttrTypeStandard:EMailAddress', 'morgen', True, 'contains') : [dsattributes.kDSStdRecordTypeUsers, dsattributes.kDSStdRecordTypeGroups],
-                    ('dsAttrTypeStandard:FirstName', 'morgen', True, 'starts-with') : [dsattributes.kDSStdRecordTypeUsers],
-                    ('dsAttrTypeStandard:LastName', 'morgen', True, 'starts-with') : [dsattributes.kDSStdRecordTypeUsers],
-                }
-            )
-            self.assertEquals(
-                buildQueries(
-                    [
-                        dsattributes.kDSStdRecordTypeGroups,
-                    ],
-                    (
-                        ("firstName", "morgen", True, "starts-with"),
-                    ),
-                    OpenDirectoryService._ODFields
-                ),
-                {
-                }
-            )
-
-
-        def test_buildLocalQueryFromTokens(self):
-            """
-            Verify the generation of the simpler queries passed to /Local/Default
-            """
-            results = buildLocalQueriesFromTokens([], OpenDirectoryService._ODFields)
-            self.assertEquals(results, None)
-
-            results = buildLocalQueriesFromTokens(["foo"], OpenDirectoryService._ODFields)
-            self.assertEquals(
-                results[0].generate(),
-                "(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*))"
-            )
-
-            results = buildLocalQueriesFromTokens(["foo", "bar"], OpenDirectoryService._ODFields)
-            self.assertEquals(
-                results[0].generate(),
-                "(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*))"
-            )
-            self.assertEquals(
-                results[1].generate(),
-                "(|(dsAttrTypeStandard:RealName=*bar*)(dsAttrTypeStandard:EMailAddress=bar*))"
-            )
-
-
-        def test_buildNestedQueryFromTokens(self):
-            """
-            Verify the generation of the complex nested queries
-            """
-            query = buildNestedQueryFromTokens([], OpenDirectoryService._ODFields)
-            self.assertEquals(query, None)
-
-            query = buildNestedQueryFromTokens(["foo"], OpenDirectoryService._ODFields)
-            self.assertEquals(
-                query.generate(),
-                "(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*)(dsAttrTypeStandard:RecordName=foo*))"
-            )
-
-            query = buildNestedQueryFromTokens(["foo", "bar"], OpenDirectoryService._ODFields)
-            self.assertEquals(
-                query.generate(),
-                "(&(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*)(dsAttrTypeStandard:RecordName=foo*))(|(dsAttrTypeStandard:RealName=*bar*)(dsAttrTypeStandard:EMailAddress=bar*)(dsAttrTypeStandard:RecordName=bar*)))"
-            )
-
-            query = buildNestedQueryFromTokens(["foo", "bar", "baz"], OpenDirectoryService._ODFields)
-            self.assertEquals(
-                query.generate(),
-                "(&(|(dsAttrTypeStandard:RealName=*foo*)(dsAttrTypeStandard:EMailAddress=foo*)(dsAttrTypeStandard:RecordName=foo*))(|(dsAttrTypeStandard:RealName=*bar*)(dsAttrTypeStandard:EMailAddress=bar*)(dsAttrTypeStandard:RecordName=bar*))(|(dsAttrTypeStandard:RealName=*baz*)(dsAttrTypeStandard:EMailAddress=baz*)(dsAttrTypeStandard:RecordName=baz*)))"
-            )

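The deleted assertions above pin down the nesting rule for token searches: each token becomes an OR across RealName, EMailAddress and RecordName, and multiple tokens are wrapped in a single outer AND, while a lone token is left unwrapped. A rough sketch that reproduces those strings (illustrative only, not the removed buildNestedQueryFromTokens):

def nestedQueryFromTokens(tokens):
    if not tokens:
        return None
    # One OR fragment per token, across the three searchable attributes.
    parts = [
        "(|(dsAttrTypeStandard:RealName=*%s*)"
        "(dsAttrTypeStandard:EMailAddress=%s*)"
        "(dsAttrTypeStandard:RecordName=%s*))" % (t, t, t)
        for t in tokens
    ]
    # A single token stays unwrapped; multiple tokens are AND'd together.
    return parts[0] if len(parts) == 1 else "(&%s)" % ("".join(parts),)
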
Deleted: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_cachedirectory.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_cachedirectory.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_cachedirectory.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1,405 +0,0 @@
-#
-# Copyright (c) 2009-2014 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-from uuid import uuid4
-
-from twistedcaldav.directory.cachingdirectory import CachingDirectoryService
-from twistedcaldav.directory.cachingdirectory import CachingDirectoryRecord
-from twistedcaldav.directory.directory import DirectoryService
-from twistedcaldav.directory.util import uuidFromName
-from twistedcaldav.directory.augment import AugmentRecord
-from twistedcaldav.test.util import TestCase
-from twistedcaldav.config import config
-
-
-class TestDirectoryService (CachingDirectoryService):
-
-    realmName = "Dummy Realm"
-    baseGUID = "20CB1593-DE3F-4422-A7D7-BA9C2099B317"
-
-    def recordTypes(self):
-        return (
-            DirectoryService.recordType_users,
-            DirectoryService.recordType_groups,
-            DirectoryService.recordType_locations,
-            DirectoryService.recordType_resources,
-        )
-
-
-    def queryDirectory(self, recordTypes, indexType, indexKey):
-
-        self.queried = True
-
-        for recordType in recordTypes:
-            for record in self.fakerecords[recordType]:
-                cacheIt = False
-                if indexType in (
-                    CachingDirectoryService.INDEX_TYPE_SHORTNAME,
-                    CachingDirectoryService.INDEX_TYPE_CUA,
-                    CachingDirectoryService.INDEX_TYPE_AUTHID,
-                ):
-                    if indexKey in record[indexType]:
-                        cacheIt = True
-                else:
-                    if indexKey == record[indexType]:
-                        cacheIt = True
-
-                if cacheIt:
-                    cacheRecord = CachingDirectoryRecord(
-                        service=self,
-                        recordType=recordType,
-                        guid=record.get("guid"),
-                        shortNames=record.get("shortname"),
-                        authIDs=record.get("authid"),
-                        fullName=record.get("fullName"),
-                        firstName="",
-                        lastName="",
-                        emailAddresses=record.get("email"),
-                    )
-
-                    augmentRecord = AugmentRecord(
-                        uid=cacheRecord.guid,
-                        enabled=True,
-                        enabledForCalendaring=True,
-                    )
-
-                    cacheRecord.addAugmentInformation(augmentRecord)
-
-                    self.recordCacheForType(recordType).addRecord(cacheRecord,
-                        indexType, indexKey)
-
-
-
-class CachingDirectoryTest(TestCase):
-
-    baseGUID = str(uuid4())
-
-
-    def setUp(self):
-        super(CachingDirectoryTest, self).setUp()
-        self.service = TestDirectoryService()
-        self.service.queried = False
-
-
-    def loadRecords(self, records):
-        self.service._initCaches()
-        self.service.fakerecords = records
-        self.service.queried = False
-
-
-    def fakeRecord(
-        self,
-        fullName,
-        recordType,
-        shortNames=None,
-        guid=None,
-        emails=None,
-        members=None,
-        resourceInfo=None,
-        multinames=False
-    ):
-        if shortNames is None:
-            shortNames = (self.shortNameForFullName(fullName),)
-            if multinames:
-                shortNames += (fullName,)
-
-        if guid is None:
-            guid = self.guidForShortName(shortNames[0], recordType=recordType)
-        else:
-            guid = guid.lower()
-
-        if emails is None:
-            emails = ("%s at example.com" % (shortNames[0],),)
-
-        attrs = {
-            "fullName": fullName,
-            "guid": guid,
-            "shortname": shortNames,
-            "email": emails,
-            "cua": tuple(["mailto:%s" % email for email in emails]),
-            "authid": tuple(["Kerberos:%s" % email for email in emails])
-        }
-
-        if members:
-            attrs["members"] = members
-
-        if resourceInfo:
-            attrs["resourceInfo"] = resourceInfo
-
-        return attrs
-
-
-    def shortNameForFullName(self, fullName):
-        return fullName.lower().replace(" ", "")
-
-
-    def guidForShortName(self, shortName, recordType=""):
-        return uuidFromName(self.baseGUID, "%s%s" % (recordType, shortName))
-
-
-    def dummyRecords(self):
-        SIZE = 10
-        records = {
-            DirectoryService.recordType_users: [
-                self.fakeRecord("User %02d" % x, DirectoryService.recordType_users, multinames=(x > 5)) for x in range(1, SIZE + 1)
-            ],
-            DirectoryService.recordType_groups: [
-                self.fakeRecord("Group %02d" % x, DirectoryService.recordType_groups) for x in range(1, SIZE + 1)
-            ],
-            DirectoryService.recordType_resources: [
-                self.fakeRecord("Resource %02d" % x, DirectoryService.recordType_resources) for x in range(1, SIZE + 1)
-            ],
-            DirectoryService.recordType_locations: [
-                self.fakeRecord("Location %02d" % x, DirectoryService.recordType_locations) for x in range(1, SIZE + 1)
-            ],
-        }
-        # Add duplicate shortnames
-        records[DirectoryService.recordType_users].append(self.fakeRecord("Duplicate", DirectoryService.recordType_users, multinames=True))
-        records[DirectoryService.recordType_groups].append(self.fakeRecord("Duplicate", DirectoryService.recordType_groups, multinames=True))
-        records[DirectoryService.recordType_resources].append(self.fakeRecord("Duplicate", DirectoryService.recordType_resources, multinames=True))
-        records[DirectoryService.recordType_locations].append(self.fakeRecord("Duplicate", DirectoryService.recordType_locations, multinames=True))
-
-        self.loadRecords(records)
-
-
-    def verifyRecords(self, recordType, expectedGUIDs):
-
-        records = self.service.listRecords(recordType)
-        recordGUIDs = set([record.guid for record in records])
-        self.assertEqual(recordGUIDs, expectedGUIDs)
-
-
-
-class GUIDLookups(CachingDirectoryTest):
-
-    def test_emptylist(self):
-        self.dummyRecords()
-
-        self.verifyRecords(DirectoryService.recordType_users, set())
-        self.verifyRecords(DirectoryService.recordType_groups, set())
-        self.verifyRecords(DirectoryService.recordType_resources, set())
-        self.verifyRecords(DirectoryService.recordType_locations, set())
-
-
-    def test_cacheoneguid(self):
-        self.dummyRecords()
-
-        self.assertTrue(self.service.recordWithGUID(self.guidForShortName("user01", recordType=DirectoryService.recordType_users)) is not None)
-        self.assertTrue(self.service.queried)
-        self.verifyRecords(DirectoryService.recordType_users, set((
-            self.guidForShortName("user01", recordType=DirectoryService.recordType_users),
-        )))
-        self.verifyRecords(DirectoryService.recordType_groups, set())
-        self.verifyRecords(DirectoryService.recordType_resources, set())
-        self.verifyRecords(DirectoryService.recordType_locations, set())
-
-        # Make sure it really is cached and won't cause another query
-        self.service.queried = False
-        self.assertTrue(self.service.recordWithGUID(self.guidForShortName("user01", recordType=DirectoryService.recordType_users)) is not None)
-        self.assertFalse(self.service.queried)
-
-        # Make sure guid is case-insensitive
-        self.assertTrue(self.service.recordWithGUID(self.guidForShortName("user01", recordType=DirectoryService.recordType_users).lower()) is not None)
-
-
-    def test_cacheoneshortname(self):
-        self.dummyRecords()
-
-        self.assertTrue(self.service.recordWithShortName(
-            DirectoryService.recordType_users,
-            "user02"
-        ) is not None)
-        self.assertTrue(self.service.queried)
-        self.verifyRecords(DirectoryService.recordType_users, set((
-            self.guidForShortName("user02", recordType=DirectoryService.recordType_users),
-        )))
-        self.verifyRecords(DirectoryService.recordType_groups, set())
-        self.verifyRecords(DirectoryService.recordType_resources, set())
-        self.verifyRecords(DirectoryService.recordType_locations, set())
-
-        # Make sure it really is cached and won't cause another query
-        self.service.queried = False
-        self.assertTrue(self.service.recordWithShortName(
-            DirectoryService.recordType_users,
-            "user02"
-        ) is not None)
-        self.assertFalse(self.service.queried)
-
-
-    def test_cacheoneemail(self):
-        self.dummyRecords()
-
-        self.assertTrue(self.service.recordWithCalendarUserAddress(
-            "mailto:user03 at example.com"
-        ) is not None)
-        self.assertTrue(self.service.queried)
-        self.verifyRecords(DirectoryService.recordType_users, set((
-            self.guidForShortName("user03", recordType=DirectoryService.recordType_users),
-        )))
-        self.verifyRecords(DirectoryService.recordType_groups, set())
-        self.verifyRecords(DirectoryService.recordType_resources, set())
-        self.verifyRecords(DirectoryService.recordType_locations, set())
-
-        # Make sure it really is cached and won't cause another query
-        self.service.queried = False
-        self.assertTrue(self.service.recordWithCalendarUserAddress(
-            "mailto:user03 at example.com"
-        ) is not None)
-        self.assertFalse(self.service.queried)
-
-
-    def test_cacheonePrincipalsURLWithUIDS(self):
-        self.dummyRecords()
-
-        guid = self.guidForShortName("user03", "users")
-        self.assertTrue(self.service.recordWithCalendarUserAddress(
-            "/principals/__uids__/%s" % (guid,)
-        ) is not None)
-        self.assertTrue(self.service.queried)
-        self.verifyRecords(DirectoryService.recordType_users, set((
-            self.guidForShortName("user03", recordType=DirectoryService.recordType_users),
-        )))
-        self.verifyRecords(DirectoryService.recordType_groups, set())
-        self.verifyRecords(DirectoryService.recordType_resources, set())
-        self.verifyRecords(DirectoryService.recordType_locations, set())
-
-        # Make sure it really is cached and won't cause another query
-        self.service.queried = False
-        self.assertTrue(self.service.recordWithCalendarUserAddress(
-            "/principals/__uids__/%s" % (guid,)
-        ) is not None)
-        self.assertFalse(self.service.queried)
-
-
-    def test_cacheonePrincipalsURLWithUsers(self):
-        self.dummyRecords()
-
-        self.assertTrue(self.service.recordWithCalendarUserAddress(
-            "/principals/users/user03"
-        ) is not None)
-        self.assertTrue(self.service.queried)
-        self.verifyRecords(DirectoryService.recordType_users, set((
-            self.guidForShortName("user03", recordType=DirectoryService.recordType_users),
-        )))
-        self.verifyRecords(DirectoryService.recordType_groups, set())
-        self.verifyRecords(DirectoryService.recordType_resources, set())
-        self.verifyRecords(DirectoryService.recordType_locations, set())
-
-        # Make sure it really is cached and won't cause another query
-        self.service.queried = False
-        self.assertTrue(self.service.recordWithCalendarUserAddress(
-            "/principals/users/user03"
-        ) is not None)
-        self.assertFalse(self.service.queried)
-
-
-    def test_cacheoneauthid(self):
-        self.dummyRecords()
-
-        self.assertTrue(self.service.recordWithAuthID(
-            "Kerberos:user03 at example.com"
-        ) is not None)
-        self.assertTrue(self.service.queried)
-        self.verifyRecords(DirectoryService.recordType_users, set((
-            self.guidForShortName("user03", recordType=DirectoryService.recordType_users),
-        )))
-        self.verifyRecords(DirectoryService.recordType_groups, set())
-        self.verifyRecords(DirectoryService.recordType_resources, set())
-        self.verifyRecords(DirectoryService.recordType_locations, set())
-
-        # Make sure it really is cached and won't cause another query
-        self.service.queried = False
-        self.assertTrue(self.service.recordWithAuthID(
-            "Kerberos:user03 at example.com"
-        ) is not None)
-        self.assertFalse(self.service.queried)
-
-
-    def test_negativeCaching(self):
-        self.dummyRecords()
-
-        # If negativeCaching is off, each miss will result in a call to
-        # queryDirectory( )
-        self.service.negativeCaching = False
-
-        self.service.queried = False
-        self.assertEquals(self.service.recordWithGUID(self.guidForShortName("missing")), None)
-        self.assertTrue(self.service.queried)
-
-        self.service.queried = False
-        self.assertEquals(self.service.recordWithGUID(self.guidForShortName("missing")), None)
-        self.assertTrue(self.service.queried)
-
-        # However, if negativeCaching is on, a miss is recorded as such,
-        # preventing a similar queryDirectory( ) until cacheTimeout passes
-        self.service.negativeCaching = True
-
-        self.service.queried = False
-        self.assertEquals(self.service.recordWithGUID(self.guidForShortName("missing")), None)
-        self.assertTrue(self.service.queried)
-
-        self.service.queried = False
-        self.assertEquals(self.service.recordWithGUID(self.guidForShortName("missing")), None)
-        self.assertFalse(self.service.queried)
-
-        # Simulate time passing by clearing the negative timestamp for this
-        # entry, then try again, this time queryDirectory( ) is called
-        self.service._disabledKeys[self.service.INDEX_TYPE_GUID][self.guidForShortName("missing")] = 0
-
-        self.service.queried = False
-        self.assertEquals(self.service.recordWithGUID(self.guidForShortName("missing")), None)
-        self.assertTrue(self.service.queried)
-
-
-    def test_duplicateShortNames(self):
-        """
-        Verify that when looking up records having duplicate short-names, the record of the
-        proper type is returned
-        """
-
-        self.patch(config.Memcached.Pools.Default, "ClientEnabled", True)
-        self.dummyRecords()
-
-        record = self.service.recordWithShortName(DirectoryService.recordType_users,
-            "Duplicate")
-        self.assertEquals(record.recordType, DirectoryService.recordType_users)
-
-        record = self.service.recordWithShortName(DirectoryService.recordType_groups,
-            "Duplicate")
-        self.assertEquals(record.recordType, DirectoryService.recordType_groups)
-
-        record = self.service.recordWithShortName(DirectoryService.recordType_resources,
-            "Duplicate")
-        self.assertEquals(record.recordType, DirectoryService.recordType_resources)
-
-        record = self.service.recordWithShortName(DirectoryService.recordType_locations,
-            "Duplicate")
-        self.assertEquals(record.recordType, DirectoryService.recordType_locations)
-
-
-    def test_generateMemcacheKey(self):
-        """
-        Verify keys are correctly generated based on the index type -- if index type is
-        short-name, then the recordtype is encoded into the key.
-        """
-        self.assertEquals(
-            self.service.generateMemcacheKey(self.service.INDEX_TYPE_GUID, "foo", "users"),
-            "dir|v2|20CB1593-DE3F-4422-A7D7-BA9C2099B317|guid|foo",
-        )
-        self.assertEquals(
-            self.service.generateMemcacheKey(self.service.INDEX_TYPE_SHORTNAME, "foo", "users"),
-            "dir|v2|20CB1593-DE3F-4422-A7D7-BA9C2099B317|users|shortname|foo",
-        )

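The final test deleted above documents the memcache key layout used by the caching directory: short-name lookups encode the record type, since short names are only unique within a type, while guid lookups do not. A minimal sketch of that scheme (the function below is illustrative, not the removed generateMemcacheKey):

def generateMemcacheKey(baseGUID, indexType, indexKey, recordType):
    if indexType == "shortname":
        # Short names collide across types, so the record type is part of the key.
        return "dir|v2|%s|%s|%s|%s" % (baseGUID, recordType, indexType, indexKey)
    return "dir|v2|%s|%s|%s" % (baseGUID, indexType, indexKey)

# generateMemcacheKey(baseGUID, "guid", "foo", "users")
#   -> "dir|v2|<baseGUID>|guid|foo"
# generateMemcacheKey(baseGUID, "shortname", "foo", "users")
#   -> "dir|v2|<baseGUID>|users|shortname|foo"
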
Deleted: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_directory.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_directory.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_directory.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1,1201 +0,0 @@
-##
-# Copyright (c) 2011-2014 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-from twisted.internet.defer import inlineCallbacks
-from twisted.python.filepath import FilePath
-
-from twistedcaldav.test.util import TestCase
-from twistedcaldav.test.util import xmlFile, augmentsFile, proxiesFile, dirTest
-from twistedcaldav.config import config
-from twistedcaldav.directory.directory import DirectoryService, DirectoryRecord, GroupMembershipCache, GroupMembershipCacheUpdater, diffAssignments
-from twistedcaldav.directory.xmlfile import XMLDirectoryService
-from twistedcaldav.directory.calendaruserproxyloader import XMLCalendarUserProxyLoader
-from twistedcaldav.directory import augment, calendaruserproxy
-from twistedcaldav.directory.util import normalizeUUID
-from twistedcaldav.directory.principal import DirectoryPrincipalProvisioningResource
-
-import cPickle as pickle
-import uuid
-
-def StubCheckSACL(cls, username, service):
-    services = {
-        "calendar" : ["amanda", "betty"],
-        "addressbook" : ["amanda", "carlene"],
-    }
-    if username in services[service]:
-        return 0
-    return 1
-
-
-
-class SACLTests(TestCase):
-
-    def setUp(self):
-        self.patch(DirectoryRecord, "CheckSACL", StubCheckSACL)
-        self.patch(config, "EnableSACLs", True)
-        self.service = DirectoryService()
-        self.service.setRealm("test")
-        self.service.baseGUID = "0E8E6EC2-8E52-4FF3-8F62-6F398B08A498"
-
-
-    def test_applySACLs(self):
-        """
-        Users not in calendar SACL will have enabledForCalendaring set to
-        False.
-        Users not in addressbook SACL will have enabledForAddressBooks set to
-        False.
-        """
-
-        data = [
-            ("amanda", True, True,),
-            ("betty", True, False,),
-            ("carlene", False, True,),
-            ("daniel", False, False,),
-        ]
-        for username, cal, ab in data:
-            record = DirectoryRecord(self.service, "users", None, (username,),
-                enabledForCalendaring=True, enabledForAddressBooks=True)
-            record.applySACLs()
-            self.assertEquals(record.enabledForCalendaring, cal)
-            self.assertEquals(record.enabledForAddressBooks, ab)
-
-
-
-class GroupMembershipTests (TestCase):
-
-    @inlineCallbacks
-    def setUp(self):
-        super(GroupMembershipTests, self).setUp()
-
-        self.directoryFixture.addDirectoryService(XMLDirectoryService(
-            {
-                'xmlFile' : xmlFile,
-                'augmentService' :
-                    augment.AugmentXMLDB(xmlFiles=(augmentsFile.path,)),
-            }
-        ))
-        calendaruserproxy.ProxyDBService = calendaruserproxy.ProxySqliteDB("proxies.sqlite")
-
-        # Set up a principals hierarchy for each service we're testing with
-        self.principalRootResources = {}
-        name = self.directoryService.__class__.__name__
-        url = "/" + name + "/"
-
-        provisioningResource = DirectoryPrincipalProvisioningResource(url, self.directoryService)
-
-        self.site.resource.putChild(name, provisioningResource)
-
-        self.principalRootResources[self.directoryService.__class__.__name__] = provisioningResource
-
-        yield XMLCalendarUserProxyLoader(proxiesFile.path).updateProxyDB()
-
-
-    def tearDown(self):
-        """ Empty the proxy db between tests """
-        return calendaruserproxy.ProxyDBService.clean() #@UndefinedVariable
-
-
-    def _getPrincipalByShortName(self, type, name):
-        provisioningResource = self.principalRootResources[self.directoryService.__class__.__name__]
-        return provisioningResource.principalForShortName(type, name)
-
-
-    def _updateMethod(self):
-        """
-        Update a counter in the following test
-        """
-        self.count += 1
-
-
-    def test_expandedMembers(self):
-        """
-        Make sure expandedMembers( ) returns a complete, flattened set of
-        members of a group, including all sub-groups.
-        """
-        bothCoasts = self.directoryService.recordWithShortName(
-            DirectoryService.recordType_groups, "both_coasts")
-        self.assertEquals(
-            set([r.guid for r in bothCoasts.expandedMembers()]),
-            set(['8B4288F6-CC82-491D-8EF9-642EF4F3E7D0',
-                 '6423F94A-6B76-4A3A-815B-D52CFD77935D',
-                 '5A985493-EE2C-4665-94CF-4DFEA3A89500',
-                 '5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1',
-                 'left_coast',
-                 'right_coast'])
-        )
-
-
-    @inlineCallbacks
-    def test_groupMembershipCache(self):
-        """
-        Ensure we get back what we put in
-        """
-        cache = GroupMembershipCache("ProxyDB", expireSeconds=10)
-
-        yield cache.setGroupsFor("a", set(["b", "c", "d"])) # a is in b, c, d
-        members = (yield cache.getGroupsFor("a"))
-        self.assertEquals(members, set(["b", "c", "d"]))
-
-        yield cache.setGroupsFor("b", set()) # b not in any groups
-        members = (yield cache.getGroupsFor("b"))
-        self.assertEquals(members, set())
-
-        cache._memcacheProtocol.advanceClock(10)
-
-        members = (yield cache.getGroupsFor("a")) # has expired
-        self.assertEquals(members, set())
-
-
-    @inlineCallbacks
-    def test_groupMembershipCacheUpdater(self):
-        """
-        Let the GroupMembershipCacheUpdater populate the cache, then make
-        sure proxyFor( ) and groupMemberships( ) work from the cache
-        """
-        cache = GroupMembershipCache("ProxyDB", expireSeconds=60)
-        # Having a groupMembershipCache assigned to the directory service is the
-        # trigger to use such a cache:
-        self.directoryService.groupMembershipCache = cache
-
-        updater = GroupMembershipCacheUpdater(
-            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30, 30,
-            cache=cache, useExternalProxies=False)
-
-        # Exercise getGroups()
-        groups, aliases = (yield updater.getGroups())
-        self.assertEquals(
-            groups,
-            {
-                '00599DAF-3E75-42DD-9DB7-52617E79943F':
-                    set(['46D9D716-CBEE-490F-907A-66FA6C3767FF']),
-                '9FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1':
-                    set(['8B4288F6-CC82-491D-8EF9-642EF4F3E7D0']),
-                'admin':
-                    set(['9FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1']),
-                'both_coasts':
-                    set(['left_coast', 'right_coast']),
-                'grunts':
-                    set(['5A985493-EE2C-4665-94CF-4DFEA3A89500',
-                         '5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1',
-                         '6423F94A-6B76-4A3A-815B-D52CFD77935D']),
-                'left_coast':
-                    set(['5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1',
-                         '6423F94A-6B76-4A3A-815B-D52CFD77935D',
-                         '8B4288F6-CC82-491D-8EF9-642EF4F3E7D0']),
-                'non_calendar_group':
-                    set(['5A985493-EE2C-4665-94CF-4DFEA3A89500',
-                         '8B4288F6-CC82-491D-8EF9-642EF4F3E7D0']),
-                'recursive1_coasts':
-                    set(['6423F94A-6B76-4A3A-815B-D52CFD77935D',
-                         'recursive2_coasts']),
-                'recursive2_coasts':
-                    set(['5A985493-EE2C-4665-94CF-4DFEA3A89500',
-                         'recursive1_coasts']),
-                'right_coast':
-                    set(['5A985493-EE2C-4665-94CF-4DFEA3A89500'])
-            }
-        )
-        self.assertEquals(
-            aliases,
-            {
-                '00599DAF-3E75-42DD-9DB7-52617E79943F':
-                    '00599DAF-3E75-42DD-9DB7-52617E79943F',
-                '9FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1':
-                    '9FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1',
-                 'admin': 'admin',
-                 'both_coasts': 'both_coasts',
-                 'grunts': 'grunts',
-                 'left_coast': 'left_coast',
-                 'non_calendar_group': 'non_calendar_group',
-                 'recursive1_coasts': 'recursive1_coasts',
-                 'recursive2_coasts': 'recursive2_coasts',
-                 'right_coast': 'right_coast'
-            }
-        )
-
-        # Exercise expandedMembers()
-        self.assertEquals(
-            updater.expandedMembers(groups, "both_coasts"),
-            set(['5A985493-EE2C-4665-94CF-4DFEA3A89500',
-                 '5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1',
-                 '6423F94A-6B76-4A3A-815B-D52CFD77935D',
-                 '8B4288F6-CC82-491D-8EF9-642EF4F3E7D0',
-                 'left_coast',
-                 'right_coast']
-            )
-        )
-
-        # Prevent an update by locking the cache
-        acquiredLock = (yield cache.acquireLock())
-        self.assertTrue(acquiredLock)
-        self.assertEquals((False, 0, 0), (yield updater.updateCache()))
-
-        # You can't lock when already locked:
-        acquiredLockAgain = (yield cache.acquireLock())
-        self.assertFalse(acquiredLockAgain)
-
-        # Allow an update by unlocking the cache
-        yield cache.releaseLock()
-
-        self.assertEquals((False, 9, 9), (yield updater.updateCache()))
-
-        # Verify cache is populated:
-        self.assertTrue((yield cache.isPopulated()))
-
-        delegates = (
-
-            # record name
-            # read-write delegators
-            # read-only delegators
-            # groups delegate is in (restricted to only those groups
-            #   participating in delegation)
-
-            ("wsanchez",
-             set(["mercury", "apollo", "orion", "gemini"]),
-             set(["non_calendar_proxy"]),
-             set(['left_coast',
-                  'both_coasts',
-                  'recursive1_coasts',
-                  'recursive2_coasts',
-                  'gemini#calendar-proxy-write',
-                ]),
-            ),
-            ("cdaboo",
-             set(["apollo", "orion", "non_calendar_proxy"]),
-             set(["non_calendar_proxy"]),
-             set(['both_coasts',
-                  'non_calendar_group',
-                  'recursive1_coasts',
-                  'recursive2_coasts',
-                ]),
-            ),
-            ("lecroy",
-             set(["apollo", "mercury", "non_calendar_proxy"]),
-             set(),
-             set(['both_coasts',
-                  'left_coast',
-                  'non_calendar_group',
-                ]),
-            ),
-            ("usera",
-             set(),
-             set(),
-             set(),
-            ),
-            ("userb",
-             set(['7423F94A-6B76-4A3A-815B-D52CFD77935D']),
-             set(),
-             set(['7423F94A-6B76-4A3A-815B-D52CFD77935D#calendar-proxy-write']),
-            ),
-            ("userc",
-             set(['7423F94A-6B76-4A3A-815B-D52CFD77935D']),
-             set(),
-             set(['7423F94A-6B76-4A3A-815B-D52CFD77935D#calendar-proxy-write']),
-            ),
-        )
-
-        for name, write, read, groups in delegates:
-            delegate = self._getPrincipalByShortName(DirectoryService.recordType_users, name)
-
-            proxyFor = (yield delegate.proxyFor(True))
-            self.assertEquals(
-                set([p.record.guid for p in proxyFor]),
-                write,
-            )
-            proxyFor = (yield delegate.proxyFor(False))
-            self.assertEquals(
-                set([p.record.guid for p in proxyFor]),
-                read,
-            )
-            groupsIn = (yield delegate.groupMemberships())
-            uids = set()
-            for group in groupsIn:
-                try:
-                    uid = group.uid # a sub-principal
-                except AttributeError:
-                    uid = group.record.guid # a regular group
-                uids.add(uid)
-            self.assertEquals(
-                set(uids),
-                groups,
-            )
-
-        # Verify CalendarUserProxyPrincipalResource.containsPrincipal( ) works
-        delegator = self._getPrincipalByShortName(DirectoryService.recordType_locations, "mercury")
-        proxyPrincipal = delegator.getChild("calendar-proxy-write")
-        for expected, name in [(True, "wsanchez"), (False, "cdaboo")]:
-            delegate = self._getPrincipalByShortName(DirectoryService.recordType_users, name)
-            self.assertEquals(expected, (yield proxyPrincipal.containsPrincipal(delegate)))
-
-        # Verify that principals who were previously members of delegated-to groups but
-        # are no longer members have their proxyFor info cleaned out of the cache:
-        # Remove wsanchez from all groups in the directory, run the updater, then check
-        # that wsanchez is only a proxy for gemini (since that assignment does not involve groups)
-        self.directoryService.xmlFile = dirTest.child("accounts-modified.xml")
-        self.directoryService._alwaysStat = True
-        self.assertEquals((False, 8, 1), (yield updater.updateCache()))
-        delegate = self._getPrincipalByShortName(DirectoryService.recordType_users, "wsanchez")
-        proxyFor = (yield delegate.proxyFor(True))
-        self.assertEquals(
-          set([p.record.guid for p in proxyFor]),
-          set(['gemini'])
-        )
-
-
-    @inlineCallbacks
-    def test_groupMembershipCacheUpdaterExternalProxies(self):
-        """
-        Exercise external proxy assignment support (assignments come from the
-        directory service itself)
-        """
-        cache = GroupMembershipCache("ProxyDB", expireSeconds=60)
-        # Having a groupMembershipCache assigned to the directory service is the
-        # trigger to use such a cache:
-        self.directoryService.groupMembershipCache = cache
-
-        # This time, we're setting some external proxy assignments for the
-        # "transporter" resource...
-        def fakeExternalProxies():
-            return [
-                (
-                    "transporter#calendar-proxy-write",
-                    set(["6423F94A-6B76-4A3A-815B-D52CFD77935D",
-                         "8B4288F6-CC82-491D-8EF9-642EF4F3E7D0"])
-                ),
-                (
-                    "transporter#calendar-proxy-read",
-                    set(["5A985493-EE2C-4665-94CF-4DFEA3A89500"])
-                ),
-            ]
-
-        updater = GroupMembershipCacheUpdater(
-            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30, 30,
-            cache=cache, useExternalProxies=True,
-            externalProxiesSource=fakeExternalProxies)
-
-        yield updater.updateCache()
-
-        delegates = (
-
-            # record name
-            # read-write delegators
-            # read-only delegators
-            # groups delegate is in (restricted to only those groups
-            #   participating in delegation)
-
-            ("wsanchez",
-             set(["mercury", "apollo", "orion", "gemini", "transporter"]),
-             set(["non_calendar_proxy"]),
-             set(['left_coast',
-                  'both_coasts',
-                  'recursive1_coasts',
-                  'recursive2_coasts',
-                  'gemini#calendar-proxy-write',
-                  'transporter#calendar-proxy-write',
-                ]),
-            ),
-            ("cdaboo",
-             set(["apollo", "orion", "non_calendar_proxy"]),
-             set(["non_calendar_proxy", "transporter"]),
-             set(['both_coasts',
-                  'non_calendar_group',
-                  'recursive1_coasts',
-                  'recursive2_coasts',
-                  'transporter#calendar-proxy-read',
-                ]),
-            ),
-            ("lecroy",
-             set(["apollo", "mercury", "non_calendar_proxy", "transporter"]),
-             set(),
-             set(['both_coasts',
-                  'left_coast',
-                  'non_calendar_group',
-                  'transporter#calendar-proxy-write',
-                ]),
-            ),
-        )
-
-        for name, write, read, groups in delegates:
-            delegate = self._getPrincipalByShortName(DirectoryService.recordType_users, name)
-
-            proxyFor = (yield delegate.proxyFor(True))
-            self.assertEquals(
-                set([p.record.guid for p in proxyFor]),
-                write,
-            )
-            proxyFor = (yield delegate.proxyFor(False))
-            self.assertEquals(
-                set([p.record.guid for p in proxyFor]),
-                read,
-            )
-            groupsIn = (yield delegate.groupMemberships())
-            uids = set()
-            for group in groupsIn:
-                try:
-                    uid = group.uid # a sub-principal
-                except AttributeError:
-                    uid = group.record.guid # a regular group
-                uids.add(uid)
-            self.assertEquals(
-                set(uids),
-                groups,
-            )
-
-        #
-        # Now remove two external assignments, and those should take effect.
-        #
-        def fakeExternalProxiesRemoved():
-            return [
-                (
-                    "transporter#calendar-proxy-write",
-                    set(["8B4288F6-CC82-491D-8EF9-642EF4F3E7D0"])
-                ),
-            ]
-
-        updater = GroupMembershipCacheUpdater(
-            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30, 30,
-            cache=cache, useExternalProxies=True,
-            externalProxiesSource=fakeExternalProxiesRemoved)
-
-        yield updater.updateCache()
-
-        delegates = (
-
-            # record name
-            # read-write delegators
-            # read-only delegators
-            # groups delegate is in (restricted to only those groups
-            #   participating in delegation)
-
-            # Note: "transporter" is now gone for wsanchez and cdaboo
-
-            ("wsanchez",
-             set(["mercury", "apollo", "orion", "gemini"]),
-             set(["non_calendar_proxy"]),
-             set(['left_coast',
-                  'both_coasts',
-                  'recursive1_coasts',
-                  'recursive2_coasts',
-                  'gemini#calendar-proxy-write',
-                ]),
-            ),
-            ("cdaboo",
-             set(["apollo", "orion", "non_calendar_proxy"]),
-             set(["non_calendar_proxy"]),
-             set(['both_coasts',
-                  'non_calendar_group',
-                  'recursive1_coasts',
-                  'recursive2_coasts',
-                ]),
-            ),
-            ("lecroy",
-             set(["apollo", "mercury", "non_calendar_proxy", "transporter"]),
-             set(),
-             set(['both_coasts',
-                  'left_coast',
-                  'non_calendar_group',
-                  'transporter#calendar-proxy-write',
-                ]),
-            ),
-        )
-
-        for name, write, read, groups in delegates:
-            delegate = self._getPrincipalByShortName(DirectoryService.recordType_users, name)
-
-            proxyFor = (yield delegate.proxyFor(True))
-            self.assertEquals(
-                set([p.record.guid for p in proxyFor]),
-                write,
-            )
-            proxyFor = (yield delegate.proxyFor(False))
-            self.assertEquals(
-                set([p.record.guid for p in proxyFor]),
-                read,
-            )
-            groupsIn = (yield delegate.groupMemberships())
-            uids = set()
-            for group in groupsIn:
-                try:
-                    uid = group.uid # a sub-principal
-                except AttributeError:
-                    uid = group.record.guid # a regular group
-                uids.add(uid)
-            self.assertEquals(
-                set(uids),
-                groups,
-            )
-
-        #
-        # Now remove all external assignments, and those should take effect.
-        #
-        def fakeExternalProxiesEmpty():
-            return []
-
-        updater = GroupMembershipCacheUpdater(
-            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30, 30,
-            cache=cache, useExternalProxies=True,
-            externalProxiesSource=fakeExternalProxiesEmpty)
-
-        yield updater.updateCache()
-
-        delegates = (
-
-            # record name
-            # read-write delegators
-            # read-only delegators
-            # groups delegate is in (restricted to only those groups
-            #   participating in delegation)
-
-            # Note: "transporter" is now gone for everyone
-
-            ("wsanchez",
-             set(["mercury", "apollo", "orion", "gemini"]),
-             set(["non_calendar_proxy"]),
-             set(['left_coast',
-                  'both_coasts',
-                  'recursive1_coasts',
-                  'recursive2_coasts',
-                  'gemini#calendar-proxy-write',
-                ]),
-            ),
-            ("cdaboo",
-             set(["apollo", "orion", "non_calendar_proxy"]),
-             set(["non_calendar_proxy"]),
-             set(['both_coasts',
-                  'non_calendar_group',
-                  'recursive1_coasts',
-                  'recursive2_coasts',
-                ]),
-            ),
-            ("lecroy",
-             set(["apollo", "mercury", "non_calendar_proxy"]),
-             set(),
-             set(['both_coasts',
-                  'left_coast',
-                  'non_calendar_group',
-                ]),
-            ),
-        )
-
-        for name, write, read, groups in delegates:
-            delegate = self._getPrincipalByShortName(DirectoryService.recordType_users, name)
-
-            proxyFor = (yield delegate.proxyFor(True))
-            self.assertEquals(
-                set([p.record.guid for p in proxyFor]),
-                write,
-            )
-            proxyFor = (yield delegate.proxyFor(False))
-            self.assertEquals(
-                set([p.record.guid for p in proxyFor]),
-                read,
-            )
-            groupsIn = (yield delegate.groupMemberships())
-            uids = set()
-            for group in groupsIn:
-                try:
-                    uid = group.uid # a sub-principal
-                except AttributeError:
-                    uid = group.record.guid # a regular group
-                uids.add(uid)
-            self.assertEquals(
-                set(uids),
-                groups,
-            )
-
-        #
-        # Now add back an external assignment, and it should take effect.
-        #
-        def fakeExternalProxiesAdded():
-            return [
-                (
-                    "transporter#calendar-proxy-write",
-                    set(["8B4288F6-CC82-491D-8EF9-642EF4F3E7D0"])
-                ),
-            ]
-
-        updater = GroupMembershipCacheUpdater(
-            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30, 30,
-            cache=cache, useExternalProxies=True,
-            externalProxiesSource=fakeExternalProxiesAdded)
-
-        yield updater.updateCache()
-
-        delegates = (
-
-            # record name
-            # read-write delegators
-            # read-only delegators
-            # groups delegate is in (restricted to only those groups
-            #   participating in delegation)
-
-            ("wsanchez",
-             set(["mercury", "apollo", "orion", "gemini"]),
-             set(["non_calendar_proxy"]),
-             set(['left_coast',
-                  'both_coasts',
-                  'recursive1_coasts',
-                  'recursive2_coasts',
-                  'gemini#calendar-proxy-write',
-                ]),
-            ),
-            ("cdaboo",
-             set(["apollo", "orion", "non_calendar_proxy"]),
-             set(["non_calendar_proxy"]),
-             set(['both_coasts',
-                  'non_calendar_group',
-                  'recursive1_coasts',
-                  'recursive2_coasts',
-                ]),
-            ),
-            ("lecroy",
-             set(["apollo", "mercury", "non_calendar_proxy", "transporter"]),
-             set(),
-             set(['both_coasts',
-                  'left_coast',
-                  'non_calendar_group',
-                  'transporter#calendar-proxy-write',
-                ]),
-            ),
-        )
-
-        for name, write, read, groups in delegates:
-            delegate = self._getPrincipalByShortName(DirectoryService.recordType_users, name)
-
-            proxyFor = (yield delegate.proxyFor(True))
-            self.assertEquals(
-                set([p.record.guid for p in proxyFor]),
-                write,
-            )
-            proxyFor = (yield delegate.proxyFor(False))
-            self.assertEquals(
-                set([p.record.guid for p in proxyFor]),
-                read,
-            )
-            groupsIn = (yield delegate.groupMemberships())
-            uids = set()
-            for group in groupsIn:
-                try:
-                    uid = group.uid # a sub-principal
-                except AttributeError:
-                    uid = group.record.guid # a regular group
-                uids.add(uid)
-            self.assertEquals(
-                set(uids),
-                groups,
-            )
-
-
-    def test_diffAssignments(self):
-        """
-        Ensure external proxy assignment diffing works
-        """
-
-        self.assertEquals(
-            (
-                # changed
-                [],
-                # removed
-                [],
-            ),
-            diffAssignments(
-                # old
-                [],
-                # new
-                [],
-            )
-        )
-
-        self.assertEquals(
-            (
-                # changed
-                [],
-                # removed
-                [],
-            ),
-            diffAssignments(
-                # old
-                [("B", set(["3"])), ("A", set(["1", "2"])), ],
-                # new
-                [("A", set(["1", "2"])), ("B", set(["3"])), ],
-            )
-        )
-
-        self.assertEquals(
-            (
-                # changed
-                [("A", set(["1", "2"])), ("B", set(["3"])), ],
-                # removed
-                [],
-            ),
-            diffAssignments(
-                # old
-                [],
-                # new
-                [("A", set(["1", "2"])), ("B", set(["3"])), ],
-            )
-        )
-
-        self.assertEquals(
-            (
-                # changed
-                [],
-                # removed
-                ["A", "B"],
-            ),
-            diffAssignments(
-                # old
-                [("A", set(["1", "2"])), ("B", set(["3"])), ],
-                # new
-                [],
-            )
-        )
-
-        self.assertEquals(
-            (
-                # changed
-                [("A", set(["2"])), ("C", set(["4", "5"])), ("D", set(["6"])), ],
-                # removed
-                ["B"],
-            ),
-            diffAssignments(
-                # old
-                [("A", set(["1", "2"])), ("B", set(["3"])), ("C", set(["4"])), ],
-                # new
-                [("D", set(["6"])), ("C", set(["4", "5"])), ("A", set(["2"])), ],
-            )
-        )
-
-
-    @inlineCallbacks
-    def test_groupMembershipCacheSnapshot(self):
-        """
-        The group membership cache creates a snapshot (a pickle file) of
-        the member -> groups dictionary, and can quickly refresh memcached
-        from that snapshot when restarting the server.
-        """
-        cache = GroupMembershipCache("ProxyDB", expireSeconds=60)
-        # Having a groupMembershipCache assigned to the directory service is the
-        # trigger to use such a cache:
-        self.directoryService.groupMembershipCache = cache
-
-        updater = GroupMembershipCacheUpdater(
-            calendaruserproxy.ProxyDBService, self.directoryService, 30, 30, 30,
-            cache=cache)
-
-        dataRoot = FilePath(config.DataRoot)
-        snapshotFile = dataRoot.child("memberships_cache")
-
-        # Snapshot doesn't exist initially
-        self.assertFalse(snapshotFile.exists())
-
-        # Try a fast update (as when the server starts up for the very first
-        # time), but since the snapshot doesn't exist we fault in from the
-        # directory (fast now is False), and snapshot will get created
-
-        # Note that because fast=True and isPopulated() is False, locking is
-        # ignored:
-        yield cache.acquireLock()
-
-        self.assertFalse((yield cache.isPopulated()))
-        fast, numMembers, numChanged = (yield updater.updateCache(fast=True))
-        self.assertEquals(fast, False)
-        self.assertEquals(numMembers, 9)
-        self.assertEquals(numChanged, 9)
-        self.assertTrue(snapshotFile.exists())
-        self.assertTrue((yield cache.isPopulated()))
-
-        yield cache.releaseLock()
-
-        # Try another fast update where the snapshot already exists (as in a
-        # server-restart scenario), which will only read from the snapshot
-        # as indicated by the return value for "fast".  Note that the cache
-        # is already populated so updateCache( ) in fast mode will not do
-        # anything, and numMembers will be 0.
-        fast, numMembers, numChanged = (yield updater.updateCache(fast=True))
-        self.assertEquals(fast, True)
-        self.assertEquals(numMembers, 0)
-
-        # Try an update which faults in from the directory (fast=False)
-        fast, numMembers, numChanged = (yield updater.updateCache(fast=False))
-        self.assertEquals(fast, False)
-        self.assertEquals(numMembers, 9)
-        self.assertEquals(numChanged, 0)
-
-        # Verify the snapshot contains the pickled dictionary we expect
-        expected = {
-            "46D9D716-CBEE-490F-907A-66FA6C3767FF":
-                set([
-                    u"00599DAF-3E75-42DD-9DB7-52617E79943F",
-                ]),
-            "5A985493-EE2C-4665-94CF-4DFEA3A89500":
-                set([
-                    u"non_calendar_group",
-                    u"recursive1_coasts",
-                    u"recursive2_coasts",
-                    u"both_coasts"
-                ]),
-            "6423F94A-6B76-4A3A-815B-D52CFD77935D":
-                set([
-                    u"left_coast",
-                    u"recursive1_coasts",
-                    u"recursive2_coasts",
-                    u"both_coasts"
-                ]),
-            "5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1":
-                set([
-                    u"left_coast",
-                    u"both_coasts"
-                ]),
-            "8B4288F6-CC82-491D-8EF9-642EF4F3E7D0":
-                set([
-                    u"non_calendar_group",
-                    u"left_coast",
-                    u"both_coasts"
-                ]),
-            "left_coast":
-                 set([
-                     u"both_coasts"
-                 ]),
-            "recursive1_coasts":
-                 set([
-                     u"recursive1_coasts",
-                     u"recursive2_coasts"
-                 ]),
-            "recursive2_coasts":
-                set([
-                    u"recursive1_coasts",
-                    u"recursive2_coasts"
-                ]),
-            "right_coast":
-                set([
-                    u"both_coasts"
-                ])
-        }
-        members = pickle.loads(snapshotFile.getContent())
-        self.assertEquals(members, expected)
-
-        # "Corrupt" the snapshot and verify it is regenerated properly
-        snapshotFile.setContent("xyzzy")
-        cache.delete("group-cacher-populated")
-        fast, numMembers, numChanged = (yield updater.updateCache(fast=True))
-        self.assertEquals(fast, False)
-        self.assertEquals(numMembers, 9)
-        self.assertEquals(numChanged, 9)
-        self.assertTrue(snapshotFile.exists())
-        members = pickle.loads(snapshotFile.getContent())
-        self.assertEquals(members, expected)
-
-
-    def test_autoAcceptMembers(self):
-        """
-        autoAcceptMembers( ) returns an empty list if no autoAcceptGroup is
-        assigned, or the expanded membership if assigned.
-        """
-
-        # No auto-accept-group for "orion" in augments.xml
-        orion = self.directoryService.recordWithGUID("orion")
-        self.assertEquals(orion.autoAcceptMembers(), [])
-
-        # "both_coasts" group assigned to "apollo" in augments.xml
-        apollo = self.directoryService.recordWithGUID("apollo")
-        self.assertEquals(
-            set(apollo.autoAcceptMembers()),
-            set([
-                "8B4288F6-CC82-491D-8EF9-642EF4F3E7D0",
-                 "5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1",
-                 "5A985493-EE2C-4665-94CF-4DFEA3A89500",
-                 "6423F94A-6B76-4A3A-815B-D52CFD77935D",
-                 "right_coast",
-                 "left_coast",
-            ])
-        )
-
-
-    # @inlineCallbacks
-    # def testScheduling(self):
-    #     """
-    #     Exercise schedulePolledGroupCachingUpdate
-    #     """
-
-    #     groupCacher = StubGroupCacher()
-
-
-    #     def decorateTransaction(txn):
-    #         txn._groupCacher = groupCacher
-
-    #     store = yield buildStore(self, None)
-    #     store.callWithNewTransactions(decorateTransaction)
-    #     wp = (yield schedulePolledGroupCachingUpdate(store))
-    #     yield wp.whenExecuted()
-    #     self.assertTrue(groupCacher.called)
-
-    # testScheduling.skip = "Fix WorkProposal to track delayed calls and cancel them"
-
-
-
-class StubGroupCacher(object):
-    def __init__(self):
-        self.called = False
-        self.updateSeconds = 99
-
-
-    def updateCache(self):
-        self.called = True
-
-
-
-class RecordsMatchingTokensTests(TestCase):
-
-    @inlineCallbacks
-    def setUp(self):
-        super(RecordsMatchingTokensTests, self).setUp()
-
-        self.directoryFixture.addDirectoryService(XMLDirectoryService(
-            {
-                'xmlFile' : xmlFile,
-                'augmentService' :
-                    augment.AugmentXMLDB(xmlFiles=(augmentsFile.path,)),
-            }
-        ))
-        calendaruserproxy.ProxyDBService = calendaruserproxy.ProxySqliteDB("proxies.sqlite")
-
-        # Set up a principals hierarchy for each service we're testing with
-        self.principalRootResources = {}
-        name = self.directoryService.__class__.__name__
-        url = "/" + name + "/"
-
-        provisioningResource = DirectoryPrincipalProvisioningResource(url, self.directoryService)
-
-        self.site.resource.putChild(name, provisioningResource)
-
-        self.principalRootResources[self.directoryService.__class__.__name__] = provisioningResource
-
-        yield XMLCalendarUserProxyLoader(proxiesFile.path).updateProxyDB()
-
-
-    def tearDown(self):
-        """ Empty the proxy db between tests """
-        return calendaruserproxy.ProxyDBService.clean() #@UndefinedVariable
-
-
-    @inlineCallbacks
-    def test_recordsMatchingTokens(self):
-        """
-        Exercise the default recordsMatchingTokens implementation
-        """
-        records = list((yield self.directoryService.recordsMatchingTokens(["Use", "01"])))
-        self.assertNotEquals(len(records), 0)
-        shorts = [record.shortNames[0] for record in records]
-        self.assertTrue("user01" in shorts)
-
-        records = list((yield self.directoryService.recordsMatchingTokens(['"quotey"'],
-            context=self.directoryService.searchContext_attendee)))
-        self.assertEquals(len(records), 1)
-        self.assertEquals(records[0].shortNames[0], "doublequotes")
-
-        records = list((yield self.directoryService.recordsMatchingTokens(["coast"])))
-        self.assertEquals(len(records), 5)
-
-        records = list((yield self.directoryService.recordsMatchingTokens(["poll"],
-            context=self.directoryService.searchContext_location)))
-        self.assertEquals(len(records), 1)
-        self.assertEquals(records[0].shortNames[0], "apollo")
-
-
-    def test_recordTypesForSearchContext(self):
-        self.assertEquals(
-            [self.directoryService.recordType_locations],
-            self.directoryService.recordTypesForSearchContext("location")
-        )
-        self.assertEquals(
-            [self.directoryService.recordType_resources],
-            self.directoryService.recordTypesForSearchContext("resource")
-        )
-        self.assertEquals(
-            [self.directoryService.recordType_users],
-            self.directoryService.recordTypesForSearchContext("user")
-        )
-        self.assertEquals(
-            [self.directoryService.recordType_groups],
-            self.directoryService.recordTypesForSearchContext("group")
-        )
-        self.assertEquals(
-            set([
-                self.directoryService.recordType_resources,
-                self.directoryService.recordType_users,
-                self.directoryService.recordType_groups
-            ]),
-            set(self.directoryService.recordTypesForSearchContext("attendee"))
-        )
-
-
-
-class GUIDTests(TestCase):
-
-    def setUp(self):
-        self.service = DirectoryService()
-        self.service.setRealm("test")
-        self.service.baseGUID = "0E8E6EC2-8E52-4FF3-8F62-6F398B08A498"
-
-
-    def test_normalizeUUID(self):
-
-        # Ensure that record.guid automatically gets normalized to
-        # uppercase+hyphenated form if the value is one that uuid.UUID( )
-        # recognizes.
-
-        data = (
-            (
-                "0543A85A-D446-4CF6-80AE-6579FA60957F",
-                "0543A85A-D446-4CF6-80AE-6579FA60957F"
-            ),
-            (
-                "0543a85a-d446-4cf6-80ae-6579fa60957f",
-                "0543A85A-D446-4CF6-80AE-6579FA60957F"
-            ),
-            (
-                "0543A85AD4464CF680AE-6579FA60957F",
-                "0543A85A-D446-4CF6-80AE-6579FA60957F"
-            ),
-            (
-                "0543a85ad4464cf680ae6579fa60957f",
-                "0543A85A-D446-4CF6-80AE-6579FA60957F"
-            ),
-            (
-                "foo",
-                "foo"
-            ),
-            (
-                None,
-                None
-            ),
-        )
-        for original, expected in data:
-            self.assertEquals(expected, normalizeUUID(original))
-            record = DirectoryRecord(self.service, "users", original,
-                shortNames=("testing",))
-            self.assertEquals(expected, record.guid)
-
-
-
-class DirectoryServiceTests(TestCase):
-    """
-    Test L{DirectoryService} apis.
-    """
-
-    class StubDirectoryService(DirectoryService):
-
-        def __init__(self):
-            self._records = {}
-
-
-        def createRecord(self, recordType, guid=None, shortNames=(), authIDs=set(),
-            fullName=None, firstName=None, lastName=None, emailAddresses=set(),
-            uid=None, password=None, **kwargs):
-            """
-            Create/persist a directory record based on the given values
-            """
-
-            record = DirectoryRecord(
-                self,
-                recordType,
-                guid=guid,
-                shortNames=shortNames,
-                authIDs=authIDs,
-                fullName=fullName,
-                firstName=firstName,
-                lastName=lastName,
-                emailAddresses=emailAddresses,
-                uid=uid,
-                password=password,
-                **kwargs
-            )
-            self._records.setdefault(recordType, []).append(record)
-
-
-        def recordTypes(self):
-            return self._records.keys()
-
-
-        def listRecords(self, recordType):
-            return self._records[recordType]
-
-
-    def setUp(self):
-        self.service = self.StubDirectoryService()
-        self.service.setRealm("test")
-        self.service.baseGUID = "0E8E6EC2-8E52-4FF3-8F62-6F398B08A498"
-
-
-    def test_recordWithCalendarUserAddress_principal_uris(self):
-        """
-        Make sure that recordWithCalendarUserAddress handles percent-encoded
-        principal URIs.
-        """
-
-        self.service.createRecord(
-            DirectoryService.recordType_users,
-            guid="user01",
-            shortNames=("user 01", "User 01"),
-            fullName="User 01",
-            enabledForCalendaring=True,
-        )
-        self.service.createRecord(
-            DirectoryService.recordType_users,
-            guid="user02",
-            shortNames=("user02", "User 02"),
-            fullName="User 02",
-            enabledForCalendaring=True,
-        )
-
-        record = self.service.recordWithCalendarUserAddress("/principals/users/user%2001")
-        self.assertTrue(record is not None)
-        record = self.service.recordWithCalendarUserAddress("/principals/users/user02")
-        self.assertTrue(record is not None)
-        record = self.service.recordWithCalendarUserAddress("/principals/users/user%0202")
-        self.assertTrue(record is None)
-
-
-
-class DirectoryRecordTests(TestCase):
-    """
-    Test L{DirectoryRecord} apis.
-    """
-
-    def setUp(self):
-        self.service = DirectoryService()
-        self.service.setRealm("test")
-        self.service.baseGUID = "0E8E6EC2-8E52-4FF3-8F62-6F398B08A498"
-
-
-    def test_cacheToken(self):
-        """
-        Test that DirectoryRecord.cacheToken is different for different records, and its value changes
-        as attributes on the record change.
-        """
-
-        record1 = DirectoryRecord(self.service, "users", str(uuid.uuid4()), shortNames=("testing1",))
-        record2 = DirectoryRecord(self.service, "users", str(uuid.uuid4()), shortNames=("testing2",))
-        self.assertNotEquals(record1.cacheToken(), record2.cacheToken())
-
-        cache1 = record1.cacheToken()
-        record1.enabled = True
-        self.assertNotEquals(cache1, record1.cacheToken())
-
-        cache1 = record1.cacheToken()
-        record1.enabledForCalendaring = True
-        self.assertNotEquals(cache1, record1.cacheToken())

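The deleted tests above pin down the behaviour of GroupMembershipCacheUpdater.expandedMembers(): given the name-to-members mapping built from the directory, it flattens nested groups, and because recursive1_coasts and recursive2_coasts contain each other it must tolerate cycles. A minimal sketch of that kind of expansion, reusing the group data asserted in the test, follows; the function is illustrative, not the removed implementation.

# Illustrative sketch of recursive group expansion with cycle protection;
# "groups" has the same shape as the dictionary asserted in the deleted test.

def expandedMembers(groups, groupKey, seen=None):
    if seen is None:
        seen = set()
    members = set()
    for member in groups.get(groupKey, ()):
        members.add(member)
        if member in groups and member not in seen:
            # Nested group: recurse, but only once per group so the
            # recursive1_coasts <-> recursive2_coasts cycle terminates.
            seen.add(member)
            members |= expandedMembers(groups, member, seen)
    return members


groups = {
    "both_coasts": set(["left_coast", "right_coast"]),
    "left_coast": set([
        "5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1",
        "6423F94A-6B76-4A3A-815B-D52CFD77935D",
        "8B4288F6-CC82-491D-8EF9-642EF4F3E7D0",
    ]),
    "right_coast": set(["5A985493-EE2C-4665-94CF-4DFEA3A89500"]),
}

# Expanding "both_coasts" yields the two nested group names plus all four
# member GUIDs, matching the assertion in the deleted test above.
print(sorted(expandedMembers(groups, "both_coasts")))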
Deleted: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_modify.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_modify.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_modify.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1,159 +0,0 @@
-##
-# Copyright (c) 2005-2014 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-import os
-from twistedcaldav.config import config
-from twistedcaldav.test.util import TestCase
-from calendarserver.tools.util import getDirectory
-from twext.python.filepath import CachingFilePath as FilePath
-from twistedcaldav.directory.directory import DirectoryError
-
-
-class ModificationTestCase(TestCase):
-
-    def setUp(self):
-        super(ModificationTestCase, self).setUp()
-
-        testRoot = os.path.join(os.path.dirname(__file__), "modify")
-        #configFileName = os.path.join(testRoot, "caldavd.plist")
-        #config.load(configFileName)
-
-        usersFile = os.path.join(testRoot, "users-groups.xml")
-        config.DirectoryService.params.xmlFile = usersFile
-
-        # Copy xml file containing locations/resources to a temp file because
-        # we're going to be modifying it during testing
-
-        origResourcesFile = FilePath(os.path.join(os.path.dirname(__file__),
-            "modify", "resources-locations.xml"))
-        copyResourcesFile = FilePath(self.mktemp())
-        origResourcesFile.copyTo(copyResourcesFile)
-        config.ResourceService.params.xmlFile = copyResourcesFile
-        config.ResourceService.Enabled = True
-
-        augmentsFile = os.path.join(testRoot, "augments.xml")
-        config.AugmentService.params.xmlFiles = (augmentsFile,)
-
-
-    def test_createRecord(self):
-        directory = getDirectory()
-
-        record = directory.recordWithUID("resource01")
-        self.assertEquals(record, None)
-
-        directory.createRecord("resources", guid="resource01",
-            shortNames=("resource01",), uid="resource01",
-            emailAddresses=("res1 at example.com", "res2 at example.com"),
-            comment="Test Comment")
-
-        record = directory.recordWithUID("resource01")
-        self.assertNotEquals(record, None)
-
-        self.assertEquals(len(record.emailAddresses), 2)
-        self.assertEquals(record.extras['comment'], "Test Comment")
-
-        directory.createRecord("resources", guid="resource02", shortNames=("resource02",), uid="resource02")
-
-        record = directory.recordWithUID("resource02")
-        self.assertNotEquals(record, None)
-
-        # Make sure old records are still there:
-        record = directory.recordWithUID("resource01")
-        self.assertNotEquals(record, None)
-        record = directory.recordWithUID("location01")
-        self.assertNotEquals(record, None)
-
-
-    def test_destroyRecord(self):
-        directory = getDirectory()
-
-        record = directory.recordWithUID("resource01")
-        self.assertEquals(record, None)
-
-        directory.createRecord("resources", guid="resource01", shortNames=("resource01",), uid="resource01")
-
-        record = directory.recordWithUID("resource01")
-        self.assertNotEquals(record, None)
-
-        directory.destroyRecord("resources", guid="resource01")
-
-        record = directory.recordWithUID("resource01")
-        self.assertEquals(record, None)
-
-        # Make sure old records are still there:
-        record = directory.recordWithUID("location01")
-        self.assertNotEquals(record, None)
-
-
-    def test_updateRecord(self):
-        directory = getDirectory()
-
-        directory.createRecord("resources", guid="resource01",
-            shortNames=("resource01",), uid="resource01",
-            fullName="Resource number 1")
-
-        record = directory.recordWithUID("resource01")
-        self.assertEquals(record.fullName, "Resource number 1")
-
-        directory.updateRecord("resources", guid="resource01",
-            shortNames=("resource01", "r01"), uid="resource01",
-            fullName="Resource #1", firstName="First", lastName="Last",
-            emailAddresses=("resource01 at example.com", "r01 at example.com"),
-            comment="Test Comment")
-
-        record = directory.recordWithUID("resource01")
-        self.assertEquals(record.fullName, "Resource #1")
-        self.assertEquals(record.firstName, "First")
-        self.assertEquals(record.lastName, "Last")
-        self.assertEquals(set(record.shortNames), set(["resource01", "r01"]))
-        self.assertEquals(record.emailAddresses,
-            set(["resource01 at example.com", "r01 at example.com"]))
-        self.assertEquals(record.extras['comment'], "Test Comment")
-
-        # Make sure old records are still there:
-        record = directory.recordWithUID("location01")
-        self.assertNotEquals(record, None)
-
-
-    def test_createDuplicateRecord(self):
-        directory = getDirectory()
-
-        directory.createRecord("resources", guid="resource01", shortNames=("resource01",), uid="resource01")
-        self.assertRaises(DirectoryError, directory.createRecord, "resources", guid="resource01", shortNames=("resource01",), uid="resource01")
-
-
-    def test_missingShortNames(self):
-        directory = getDirectory()
-
-        directory.createRecord("resources", guid="resource01")
-
-        record = directory.recordWithUID("resource01")
-        self.assertEquals(record.shortNames[0], "resource01")
-
-        directory.updateRecord("resources", guid="resource01",
-            fullName="Resource #1")
-
-        record = directory.recordWithUID("resource01")
-        self.assertEquals(record.shortNames[0], "resource01")
-        self.assertEquals(record.fullName, "Resource #1")
-
-
-    def test_missingGUID(self):
-        directory = getDirectory()
-
-        record = directory.createRecord("resources")
-
-        self.assertEquals(record.shortNames[0], record.guid)

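For reference, the calling pattern the deleted test_modify.py exercised, which goes away along with twistedcaldav/directory/directory.py, looked roughly like the sketch below. Here "directory" stands for the augmented XML service the old getDirectory() returned, and the assertions mirror the deleted tests rather than adding new behaviour.

# Sketch of the record create/update/destroy pattern the deleted tests cover.
# "directory" is assumed to be the old-style service from
# calendarserver.tools.util.getDirectory(); this API is removed by this change.

def exercise_resource_lifecycle(directory):
    directory.createRecord(
        "resources",
        guid="resource01",
        shortNames=("resource01",),
        uid="resource01",
        fullName="Resource number 1",
    )

    # Unknown keyword arguments (e.g. comment) end up in record.extras.
    directory.updateRecord(
        "resources",
        guid="resource01",
        shortNames=("resource01", "r01"),
        uid="resource01",
        fullName="Resource #1",
        comment="Test Comment",
    )
    record = directory.recordWithUID("resource01")
    assert record.fullName == "Resource #1"
    assert record.extras["comment"] == "Test Comment"
    assert set(record.shortNames) == set(["resource01", "r01"])

    directory.destroyRecord("resources", guid="resource01")
    assert directory.recordWithUID("resource01") is None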
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_principal.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_principal.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_principal.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -16,68 +16,63 @@
 from __future__ import print_function
 
 import os
+from urllib import quote
 
 from twisted.cred.credentials import UsernamePassword
 from twisted.internet.defer import inlineCallbacks
-from txdav.xml import element as davxml
-from txweb2.dav.fileop import rmdir
-from txweb2.dav.resource import AccessDeniedError
-from txweb2.http import HTTPError
-from txweb2.test.test_server import SimpleRequest
-
+from twistedcaldav import carddavxml
 from twistedcaldav.cache import DisabledCacheNotifier
 from twistedcaldav.caldavxml import caldav_namespace
 from twistedcaldav.config import config
 from twistedcaldav.customxml import calendarserver_namespace
-from twistedcaldav.directory import augment, calendaruserproxy
 from twistedcaldav.directory.addressbook import DirectoryAddressBookHomeProvisioningResource
 from twistedcaldav.directory.calendar import DirectoryCalendarHomeProvisioningResource
-from twistedcaldav.directory.directory import DirectoryService
-from twistedcaldav.directory.xmlfile import XMLDirectoryService
-from twistedcaldav.directory.test.test_xmlfile import xmlFile, augmentsFile
+from twistedcaldav.directory.principal import DirectoryCalendarPrincipalResource
 from twistedcaldav.directory.principal import DirectoryPrincipalProvisioningResource
-from twistedcaldav.directory.principal import DirectoryPrincipalTypeProvisioningResource
 from twistedcaldav.directory.principal import DirectoryPrincipalResource
-from twistedcaldav.directory.principal import DirectoryCalendarPrincipalResource
-from twistedcaldav import carddavxml
-import twistedcaldav.test.util
-
+from twistedcaldav.directory.principal import DirectoryPrincipalTypeProvisioningResource
+from twistedcaldav.test.util import StoreTestCase
 from txdav.common.datastore.file import CommonDataStore
-from urllib import quote
+from txdav.xml import element as davxml
+from txweb2.dav.fileop import rmdir
+from txweb2.dav.resource import AccessDeniedError
+from txweb2.http import HTTPError
+from txweb2.test.test_server import SimpleRequest
 
 
 
-class ProvisionedPrincipals (twistedcaldav.test.util.TestCase):
+
+class ProvisionedPrincipals(StoreTestCase):  # twistedcaldav.test.util.TestCase):
     """
     Directory service provisioned principals.
     """
-    def setUp(self):
-        super(ProvisionedPrincipals, self).setUp()
+    # def setUp(self):
+    #     super(ProvisionedPrincipals, self).setUp()
 
-        self.directoryServices = (
-            XMLDirectoryService(
-                {
-                    'xmlFile' : xmlFile,
-                    'augmentService' :
-                        augment.AugmentXMLDB(xmlFiles=(augmentsFile.path,)),
-                }
-            ),
-        )
+    #     self.directoryServices = (
+    #         XMLDirectoryService(
+    #             {
+    #                 'xmlFile' : xmlFile,
+    #                 'augmentService' :
+    #                     augment.AugmentXMLDB(xmlFiles=(augmentsFile.path,)),
+    #             }
+    #         ),
+    #     )
 
-        # Set up a principals hierarchy for each service we're testing with
-        self.principalRootResources = {}
-        for directory in self.directoryServices:
-            name = directory.__class__.__name__
-            url = "/" + name + "/"
+    #     # Set up a principals hierarchy for each service we're testing with
+    #     self.principalRootResources = {}
+    #     for directory in self.directoryServices:
+    #         name = directory.__class__.__name__
+    #         url = "/" + name + "/"
 
-            provisioningResource = DirectoryPrincipalProvisioningResource(url, directory)
-            directory.setPrincipalCollection(provisioningResource)
+    #         provisioningResource = DirectoryPrincipalProvisioningResource(url, directory)
+    #         directory.setPrincipalCollection(provisioningResource)
 
-            self.site.resource.putChild(name, provisioningResource)
+    #         self.site.resource.putChild(name, provisioningResource)
 
-            self.principalRootResources[directory.__class__.__name__] = provisioningResource
+    #         self.principalRootResources[directory.__class__.__name__] = provisioningResource
 
-        calendaruserproxy.ProxyDBService = calendaruserproxy.ProxySqliteDB(os.path.abspath(self.mktemp()))
+    #     calendaruserproxy.ProxyDBService = calendaruserproxy.ProxySqliteDB(os.path.abspath(self.mktemp()))
 
 
     @inlineCallbacks
@@ -95,7 +90,8 @@
 
         DirectoryPrincipalResource.principalURL(),
         """
-        for directory in self.directoryServices:
+        directory = self.directory
+        if True:
             #print("\n -> %s" % (directory.__class__.__name__,))
             provisioningResource = self.principalRootResources[directory.__class__.__name__]
 
@@ -170,33 +166,33 @@
         """
         DirectoryPrincipalProvisioningResource.principalForUser()
         """
-        for directory in self.directoryServices:
-            provisioningResource = self.principalRootResources[directory.__class__.__name__]
+        directory = self.directory
+        provisioningResource = self.principalRootResources[directory.__class__.__name__]
 
-            for user in directory.listRecords(DirectoryService.recordType_users):
-                userResource = provisioningResource.principalForUser(user.shortNames[0])
-                if user.enabled:
-                    self.failIf(userResource is None)
-                    self.assertEquals(user, userResource.record)
-                else:
-                    self.failIf(userResource is not None)
+        for user in directory.listRecords(DirectoryService.recordType_users):
+            userResource = provisioningResource.principalForUser(user.shortNames[0])
+            if user.enabled:
+                self.failIf(userResource is None)
+                self.assertEquals(user, userResource.record)
+            else:
+                self.failIf(userResource is not None)
 
 
     def test_principalForAuthID(self):
         """
         DirectoryPrincipalProvisioningResource.principalForAuthID()
         """
-        for directory in self.directoryServices:
-            provisioningResource = self.principalRootResources[directory.__class__.__name__]
+        directory = self.directory
+        provisioningResource = self.principalRootResources[directory.__class__.__name__]
 
-            for user in directory.listRecords(DirectoryService.recordType_users):
-                creds = UsernamePassword(user.shortNames[0], "bogus")
-                userResource = provisioningResource.principalForAuthID(creds)
-                if user.enabled:
-                    self.failIf(userResource is None)
-                    self.assertEquals(user, userResource.record)
-                else:
-                    self.failIf(userResource is not None)
+        for user in directory.listRecords(DirectoryService.recordType_users):
+            creds = UsernamePassword(user.shortNames[0], "bogus")
+            userResource = provisioningResource.principalForAuthID(creds)
+            if user.enabled:
+                self.failIf(userResource is None)
+                self.assertEquals(user, userResource.record)
+            else:
+                self.failIf(userResource is not None)
 
 
     def test_principalForUID(self):
@@ -465,23 +461,23 @@
         # Need to create a addressbook home provisioner for each service.
         addressBookRootResources = {}
 
-        for directory in self.directoryServices:
-            path = os.path.join(self.docroot, directory.__class__.__name__)
+        directory = self.directory
+        path = os.path.join(self.docroot, directory.__class__.__name__)
 
-            if os.path.exists(path):
-                rmdir(path)
-            os.mkdir(path)
+        if os.path.exists(path):
+            rmdir(path)
+        os.mkdir(path)
 
-            # Need a data store
-            _newStore = CommonDataStore(path, None, None, True, False)
+        # Need a data store
+        _newStore = CommonDataStore(path, None, None, True, False)
 
-            provisioningResource = DirectoryAddressBookHomeProvisioningResource(
-                directory,
-                "/addressbooks/",
-                _newStore
-            )
+        provisioningResource = DirectoryAddressBookHomeProvisioningResource(
+            directory,
+            "/addressbooks/",
+            _newStore
+        )
 
-            addressBookRootResources[directory.__class__.__name__] = provisioningResource
+        addressBookRootResources[directory.__class__.__name__] = provisioningResource
 
         # AddressBook home provisioners should result in addressBook homes.
         for provisioningResource, _ignore_recordType, recordResource, record in self._allRecords():
@@ -517,23 +513,23 @@
         # Need to create a calendar home provisioner for each service.
         calendarRootResources = {}
 
-        for directory in self.directoryServices:
-            path = os.path.join(self.docroot, directory.__class__.__name__)
+        directory = self.directory
+        path = os.path.join(self.docroot, directory.__class__.__name__)
 
-            if os.path.exists(path):
-                rmdir(path)
-            os.mkdir(path)
+        if os.path.exists(path):
+            rmdir(path)
+        os.mkdir(path)
 
-            # Need a data store
-            _newStore = CommonDataStore(path, None, None, True, False)
+        # Need a data store
+        _newStore = CommonDataStore(path, None, None, True, False)
 
-            provisioningResource = DirectoryCalendarHomeProvisioningResource(
-                directory,
-                "/calendars/",
-                _newStore
-            )
+        provisioningResource = DirectoryCalendarHomeProvisioningResource(
+            directory,
+            "/calendars/",
+            _newStore
+        )
 
-            calendarRootResources[directory.__class__.__name__] = provisioningResource
+        calendarRootResources[directory.__class__.__name__] = provisioningResource
 
         # Calendar home provisioners should result in calendar homes.
         for provisioningResource, _ignore_recordType, recordResource, record in self._allRecords():
@@ -643,19 +639,19 @@
         """
         Default access controls for principal provisioning resources.
         """
-        for directory in self.directoryServices:
-            #print("\n -> %s" % (directory.__class__.__name__,))
-            provisioningResource = self.principalRootResources[directory.__class__.__name__]
+        directory = self.directory
+        #print("\n -> %s" % (directory.__class__.__name__,))
+        provisioningResource = self.principalRootResources[directory.__class__.__name__]
 
-            for args in _authReadOnlyPrivileges(self, provisioningResource, provisioningResource.principalCollectionURL()):
-                yield self._checkPrivileges(*args)
+        for args in _authReadOnlyPrivileges(self, provisioningResource, provisioningResource.principalCollectionURL()):
+            yield self._checkPrivileges(*args)
 
-            for recordType in (yield provisioningResource.listChildren()):
-                #print("   -> %s" % (recordType,))
-                typeResource = provisioningResource.getChild(recordType)
+        for recordType in (yield provisioningResource.listChildren()):
+            #print("   -> %s" % (recordType,))
+            typeResource = provisioningResource.getChild(recordType)
 
-                for args in _authReadOnlyPrivileges(self, typeResource, typeResource.principalCollectionURL()):
-                    yield self._checkPrivileges(*args)
+            for args in _authReadOnlyPrivileges(self, typeResource, typeResource.principalCollectionURL()):
+                yield self._checkPrivileges(*args)
 
 
     def test_propertyToField(self):
@@ -705,14 +701,14 @@
             C{record} is the directory service record
             for each record in each directory in C{directoryServices}.
         """
-        for directory in self.directoryServices:
-            provisioningResource = self.principalRootResources[
-                directory.__class__.__name__
-            ]
-            for recordType in directory.recordTypes():
-                for record in directory.listRecords(recordType):
-                    recordResource = provisioningResource.principalForRecord(record)
-                    yield provisioningResource, recordType, recordResource, record
+        directory = self.directory
+        provisioningResource = self.principalRootResources[
+            directory.__class__.__name__
+        ]
+        for recordType in directory.recordTypes():
+            for record in directory.listRecords(recordType):
+                recordResource = provisioningResource.principalForRecord(record)
+                yield provisioningResource, recordType, recordResource, record
 
 
     def _checkPrivileges(self, resource, url, principal, privilege, allowed):

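The test_principal.py hunks above all follow the same shape: StoreTestCase supplies a single self.directory, so the old loops over self.directoryServices collapse to one lookup. A sketch of the resulting test idiom is below; ExamplePrincipalTest and test_example are hypothetical names, and principalRootResources mirrors the attribute used in the diff rather than anything StoreTestCase is guaranteed to provide.

# Sketch of the single-directory test idiom the refactor converges on.
# StoreTestCase comes from twistedcaldav.test.util as imported in the diff;
# the class and method names here are hypothetical.

from twisted.internet.defer import inlineCallbacks
from twistedcaldav.test.util import StoreTestCase


class ExamplePrincipalTest(StoreTestCase):

    @inlineCallbacks
    def test_example(self):
        directory = self.directory  # single directory provided by StoreTestCase
        provisioningResource = self.principalRootResources[
            directory.__class__.__name__
        ]
        for recordType in (yield provisioningResource.listChildren()):
            typeResource = provisioningResource.getChild(recordType)
            self.assertNotEquals(typeResource, None)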
Deleted: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_proxyprincipalmembers.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_proxyprincipalmembers.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_proxyprincipalmembers.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1,506 +0,0 @@
-##
-# Copyright (c) 2005-2014 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-from twisted.internet.defer import DeferredList, inlineCallbacks, succeed
-from txdav.xml import element as davxml
-
-from twistedcaldav.directory.directory import DirectoryService
-from twistedcaldav.test.util import xmlFile, augmentsFile, proxiesFile
-from twistedcaldav.directory.principal import DirectoryPrincipalProvisioningResource, \
-    DirectoryPrincipalResource
-from twistedcaldav.directory.xmlfile import XMLDirectoryService
-
-import twistedcaldav.test.util
-from twistedcaldav.directory import augment, calendaruserproxy
-from twistedcaldav.directory.calendaruserproxyloader import XMLCalendarUserProxyLoader
-
-
-class ProxyPrincipals (twistedcaldav.test.util.TestCase):
-    """
-    Directory service provisioned principals.
-    """
-
-    @inlineCallbacks
-    def setUp(self):
-        super(ProxyPrincipals, self).setUp()
-
-        self.directoryFixture.addDirectoryService(XMLDirectoryService(
-            {
-                'xmlFile' : xmlFile,
-                'augmentService' :
-                    augment.AugmentXMLDB(xmlFiles=(augmentsFile.path,)),
-            }
-        ))
-        calendaruserproxy.ProxyDBService = calendaruserproxy.ProxySqliteDB("proxies.sqlite")
-
-        # Set up a principals hierarchy for each service we're testing with
-        self.principalRootResources = {}
-        name = self.directoryService.__class__.__name__
-        url = "/" + name + "/"
-
-        provisioningResource = DirectoryPrincipalProvisioningResource(url, self.directoryService)
-
-        self.site.resource.putChild(name, provisioningResource)
-
-        self.principalRootResources[self.directoryService.__class__.__name__] = provisioningResource
-
-        yield XMLCalendarUserProxyLoader(proxiesFile.path).updateProxyDB()
-
-
-    def tearDown(self):
-        """ Empty the proxy db between tests """
-        return calendaruserproxy.ProxyDBService.clean() #@UndefinedVariable
-
-
-    def _getPrincipalByShortName(self, type, name):
-        provisioningResource = self.principalRootResources[self.directoryService.__class__.__name__]
-        return provisioningResource.principalForShortName(type, name)
-
-
-    def _groupMembersTest(self, recordType, recordName, subPrincipalName, expectedMembers):
-        def gotMembers(members):
-            memberNames = set([p.displayName() for p in members])
-            self.assertEquals(memberNames, set(expectedMembers))
-
-        principal = self._getPrincipalByShortName(recordType, recordName)
-        if subPrincipalName is not None:
-            principal = principal.getChild(subPrincipalName)
-
-        d = principal.expandedGroupMembers()
-        d.addCallback(gotMembers)
-        return d
-
-
-    def _groupMembershipsTest(self, recordType, recordName, subPrincipalName, expectedMemberships):
-        def gotMemberships(memberships):
-            uids = set([p.principalUID() for p in memberships])
-            self.assertEquals(uids, set(expectedMemberships))
-
-        principal = self._getPrincipalByShortName(recordType, recordName)
-        if subPrincipalName is not None:
-            principal = principal.getChild(subPrincipalName)
-
-        d = principal.groupMemberships()
-        d.addCallback(gotMemberships)
-        return d
-
-
-    @inlineCallbacks
-    def _addProxy(self, principal, subPrincipalName, proxyPrincipal):
-
-        if isinstance(principal, tuple):
-            principal = self._getPrincipalByShortName(principal[0], principal[1])
-        principal = principal.getChild(subPrincipalName)
-        members = (yield principal.groupMembers())
-
-        if isinstance(proxyPrincipal, tuple):
-            proxyPrincipal = self._getPrincipalByShortName(proxyPrincipal[0], proxyPrincipal[1])
-        members.add(proxyPrincipal)
-
-        yield principal.setGroupMemberSetPrincipals(members)
-
-
-    @inlineCallbacks
-    def _removeProxy(self, recordType, recordName, subPrincipalName, proxyRecordType, proxyRecordName):
-
-        principal = self._getPrincipalByShortName(recordType, recordName)
-        principal = principal.getChild(subPrincipalName)
-        members = (yield principal.groupMembers())
-
-        proxyPrincipal = self._getPrincipalByShortName(proxyRecordType, proxyRecordName)
-        for p in members:
-            if p.principalUID() == proxyPrincipal.principalUID():
-                members.remove(p)
-                break
-
-        yield principal.setGroupMemberSetPrincipals(members)
-
-
-    @inlineCallbacks
-    def _clearProxy(self, principal, subPrincipalName):
-
-        if isinstance(principal, tuple):
-            principal = self._getPrincipalByShortName(principal[0], principal[1])
-        principal = principal.getChild(subPrincipalName)
-        yield principal.setGroupMemberSetPrincipals(set())
-
-
-    @inlineCallbacks
-    def _proxyForTest(self, recordType, recordName, expectedProxies, read_write):
-        principal = self._getPrincipalByShortName(recordType, recordName)
-        proxies = (yield principal.proxyFor(read_write))
-        proxies = sorted([_principal.displayName() for _principal in proxies])
-        self.assertEquals(proxies, sorted(expectedProxies))
-
-
-    @inlineCallbacks
-    def test_multipleProxyAssignmentsAtOnce(self):
-        yield self._proxyForTest(
-            DirectoryService.recordType_users, "userb",
-            ('a',),
-            True
-        )
-        yield self._proxyForTest(
-            DirectoryService.recordType_users, "userc",
-            ('a',),
-            True
-        )
-
-
-    def test_groupMembersRegular(self):
-        """
-        DirectoryPrincipalResource.expandedGroupMembers()
-        """
-        return self._groupMembersTest(
-            DirectoryService.recordType_groups, "both_coasts", None,
-            ("Chris Lecroy", "David Reid", "Wilfredo Sanchez", "West Coast", "East Coast", "Cyrus Daboo",),
-        )
-
-
-    def test_groupMembersRecursive(self):
-        """
-        DirectoryPrincipalResource.expandedGroupMembers()
-        """
-        return self._groupMembersTest(
-            DirectoryService.recordType_groups, "recursive1_coasts", None,
-            ("Wilfredo Sanchez", "Recursive2 Coasts", "Cyrus Daboo",),
-        )
-
-
-    def test_groupMembersProxySingleUser(self):
-        """
-        DirectoryPrincipalResource.expandedGroupMembers()
-        """
-        return self._groupMembersTest(
-            DirectoryService.recordType_locations, "gemini", "calendar-proxy-write",
-            ("Wilfredo Sanchez",),
-        )
-
-
-    def test_groupMembersProxySingleGroup(self):
-        """
-        DirectoryPrincipalResource.expandedGroupMembers()
-        """
-        return self._groupMembersTest(
-            DirectoryService.recordType_locations, "mercury", "calendar-proxy-write",
-            ("Chris Lecroy", "David Reid", "Wilfredo Sanchez", "West Coast",),
-        )
-
-
-    def test_groupMembersProxySingleGroupWithNestedGroups(self):
-        """
-        DirectoryPrincipalResource.expandedGroupMembers()
-        """
-        return self._groupMembersTest(
-            DirectoryService.recordType_locations, "apollo", "calendar-proxy-write",
-            ("Chris Lecroy", "David Reid", "Wilfredo Sanchez", "West Coast", "East Coast", "Cyrus Daboo", "Both Coasts",),
-        )
-
-
-    def test_groupMembersProxySingleGroupWithNestedRecursiveGroups(self):
-        """
-        DirectoryPrincipalResource.expandedGroupMembers()
-        """
-        return self._groupMembersTest(
-            DirectoryService.recordType_locations, "orion", "calendar-proxy-write",
-            ("Wilfredo Sanchez", "Cyrus Daboo", "Recursive1 Coasts", "Recursive2 Coasts",),
-        )
-
-
-    def test_groupMembersProxySingleGroupWithNonCalendarGroup(self):
-        """
-        DirectoryPrincipalResource.expandedGroupMembers()
-        """
-        ds = []
-
-        ds.append(self._groupMembersTest(
-            DirectoryService.recordType_resources, "non_calendar_proxy", "calendar-proxy-write",
-            ("Chris Lecroy", "Cyrus Daboo", "Non-calendar group"),
-        ))
-
-        ds.append(self._groupMembershipsTest(
-            DirectoryService.recordType_groups, "non_calendar_group", None,
-            ("non_calendar_proxy#calendar-proxy-write",),
-        ))
-
-        return DeferredList(ds)
-
-
-    def test_groupMembersProxyMissingUser(self):
-        """
-        DirectoryPrincipalResource.expandedGroupMembers()
-        """
-        proxy = self._getPrincipalByShortName(DirectoryService.recordType_users, "cdaboo")
-        proxyGroup = proxy.getChild("calendar-proxy-write")
-
-        def gotMembers(members):
-            members.add("12345")
-            return proxyGroup._index().setGroupMembers("%s#calendar-proxy-write" % (proxy.principalUID(),), members)
-
-        def check(_):
-            return self._groupMembersTest(
-                DirectoryService.recordType_users, "cdaboo", "calendar-proxy-write",
-                (),
-            )
-
-        # Set up the fake entry in the DB
-        d = proxyGroup._index().getMembers("%s#calendar-proxy-write" % (proxy.principalUID(),))
-        d.addCallback(gotMembers)
-        d.addCallback(check)
-        return d
-
-
-    def test_groupMembershipsMissingUser(self):
-        """
-        DirectoryPrincipalResource.expandedGroupMembers()
-        """
-        # Set up the fake entry in the DB
-        fake_uid = "12345"
-        proxy = self._getPrincipalByShortName(DirectoryService.recordType_users, "cdaboo")
-        proxyGroup = proxy.getChild("calendar-proxy-write")
-
-        def gotMembers(members):
-            members.add("%s#calendar-proxy-write" % (proxy.principalUID(),))
-            return proxyGroup._index().setGroupMembers("%s#calendar-proxy-write" % (fake_uid,), members)
-
-        def check(_):
-            return self._groupMembershipsTest(
-                DirectoryService.recordType_users, "cdaboo", "calendar-proxy-write",
-                (),
-            )
-
-        d = proxyGroup._index().getMembers("%s#calendar-proxy-write" % (fake_uid,))
-        d.addCallback(gotMembers)
-        d.addCallback(check)
-        return d
-
-
-    @inlineCallbacks
-    def test_setGroupMemberSet(self):
-        class StubMemberDB(object):
-            def __init__(self):
-                self.members = set()
-
-            def setGroupMembers(self, uid, members):
-                self.members = members
-                return succeed(None)
-
-            def getMembers(self, uid):
-                return succeed(self.members)
-
-        user = self._getPrincipalByShortName(self.directoryService.recordType_users,
-                                           "cdaboo")
-
-        proxyGroup = user.getChild("calendar-proxy-write")
-
-        memberdb = StubMemberDB()
-
-        proxyGroup._index = (lambda: memberdb)
-
-        new_members = davxml.GroupMemberSet(
-            davxml.HRef.fromString(
-                "/XMLDirectoryService/__uids__/8B4288F6-CC82-491D-8EF9-642EF4F3E7D0/"),
-            davxml.HRef.fromString(
-                "/XMLDirectoryService/__uids__/5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1/"))
-
-        yield proxyGroup.setGroupMemberSet(new_members, None)
-
-        self.assertEquals(
-            set([str(p) for p in memberdb.members]),
-            set(["5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1",
-                 "8B4288F6-CC82-491D-8EF9-642EF4F3E7D0"]))
-
-
-    @inlineCallbacks
-    def test_setGroupMemberSetNotifiesPrincipalCaches(self):
-        class StubCacheNotifier(object):
-            changedCount = 0
-            def changed(self):
-                self.changedCount += 1
-                return succeed(None)
-
-        user = self._getPrincipalByShortName(self.directoryService.recordType_users, "cdaboo")
-
-        proxyGroup = user.getChild("calendar-proxy-write")
-
-        notifier = StubCacheNotifier()
-
-        oldCacheNotifier = DirectoryPrincipalResource.cacheNotifierFactory
-
-        try:
-            DirectoryPrincipalResource.cacheNotifierFactory = (lambda _1, _2, **kwargs: notifier)
-
-            self.assertEquals(notifier.changedCount, 0)
-
-            yield proxyGroup.setGroupMemberSet(
-                davxml.GroupMemberSet(
-                    davxml.HRef.fromString(
-                        "/XMLDirectoryService/__uids__/5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1/")),
-                None)
-
-            self.assertEquals(notifier.changedCount, 1)
-        finally:
-            DirectoryPrincipalResource.cacheNotifierFactory = oldCacheNotifier
-
-
-    def test_proxyFor(self):
-
-        return self._proxyForTest(
-            DirectoryService.recordType_users, "wsanchez",
-            ("Mercury Seven", "Gemini Twelve", "Apollo Eleven", "Orion",),
-            True
-        )
-
-
-    @inlineCallbacks
-    def test_proxyForDuplicates(self):
-
-        yield self._addProxy(
-            (DirectoryService.recordType_locations, "gemini",),
-            "calendar-proxy-write",
-            (DirectoryService.recordType_groups, "grunts",),
-        )
-
-        yield self._proxyForTest(
-            DirectoryService.recordType_users, "wsanchez",
-            ("Mercury Seven", "Gemini Twelve", "Apollo Eleven", "Orion",),
-            True
-        )
-
-
-    def test_readOnlyProxyFor(self):
-
-        return self._proxyForTest(
-            DirectoryService.recordType_users, "wsanchez",
-            ("Non-calendar proxy",),
-            False
-        )
-
-
-    @inlineCallbacks
-    def test_UserProxy(self):
-
-        for proxyType in ("calendar-proxy-read", "calendar-proxy-write"):
-
-            yield self._addProxy(
-                (DirectoryService.recordType_users, "wsanchez",),
-                proxyType,
-                (DirectoryService.recordType_users, "cdaboo",),
-            )
-
-            yield self._groupMembersTest(
-                DirectoryService.recordType_users, "wsanchez",
-                proxyType,
-                ("Cyrus Daboo",),
-            )
-
-            yield self._addProxy(
-                (DirectoryService.recordType_users, "wsanchez",),
-                proxyType,
-                (DirectoryService.recordType_users, "lecroy",),
-            )
-
-            yield self._groupMembersTest(
-                DirectoryService.recordType_users, "wsanchez",
-                proxyType,
-                ("Cyrus Daboo", "Chris Lecroy",),
-            )
-
-            yield self._removeProxy(
-                DirectoryService.recordType_users, "wsanchez",
-                proxyType,
-                DirectoryService.recordType_users, "cdaboo",
-            )
-
-            yield self._groupMembersTest(
-                DirectoryService.recordType_users, "wsanchez",
-                proxyType,
-                ("Chris Lecroy",),
-            )
-
-
-    @inlineCallbacks
-    def test_NonAsciiProxy(self):
-        """
-        Ensure that principalURLs containing non-ASCII characters don't cause
-        problems within CalendarUserProxyPrincipalResource
-        """
-
-        recordType = DirectoryService.recordType_users
-        proxyType = "calendar-proxy-read"
-
-        record = self.directoryService.recordWithGUID("320B73A1-46E2-4180-9563-782DFDBE1F63")
-        provisioningResource = self.principalRootResources[self.directoryService.__class__.__name__]
-        principal = provisioningResource.principalForRecord(record)
-        proxyPrincipal = provisioningResource.principalForShortName(recordType,
-            "wsanchez")
-
-        yield self._addProxy(principal, proxyType, proxyPrincipal)
-        memberships = yield proxyPrincipal._calendar_user_proxy_index().getMemberships(proxyPrincipal.principalUID())
-        for uid in memberships:
-            provisioningResource.principalForUID(uid)
-
-
-    @inlineCallbacks
-    def test_getAllMembers(self):
-        """
-        getAllMembers( ) returns the unique set of GUIDs that have been
-        delegated to directly
-        """
-        self.assertEquals(
-            set((yield calendaruserproxy.ProxyDBService.getAllMembers())), #@UndefinedVariable
-            set([
-                u'00599DAF-3E75-42DD-9DB7-52617E79943F',
-                u'6423F94A-6B76-4A3A-815B-D52CFD77935D',
-                u'8A985493-EE2C-4665-94CF-4DFEA3A89500',
-                u'9FF60DAD-0BDE-4508-8C77-15F0CA5C8DD2',
-                u'both_coasts',
-                u'left_coast',
-                u'non_calendar_group',
-                u'recursive1_coasts',
-                u'recursive2_coasts',
-                u'EC465590-E9E9-4746-ACE8-6C756A49FE4D'])
-        )
-
-
-    @inlineCallbacks
-    def test_hideDisabledDelegates(self):
-        """
-        Delegates who are not enabledForLogin are "hidden" from the delegate lists
-        (but groups *are* allowed)
-        """
-
-        record = self.directoryService.recordWithGUID("EC465590-E9E9-4746-ACE8-6C756A49FE4D")
-
-        record.enabledForLogin = True
-        yield self._groupMembersTest(
-            DirectoryService.recordType_users, "delegator", "calendar-proxy-write",
-            ("Occasional Delegate", "Delegate Via Group", "Delegate Group"),
-        )
-
-        # Login disabled -- no longer shown as a delegate
-        record.enabledForLogin = False
-        yield self._groupMembersTest(
-            DirectoryService.recordType_users, "delegator", "calendar-proxy-write",
-            ("Delegate Via Group", "Delegate Group"),
-        )
-
-        # Login re-enabled -- once again a delegate (it wasn't removed from the proxy db)
-        record.enabledForLogin = True
-        yield self._groupMembersTest(
-            DirectoryService.recordType_users, "delegator", "calendar-proxy-write",
-            ("Occasional Delegate", "Delegate Via Group", "Delegate Group"),
-        )

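For reference, a minimal sketch (not part of this changeset) of the same membership assertion the deleted _groupMembersTest helper above performed, rewritten with inlineCallbacks instead of explicit addCallback chaining; the helper and principal method names are taken from the deleted test file.

from twisted.internet.defer import inlineCallbacks

@inlineCallbacks
def _groupMembersTestInline(self, recordType, recordName, subPrincipalName,
                            expectedMembers):
    # Resolve the principal (and optional proxy sub-principal), then compare
    # the expanded member display names against the expected set.
    principal = self._getPrincipalByShortName(recordType, recordName)
    if subPrincipalName is not None:
        principal = principal.getChild(subPrincipalName)
    members = yield principal.expandedGroupMembers()
    self.assertEquals(
        set(p.displayName() for p in members),
        set(expectedMembers),
    )
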
Deleted: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_resources.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_resources.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_resources.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1,80 +0,0 @@
-##
-# Copyright (c) 2005-2014 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-import os
-from twistedcaldav.config import config
-from twistedcaldav.test.util import TestCase
-from calendarserver.tools.util import getDirectory
-
-class ResourcesTestCase(TestCase):
-
-    def setUp(self):
-        super(ResourcesTestCase, self).setUp()
-
-        testRoot = os.path.join(".", os.path.dirname(__file__), "resources")
-
-        xmlFile = os.path.join(testRoot, "users-groups.xml")
-        config.DirectoryService.params.xmlFile = xmlFile
-
-        xmlFile = os.path.join(testRoot, "resources-locations.xml")
-        config.ResourceService.params.xmlFile = xmlFile
-        config.ResourceService.Enabled = True
-
-        xmlFile = os.path.join(testRoot, "augments.xml")
-        config.AugmentService.type = "twistedcaldav.directory.augment.AugmentXMLDB"
-        config.AugmentService.params.xmlFiles = (xmlFile,)
-
-# Uh, what's this testing?
-#    def test_loadConfig(self):
-#        directory = getDirectory()
-
-
-    def test_recordInPrimaryDirectory(self):
-        directory = getDirectory()
-
-        # Look up a user, which comes out of primary directory service
-        record = directory.recordWithUID("user01")
-        self.assertNotEquals(record, None)
-
-
-    def test_recordInSupplementalDirectory(self):
-        directory = getDirectory()
-
-        # Look up a resource, which comes out of locations/resources service
-        record = directory.recordWithUID("resource01")
-        self.assertNotEquals(record, None)
-
-
-    def test_augments(self):
-        directory = getDirectory()
-
-        # Primary directory
-        record = directory.recordWithUID("user01")
-        self.assertEquals(record.enabled, True)
-        self.assertEquals(record.enabledForCalendaring, True)
-        record = directory.recordWithUID("user02")
-        self.assertEquals(record.enabled, False)
-        self.assertEquals(record.enabledForCalendaring, False)
-
-        # Supplemental directory
-        record = directory.recordWithUID("resource01")
-        self.assertEquals(record.enabled, True)
-        self.assertEquals(record.enabledForCalendaring, True)
-        self.assertEquals(record.autoSchedule, True)
-        record = directory.recordWithUID("resource02")
-        self.assertEquals(record.enabled, False)
-        self.assertEquals(record.enabledForCalendaring, False)
-        self.assertEquals(record.autoSchedule, False)

Deleted: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_xmlfile.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_xmlfile.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/test/test_xmlfile.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1,375 +0,0 @@
-##
-# Copyright (c) 2005-2014 Apple Inc. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-##
-
-from twext.python.filepath import CachingFilePath as FilePath
-
-from twistedcaldav.directory import augment
-from twistedcaldav.directory.directory import DirectoryService
-import twistedcaldav.directory.test.util
-from twistedcaldav.directory.xmlfile import XMLDirectoryService
-from twistedcaldav.test.util import TestCase, xmlFile, augmentsFile
-
-# FIXME: Add tests for GUID hooey, once we figure out what that means here
-
-class XMLFileBase(object):
-    """
-    L{XMLFileBase} is a base/mix-in object for testing L{XMLDirectoryService}
-    (or things that depend on L{IDirectoryService} and need a simple
-    implementation to use).
-    """
-    recordTypes = set((
-        DirectoryService.recordType_users,
-        DirectoryService.recordType_groups,
-        DirectoryService.recordType_locations,
-        DirectoryService.recordType_resources,
-        DirectoryService.recordType_addresses,
-    ))
-
-    users = {
-        "admin"      : {"password": "nimda", "guid": "D11F03A0-97EA-48AF-9A6C-FAC7F3975766", "addresses": ()},
-        "wsanchez"   : {"password": "zehcnasw", "guid": "6423F94A-6B76-4A3A-815B-D52CFD77935D", "addresses": ("mailto:wsanchez at example.com",)},
-        "cdaboo"     : {"password": "oobadc", "guid": "5A985493-EE2C-4665-94CF-4DFEA3A89500", "addresses": ("mailto:cdaboo at example.com",)  },
-        "lecroy"     : {"password": "yorcel", "guid": "8B4288F6-CC82-491D-8EF9-642EF4F3E7D0", "addresses": ("mailto:lecroy at example.com",)  },
-        "dreid"      : {"password": "dierd", "guid": "5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1", "addresses": ("mailto:dreid at example.com",)   },
-        "nocalendar" : {"password": "radnelacon", "guid": "543D28BA-F74F-4D5F-9243-B3E3A61171E5", "addresses": ()},
-        "user01"     : {"password": "01user", "guid": None, "addresses": ("mailto:c4ca4238a0 at example.com",)},
-        "user02"     : {"password": "02user", "guid": None, "addresses": ("mailto:c81e728d9d at example.com",)},
-   }
-
-    groups = {
-        "admin"      : {"password": "admin", "guid": None, "addresses": (), "members": ((DirectoryService.recordType_groups, "managers"),)},
-        "managers"   : {"password": "managers", "guid": None, "addresses": (), "members": ((DirectoryService.recordType_users , "lecroy"),)},
-        "grunts"     : {"password": "grunts", "guid": None, "addresses": (), "members": ((DirectoryService.recordType_users , "wsanchez"),
-                                                                                               (DirectoryService.recordType_users , "cdaboo"),
-                                                                                               (DirectoryService.recordType_users , "dreid"))},
-        "right_coast": {"password": "right_coast", "guid": None, "addresses": (), "members": ((DirectoryService.recordType_users , "cdaboo"),)},
-        "left_coast" : {"password": "left_coast", "guid": None, "addresses": (), "members": ((DirectoryService.recordType_users , "wsanchez"),
-                                                                                               (DirectoryService.recordType_users , "dreid"),
-                                                                                               (DirectoryService.recordType_users , "lecroy"))},
-        "both_coasts": {"password": "both_coasts", "guid": None, "addresses": (), "members": ((DirectoryService.recordType_groups, "right_coast"),
-                                                                                               (DirectoryService.recordType_groups, "left_coast"))},
-        "recursive1_coasts": {"password": "recursive1_coasts", "guid": None, "addresses": (), "members": ((DirectoryService.recordType_groups, "recursive2_coasts"),
-                                                                                               (DirectoryService.recordType_users, "wsanchez"))},
-        "recursive2_coasts": {"password": "recursive2_coasts", "guid": None, "addresses": (), "members": ((DirectoryService.recordType_groups, "recursive1_coasts"),
-                                                                                               (DirectoryService.recordType_users, "cdaboo"))},
-        "non_calendar_group": {"password": "non_calendar_group", "guid": None, "addresses": (), "members": ((DirectoryService.recordType_users , "cdaboo"),
-                                                                                               (DirectoryService.recordType_users , "lecroy"))},
-   }
-
-    locations = {
-        "mercury": {"password": "mercury", "guid": None, "addresses": ("mailto:mercury at example.com",)},
-        "gemini" : {"password": "gemini", "guid": None, "addresses": ("mailto:gemini at example.com",)},
-        "apollo" : {"password": "apollo", "guid": None, "addresses": ("mailto:apollo at example.com",)},
-        "orion"  : {"password": "orion", "guid": None, "addresses": ("mailto:orion at example.com",)},
-   }
-
-    resources = {
-        "transporter"        : {"password": "transporter", "guid": None, "addresses": ("mailto:transporter at example.com",)       },
-        "ftlcpu"             : {"password": "ftlcpu", "guid": None, "addresses": ("mailto:ftlcpu at example.com",)            },
-        "non_calendar_proxy" : {"password": "non_calendar_proxy", "guid": "non_calendar_proxy", "addresses": ("mailto:non_calendar_proxy at example.com",)},
-   }
-
-
-    def xmlFile(self):
-        """
-        Create a L{FilePath} that points to a temporary file containing a copy
-        of C{twistedcaldav/directory/test/accounts.xml}.
-
-        @see: L{xmlFile}
-
-        @rtype: L{FilePath}
-        """
-        if not hasattr(self, "_xmlFile"):
-            self._xmlFile = FilePath(self.mktemp())
-            xmlFile.copyTo(self._xmlFile)
-        return self._xmlFile
-
-
-    def augmentsFile(self):
-        """
-        Create a L{FilePath} that points to a temporary file containing a copy
-        of C{twistedcaldav/directory/test/augments.xml}.
-
-        @see: L{augmentsFile}
-
-        @rtype: L{FilePath}
-        """
-        if not hasattr(self, "_augmentsFile"):
-            self._augmentsFile = FilePath(self.mktemp())
-            augmentsFile.copyTo(self._augmentsFile)
-        return self._augmentsFile
-
-
-    def service(self):
-        """
-        Create an L{XMLDirectoryService} based on the contents of the paths
-        returned by L{XMLFileBase.augmentsFile} and L{XMLFileBase.xmlFile}.
-
-        @rtype: L{XMLDirectoryService}
-        """
-        return XMLDirectoryService(
-            {
-                'xmlFile': self.xmlFile(),
-                'augmentService':
-                    augment.AugmentXMLDB(xmlFiles=(self.augmentsFile().path,)),
-            },
-            alwaysStat=True
-        )
-
-
-
-class XMLFile (
-    XMLFileBase,
-    twistedcaldav.directory.test.util.BasicTestCase,
-    twistedcaldav.directory.test.util.DigestTestCase
-):
-    """
-    Test XML file based directory implementation.
-    """
-
-    def test_changedXML(self):
-        service = self.service()
-
-        self.xmlFile().open("w").write(
-"""<?xml version="1.0" encoding="utf-8"?>
-<!DOCTYPE accounts SYSTEM "accounts.dtd">
-<accounts realm="Test Realm">
-  <user>
-    <uid>admin</uid>
-    <guid>admin</guid>
-    <password>nimda</password>
-    <name>Super User</name>
-  </user>
-</accounts>
-"""
-        )
-        for recordType, expectedRecords in (
-            (DirectoryService.recordType_users     , ("admin",)),
-            (DirectoryService.recordType_groups    , ()),
-            (DirectoryService.recordType_locations , ()),
-            (DirectoryService.recordType_resources , ()),
-        ):
-            # Fault records in
-            for name in expectedRecords:
-                service.recordWithShortName(recordType, name)
-
-            self.assertEquals(
-                set(r.shortNames[0] for r in service.listRecords(recordType)),
-                set(expectedRecords)
-            )
-
-
-    def test_okAutoSchedule(self):
-        service = self.service()
-
-        self.xmlFile().open("w").write(
-"""<?xml version="1.0" encoding="utf-8"?>
-<!DOCTYPE accounts SYSTEM "accounts.dtd">
-<accounts realm="Test Realm">
-  <location>
-    <uid>my office</uid>
-    <guid>myoffice</guid>
-    <password>nimda</password>
-    <name>Super User</name>
-  </location>
-</accounts>
-"""
-        )
-        self.augmentsFile().open("w").write(
-"""<?xml version="1.0" encoding="utf-8"?>
-<!DOCTYPE accounts SYSTEM "accounts.dtd">
-<augments>
-  <record>
-    <uid>myoffice</uid>
-    <enable>true</enable>
-    <enable-calendar>true</enable-calendar>
-    <auto-schedule>true</auto-schedule>
-  </record>
-</augments>
-"""
-        )
-        service.augmentService.refresh()
-
-        for recordType, expectedRecords in (
-            (DirectoryService.recordType_users     , ()),
-            (DirectoryService.recordType_groups    , ()),
-            (DirectoryService.recordType_locations , ("my office",)),
-            (DirectoryService.recordType_resources , ()),
-        ):
-            # Fault records in
-            for name in expectedRecords:
-                service.recordWithShortName(recordType, name)
-
-            self.assertEquals(
-                set(r.shortNames[0] for r in service.listRecords(recordType)),
-                set(expectedRecords)
-            )
-        self.assertTrue(service.recordWithShortName(DirectoryService.recordType_locations, "my office").autoSchedule)
-
-
-    def test_okDisableCalendar(self):
-        service = self.service()
-
-        self.xmlFile().open("w").write(
-"""<?xml version="1.0" encoding="utf-8"?>
-<!DOCTYPE accounts SYSTEM "accounts.dtd">
-<accounts realm="Test Realm">
-  <group>
-    <uid>enabled</uid>
-    <guid>enabled</guid>
-    <password>enabled</password>
-    <name>Enabled</name>
-  </group>
-  <group>
-    <uid>disabled</uid>
-    <guid>disabled</guid>
-    <password>disabled</password>
-    <name>Disabled</name>
-  </group>
-</accounts>
-"""
-        )
-
-        for recordType, expectedRecords in (
-            (DirectoryService.recordType_users     , ()),
-            (DirectoryService.recordType_groups    , ("enabled", "disabled")),
-            (DirectoryService.recordType_locations , ()),
-            (DirectoryService.recordType_resources , ()),
-        ):
-            # Fault records in
-            for name in expectedRecords:
-                service.recordWithShortName(recordType, name)
-
-            self.assertEquals(
-                set(r.shortNames[0] for r in service.listRecords(recordType)),
-                set(expectedRecords)
-            )
-
-        # All groups are disabled
-        self.assertFalse(service.recordWithShortName(DirectoryService.recordType_groups, "enabled").enabledForCalendaring)
-        self.assertFalse(service.recordWithShortName(DirectoryService.recordType_groups, "disabled").enabledForCalendaring)
-
-
-    def test_readExtras(self):
-        service = self.service()
-
-        self.xmlFile().open("w").write(
-"""<?xml version="1.0" encoding="utf-8"?>
-<!DOCTYPE accounts SYSTEM "accounts.dtd">
-<accounts realm="Test Realm">
-  <location>
-    <uid>my office</uid>
-    <guid>myoffice</guid>
-    <name>My Office</name>
-    <extras>
-        <comment>This is the comment</comment>
-        <capacity>40</capacity>
-    </extras>
-  </location>
-</accounts>
-"""
-        )
-
-        record = service.recordWithShortName(
-            DirectoryService.recordType_locations, "my office")
-        self.assertEquals(record.guid, "myoffice")
-        self.assertEquals(record.extras["comment"], "This is the comment")
-        self.assertEquals(record.extras["capacity"], "40")
-
-
-    def test_writeExtras(self):
-        service = self.service()
-
-        service.createRecord(DirectoryService.recordType_locations, "newguid",
-            shortNames=("New office",),
-            fullName="My New Office",
-            address="1 Infinite Loop, Cupertino, CA",
-            capacity="10",
-            comment="Test comment",
-        )
-
-        record = service.recordWithShortName(
-            DirectoryService.recordType_locations, "New office")
-        self.assertEquals(record.extras["comment"], "Test comment")
-        self.assertEquals(record.extras["capacity"], "10")
-
-        service.updateRecord(DirectoryService.recordType_locations, "newguid",
-            shortNames=("New office",),
-            fullName="My Newer Office",
-            address="2 Infinite Loop, Cupertino, CA",
-            capacity="20",
-            comment="Test comment updated",
-        )
-
-        record = service.recordWithShortName(
-            DirectoryService.recordType_locations, "New office")
-        self.assertEquals(record.fullName, "My Newer Office")
-        self.assertEquals(record.extras["address"], "2 Infinite Loop, Cupertino, CA")
-        self.assertEquals(record.extras["comment"], "Test comment updated")
-        self.assertEquals(record.extras["capacity"], "20")
-
-        service.destroyRecord(DirectoryService.recordType_locations, "newguid")
-
-        record = service.recordWithShortName(
-            DirectoryService.recordType_locations, "New office")
-        self.assertEquals(record, None)
-
-
-    def test_indexing(self):
-        service = self.service()
-        self.assertNotEquals(None, service._lookupInIndex(service.recordType_users, service.INDEX_TYPE_SHORTNAME, "usera"))
-        self.assertNotEquals(None, service._lookupInIndex(service.recordType_users, service.INDEX_TYPE_CUA, "mailto:wsanchez at example.com"))
-        self.assertNotEquals(None, service._lookupInIndex(service.recordType_users, service.INDEX_TYPE_GUID, "9FF60DAD-0BDE-4508-8C77-15F0CA5C8DD2"))
-        self.assertNotEquals(None, service._lookupInIndex(service.recordType_locations, service.INDEX_TYPE_SHORTNAME, "orion"))
-        self.assertEquals(None, service._lookupInIndex(service.recordType_users, service.INDEX_TYPE_CUA, "mailto:nobody at example.com"))
-
-
-    def test_repeat(self):
-        service = self.service()
-        record = service.recordWithShortName(
-            DirectoryService.recordType_users, "user01")
-        self.assertEquals(record.fullName, "c4ca4238a0b923820dcc509a6f75849bc4c User 01")
-        self.assertEquals(record.firstName, "c4ca4")
-        self.assertEquals(record.lastName, "c4ca4238a User 01")
-        self.assertEquals(record.emailAddresses, set(['c4ca4238a0 at example.com']))
-
-
-
-class XMLFileSubset (XMLFileBase, TestCase):
-    """
-    Test the recordTypes subset feature of XMLFile service.
-    """
-    recordTypes = set((
-        DirectoryService.recordType_users,
-        DirectoryService.recordType_groups,
-    ))
-
-
-    def test_recordTypesSubset(self):
-        directory = XMLDirectoryService(
-            {
-                'xmlFile' : self.xmlFile(),
-                'augmentService' :
-                    augment.AugmentXMLDB(xmlFiles=(self.augmentsFile().path,)),
-                'recordTypes' :
-                    (
-                        DirectoryService.recordType_users,
-                        DirectoryService.recordType_groups
-                    ),
-            },
-            alwaysStat=True
-        )
-        self.assertEquals(set(("users", "groups")), set(directory.recordTypes()))

Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/util.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/util.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/util.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -34,7 +34,10 @@
 from twisted.internet.defer import inlineCallbacks, returnValue
 from txdav.xml import element as davxml
 from uuid import UUID, uuid5
+from twisted.python.failure import Failure
+from twisted.web.template import tags
 
+
 log = Logger()
 
 def uuidFromName(namespace, name):
@@ -148,3 +151,76 @@
         else:
             response = StatusResponse(responsecode.NOT_FOUND, "Resource not found")
             returnValue(response)
+
+
+
+
+def formatLink(url):
+    """
+    Convert a URL string into some twisted.web.template DOM objects for
+    rendering as a link to itself.
+    """
+    return tags.a(href=url)(url)
+
+
+
+def formatLinks(urls):
+    """
+    Format a list of URL strings as a list of twisted.web.template DOM links.
+    """
+    return formatList(formatLink(link) for link in urls)
+
+
+def formatPrincipals(principals):
+    """
+    Format a list of principals into some twisted.web.template DOM objects.
+    """
+    def recordKey(principal):
+        try:
+            record = principal.record
+        except AttributeError:
+            try:
+                record = principal.parent.record
+            except:
+                return None
+        return (record.recordType, record.shortNames[0])
+
+
+    def describe(principal):
+        if hasattr(principal, "record"):
+            return " - %s" % (principal.record.displayName,)
+        else:
+            return ""
+
+    return formatList(
+        tags.a(href=principal.principalURL())(
+            str(principal), describe(principal)
+        )
+        for principal in sorted(principals, key=recordKey)
+    )
+
+
+
+def formatList(iterable):
+    """
+    Format an iterable of items for rendering as a list.
+    """
+    thereAreAny = False
+    try:
+        item = None
+        for item in iterable:
+            thereAreAny = True
+            yield " -> "
+            if item is None:
+                yield "None"
+            else:
+                yield item
+            yield "\n"
+    except Exception, e:
+        log.error("Exception while rendering: %s" % (e,))
+        Failure().printTraceback()
+        yield "  ** %s **: %s\n" % (e.__class__.__name__, e)
+    if not thereAreAny:
+        yield " '()\n"
+
+

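As a usage note for the template helpers added to twistedcaldav/directory/util.py above: a minimal sketch, not from the changeset, assuming a caller that is assembling a twisted.web.template page; the URLs below are placeholders.

from twisted.web.template import tags

from twistedcaldav.directory.util import formatLink, formatLinks

# Placeholder principal URLs, for illustration only.
urls = [
    "/principals/users/wsanchez/",
    "/principals/users/cdaboo/",
]

# formatLink() wraps a single URL in an <a> element pointing at itself.
anchor = formatLink(urls[0])

# formatLinks()/formatList() are generators yielding strings and tags; they
# can be materialized and handed to a template tag as child content.
listing = tags.pre(list(formatLinks(urls)))
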
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/wiki.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/wiki.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/directory/wiki.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -19,32 +19,32 @@
 as other principals.
 """
 
-__all__ = [
-    "WikiDirectoryService",
-]
 
+from twisted.internet.defer import inlineCallbacks, returnValue, succeed
+from twistedcaldav.config import config
+from twisted.web.xmlrpc import Proxy, Fault
 from calendarserver.platform.darwin.wiki import accessForUserToWiki
+from twext.python.log import Logger
 
 from twext.internet.gaiendpoint import MultiFailure
-from twext.python.log import Logger
 from txweb2 import responsecode
-from txweb2.auth.wrapper import UnauthorizedResponse
-from txweb2.dav.resource import TwistedACLInheritable
+# from txweb2.auth.wrapper import UnauthorizedResponse
+# from txweb2.dav.resource import TwistedACLInheritable
 from txweb2.http import HTTPError, StatusResponse
 
-from twisted.internet.defer import inlineCallbacks, returnValue
 from twisted.web.error import Error as WebError
-from twisted.web.xmlrpc import Proxy, Fault
 
-from twistedcaldav.config import config
-from twistedcaldav.directory.directory import DirectoryService, \
-    DirectoryRecord, UnknownRecordTypeError
+# from twistedcaldav.directory.directory import DirectoryService, \
+#     DirectoryRecord, UnknownRecordTypeError
 
-from txdav.xml import element as davxml
+# from txdav.xml import element as davxml
 
 log = Logger()
 
-class WikiDirectoryService(DirectoryService):
+# class WikiDirectoryService(DirectoryService):
+
+
+class WikiDirectoryService(object):
     """
     L{IDirectoryService} implementation for Wikis.
     """
@@ -57,81 +57,81 @@
     UIDPrefix = "wiki-"
 
 
-    def __repr__(self):
-        return "<%s %r>" % (self.__class__.__name__, self.realmName)
+#     def __repr__(self):
+#         return "<%s %r>" % (self.__class__.__name__, self.realmName)
 
 
-    def __init__(self):
-        super(WikiDirectoryService, self).__init__()
-        self.byUID = {}
-        self.byShortName = {}
+#     def __init__(self):
+#         super(WikiDirectoryService, self).__init__()
+#         self.byUID = {}
+#         self.byShortName = {}
 
 
-    def recordTypes(self):
-        return (WikiDirectoryService.recordType_wikis,)
+#     def recordTypes(self):
+#         return (WikiDirectoryService.recordType_wikis,)
 
 
-    def listRecords(self, recordType):
-        return ()
+#     def listRecords(self, recordType):
+#         return ()
 
 
-    def recordWithShortName(self, recordType, shortName):
-        if recordType != WikiDirectoryService.recordType_wikis:
-            raise UnknownRecordTypeError(recordType)
+#     def recordWithShortName(self, recordType, shortName):
+#         if recordType != WikiDirectoryService.recordType_wikis:
+#             raise UnknownRecordTypeError(recordType)
 
-        if shortName in self.byShortName:
-            record = self.byShortName[shortName]
-            return record
+#         if shortName in self.byShortName:
+#             record = self.byShortName[shortName]
+#             return record
 
-        record = self._addRecord(shortName)
-        return record
+#         record = self._addRecord(shortName)
+#         return record
 
 
-    def recordWithUID(self, uid):
+#     def recordWithUID(self, uid):
 
-        if uid in self.byUID:
-            record = self.byUID[uid]
-            return record
+#         if uid in self.byUID:
+#             record = self.byUID[uid]
+#             return record
 
-        if uid.startswith(self.UIDPrefix):
-            shortName = uid[len(self.UIDPrefix):]
-            record = self._addRecord(shortName)
-            return record
-        else:
-            return None
+#         if uid.startswith(self.UIDPrefix):
+#             shortName = uid[len(self.UIDPrefix):]
+#             record = self._addRecord(shortName)
+#             return record
+#         else:
+#             return None
 
 
-    def _addRecord(self, shortName):
+#     def _addRecord(self, shortName):
 
-        record = WikiDirectoryRecord(
-            self,
-            WikiDirectoryService.recordType_wikis,
-            shortName,
-            None
-        )
-        self.byUID[record.uid] = record
-        self.byShortName[shortName] = record
-        return record
+#         record = WikiDirectoryRecord(
+#             self,
+#             WikiDirectoryService.recordType_wikis,
+#             shortName,
+#             None
+#         )
+#         self.byUID[record.uid] = record
+#         self.byShortName[shortName] = record
+#         return record
 
 
 
-class WikiDirectoryRecord(DirectoryRecord):
-    """
-    L{DirectoryRecord} implementation for Wikis.
-    """
+# class WikiDirectoryRecord(DirectoryRecord):
+#     """
+#     L{DirectoryRecord} implementation for Wikis.
+#     """
 
-    def __init__(self, service, recordType, shortName, entry):
-        super(WikiDirectoryRecord, self).__init__(
-            service=service,
-            recordType=recordType,
-            guid=None,
-            shortNames=(shortName,),
-            fullName=shortName,
-            enabledForCalendaring=True,
-            uid="%s%s" % (WikiDirectoryService.UIDPrefix, shortName),
-        )
-        # Wiki enabling doesn't come from augments db, so enable here...
-        self.enabled = True
+#     def __init__(self, service, recordType, shortName, entry):
+#         super(WikiDirectoryRecord, self).__init__(
+#             service=service,
+#             recordType=recordType,
+#             guid=None,
+#             shortNames=(shortName,),
+#             fullName=shortName,
+#             enabledForCalendaring=True,
+#             uid="%s%s" % (WikiDirectoryService.UIDPrefix, shortName),
+#         )
+#         # Wiki enabling doesn't come from augments db, so enable here...
+#         self.enabled = True
 
 
 
@@ -250,118 +250,120 @@
 
 
 
- at inlineCallbacks
 def getWikiACL(resource, request):
-    """
-    Ask the wiki server we're paired with what level of access the authnUser has.
+    return succeed(None)
+# @inlineCallbacks
+# def getWikiACL(resource, request):
+#     """
+#     Ask the wiki server we're paired with what level of access the authnUser has.
 
-    Returns an ACL.
+#     Returns an ACL.
 
-    Wiki authentication is a bit tricky because the end-user accessing a group
-    calendar may not actually be enabled for calendaring.  Therefore in that
-    situation, the authzUser will have been replaced with the wiki principal
-    in locateChild( ), so that any changes the user makes will have the wiki
-    as the originator.  The authnUser will always be the end-user.
-    """
-    from twistedcaldav.directory.principal import DirectoryPrincipalResource
+#     Wiki authentication is a bit tricky because the end-user accessing a group
+#     calendar may not actually be enabled for calendaring.  Therefore in that
+#     situation, the authzUser will have been replaced with the wiki principal
+#     in locateChild( ), so that any changes the user makes will have the wiki
+#     as the originator.  The authnUser will always be the end-user.
+#     """
+#     from twistedcaldav.directory.principal import DirectoryPrincipalResource
 
-    if (not hasattr(resource, "record") or
-        resource.record.recordType != WikiDirectoryService.recordType_wikis):
-        returnValue(None)
+#     if (not hasattr(resource, "record") or
+#         resource.record.recordType != WikiDirectoryService.recordType_wikis):
+#         returnValue(None)
 
-    if hasattr(request, 'wikiACL'):
-        returnValue(request.wikiACL)
+#     if hasattr(request, 'wikiACL'):
+#         returnValue(request.wikiACL)
 
-    userID = "unauthenticated"
-    wikiID = resource.record.shortNames[0]
+#     userID = "unauthenticated"
+#     wikiID = resource.record.shortNames[0]
 
-    try:
-        url = str(request.authnUser.children[0])
-        principal = (yield request.locateResource(url))
-        if isinstance(principal, DirectoryPrincipalResource):
-            userID = principal.record.guid
-    except:
-        # TODO: better error handling
-        pass
+#     try:
+#         url = str(request.authnUser.children[0])
+#         principal = (yield request.locateResource(url))
+#         if isinstance(principal, DirectoryPrincipalResource):
+#             userID = principal.record.guid
+#     except:
+#         # TODO: better error handling
+#         pass
 
-    try:
-        access = (yield getWikiAccess(userID, wikiID))
+#     try:
+#         access = (yield getWikiAccess(userID, wikiID))
 
-        # The ACL we return has ACEs for the end-user and the wiki principal
-        # in case authzUser is the wiki principal.
-        if access == "read":
-            request.wikiACL = davxml.ACL(
-                davxml.ACE(
-                    request.authnUser,
-                    davxml.Grant(
-                        davxml.Privilege(davxml.Read()),
-                        davxml.Privilege(davxml.ReadCurrentUserPrivilegeSet()),
+#         # The ACL we return has ACEs for the end-user and the wiki principal
+#         # in case authzUser is the wiki principal.
+#         if access == "read":
+#             request.wikiACL = davxml.ACL(
+#                 davxml.ACE(
+#                     request.authnUser,
+#                     davxml.Grant(
+#                         davxml.Privilege(davxml.Read()),
+#                         davxml.Privilege(davxml.ReadCurrentUserPrivilegeSet()),
 
-                        # We allow write-properties so that direct sharees can change
-                        # e.g. calendar color properties
-                        davxml.Privilege(davxml.WriteProperties()),
-                    ),
-                    TwistedACLInheritable(),
-                ),
-                davxml.ACE(
-                    davxml.Principal(
-                        davxml.HRef.fromString("/principals/wikis/%s/" % (wikiID,))
-                    ),
-                    davxml.Grant(
-                        davxml.Privilege(davxml.Read()),
-                        davxml.Privilege(davxml.ReadCurrentUserPrivilegeSet()),
-                    ),
-                    TwistedACLInheritable(),
-                )
-            )
-            returnValue(request.wikiACL)
+#                         # We allow write-properties so that direct sharees can change
+#                         # e.g. calendar color properties
+#                         davxml.Privilege(davxml.WriteProperties()),
+#                     ),
+#                     TwistedACLInheritable(),
+#                 ),
+#                 davxml.ACE(
+#                     davxml.Principal(
+#                         davxml.HRef.fromString("/principals/wikis/%s/" % (wikiID,))
+#                     ),
+#                     davxml.Grant(
+#                         davxml.Privilege(davxml.Read()),
+#                         davxml.Privilege(davxml.ReadCurrentUserPrivilegeSet()),
+#                     ),
+#                     TwistedACLInheritable(),
+#                 )
+#             )
+#             returnValue(request.wikiACL)
 
-        elif access in ("write", "admin"):
-            request.wikiACL = davxml.ACL(
-                davxml.ACE(
-                    request.authnUser,
-                    davxml.Grant(
-                        davxml.Privilege(davxml.Read()),
-                        davxml.Privilege(davxml.ReadCurrentUserPrivilegeSet()),
-                        davxml.Privilege(davxml.Write()),
-                    ),
-                    TwistedACLInheritable(),
-                ),
-                davxml.ACE(
-                    davxml.Principal(
-                        davxml.HRef.fromString("/principals/wikis/%s/" % (wikiID,))
-                    ),
-                    davxml.Grant(
-                        davxml.Privilege(davxml.Read()),
-                        davxml.Privilege(davxml.ReadCurrentUserPrivilegeSet()),
-                        davxml.Privilege(davxml.Write()),
-                    ),
-                    TwistedACLInheritable(),
-                )
-            )
-            returnValue(request.wikiACL)
+#         elif access in ("write", "admin"):
+#             request.wikiACL = davxml.ACL(
+#                 davxml.ACE(
+#                     request.authnUser,
+#                     davxml.Grant(
+#                         davxml.Privilege(davxml.Read()),
+#                         davxml.Privilege(davxml.ReadCurrentUserPrivilegeSet()),
+#                         davxml.Privilege(davxml.Write()),
+#                     ),
+#                     TwistedACLInheritable(),
+#                 ),
+#                 davxml.ACE(
+#                     davxml.Principal(
+#                         davxml.HRef.fromString("/principals/wikis/%s/" % (wikiID,))
+#                     ),
+#                     davxml.Grant(
+#                         davxml.Privilege(davxml.Read()),
+#                         davxml.Privilege(davxml.ReadCurrentUserPrivilegeSet()),
+#                         davxml.Privilege(davxml.Write()),
+#                     ),
+#                     TwistedACLInheritable(),
+#                 )
+#             )
+#             returnValue(request.wikiACL)
 
-        else: # "no-access":
+#         else: # "no-access":
 
-            if userID == "unauthenticated":
-                # Return a 401 so they have an opportunity to log in
-                response = (yield UnauthorizedResponse.makeResponse(
-                    request.credentialFactories,
-                    request.remoteAddr,
-                ))
-                raise HTTPError(response)
+#             if userID == "unauthenticated":
+#                 # Return a 401 so they have an opportunity to log in
+#                 response = (yield UnauthorizedResponse.makeResponse(
+#                     request.credentialFactories,
+#                     request.remoteAddr,
+#                 ))
+#                 raise HTTPError(response)
 
-            raise HTTPError(
-                StatusResponse(
-                    responsecode.FORBIDDEN,
-                    "You are not allowed to access this wiki"
-                )
-            )
+#             raise HTTPError(
+#                 StatusResponse(
+#                     responsecode.FORBIDDEN,
+#                     "You are not allowed to access this wiki"
+#                 )
+#             )
 
-    except HTTPError:
-        # pass through the HTTPError we might have raised above
-        raise
+#     except HTTPError:
+#         # pass through the HTTPError we might have raised above
+#         raise
 
-    except Exception, e:
-        log.error("Wiki ACL lookup failed: %s" % (e,))
-        raise HTTPError(StatusResponse(responsecode.SERVICE_UNAVAILABLE, "Wiki ACL lookup failed"))
+#     except Exception, e:
+#         log.error("Wiki ACL lookup failed: %s" % (e,))
+#         raise HTTPError(StatusResponse(responsecode.SERVICE_UNAVAILABLE, "Wiki ACL lookup failed"))

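Since getWikiACL() is now stubbed to fire with None, here is a hedged sketch (not part of the changeset) of how a caller might consume it while the twext.who port is in progress; the accessControlListWithWiki wrapper and the defaultACL argument are hypothetical.

from twisted.internet.defer import inlineCallbacks, returnValue

from twistedcaldav.directory.wiki import getWikiACL

@inlineCallbacks
def accessControlListWithWiki(resource, request, defaultACL):
    # getWikiACL() currently always fires with None; treat that as "no
    # wiki-specific ACL" and fall back to the resource's normal ACL.
    wikiACL = yield getWikiACL(resource, request)
    if wikiACL is not None:
        returnValue(wikiACL)
    returnValue(defaultACL)
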
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/extensions.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/extensions.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/extensions.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -66,8 +66,8 @@
 from twistedcaldav.method.report import http_REPORT
 
 from twistedcaldav.config import config
+from twext.who.expression import Operand, MatchType, MatchFlags
 
-
 thisModule = getModule(__name__)
 
 log = Logger()
@@ -95,7 +95,7 @@
             msg = "Bad XML: unknown value for test attribute: %s" % (testMode,)
             log.warn(msg)
             raise HTTPError(StatusResponse(responsecode.BAD_REQUEST, msg))
-        operand = "and" if testMode == "allof" else "or"
+        operand = Operand.AND if testMode == "allof" else Operand.OR
 
         # Are we narrowing results down to a single CUTYPE?
         cuType = principal_property_search.attributes.get("type", None)
@@ -144,10 +144,18 @@
                     log.warn(msg)
                     raise HTTPError(StatusResponse(responsecode.BAD_REQUEST, msg))
 
+                # Convert to twext.who.expression form
+                matchType = {
+                    "starts-with": MatchType.startsWith,
+                    "contains": MatchType.contains,
+                    "equals": MatchType.equals
+                }.get(matchType)
+                matchFlags = MatchFlags.caseInsensitive if caseless else MatchFlags.none
+
                 # Ignore any query strings under three letters
-                matchText = str(match)
+                matchText = match.toString()  # gives us unicode
                 if len(matchText) >= 3:
-                    propertySearches.append((props.children, matchText, caseless, matchType))
+                    propertySearches.append((props.children, matchText, matchFlags, matchType))
 
             elif child.qname() == (calendarserver_namespace, "limit"):
                 try:
@@ -182,7 +190,7 @@
         # See if we can take advantage of the directory
         fields = []
         nonDirectorySearches = []
-        for props, match, caseless, matchType in propertySearches:
+        for props, match, matchFlags, matchType in propertySearches:
             nonDirectoryProps = []
             for prop in props:
                 try:
@@ -191,12 +199,12 @@
                 except ValueError, e:
                     raise HTTPError(StatusResponse(responsecode.BAD_REQUEST, str(e)))
                 if fieldName:
-                    fields.append((fieldName, match, caseless, matchType))
+                    fields.append((fieldName, match, matchFlags, matchType))
                 else:
                     nonDirectoryProps.append(prop)
             if nonDirectoryProps:
                 nonDirectorySearches.append((nonDirectoryProps, match,
-                    caseless, matchType))
+                    matchFlags, matchType))
 
         matchingResources = []
         matchcount = 0
@@ -208,7 +216,7 @@
                 operand=operand, cuType=cuType))
 
             for record in records:
-                resource = principalCollection.principalForRecord(record)
+                resource = yield principalCollection.principalForRecord(record)
                 if resource:
                     matchingResources.append(resource)
 

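To summarize the string-to-enum translation the modified principal-property-search code in extensions.py now performs, a small sketch (not part of the changeset) using only the twext.who.expression names imported in the diff; the translateSearchTokens helper itself is hypothetical.

from twext.who.expression import Operand, MatchType, MatchFlags

def translateSearchTokens(testMode, matchType, caseless):
    # "allof"/"anyof" test modes become Operand values; the DAV match types
    # map onto MatchType members; case-insensitivity becomes a MatchFlags flag.
    operand = Operand.AND if testMode == "allof" else Operand.OR
    matchType = {
        "starts-with": MatchType.startsWith,
        "contains": MatchType.contains,
        "equals": MatchType.equals,
    }.get(matchType)
    flags = MatchFlags.caseInsensitive if caseless else MatchFlags.none
    return operand, matchType, flags
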
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/stdconfig.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/stdconfig.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/stdconfig.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1016,8 +1016,6 @@
         "Enabled": True,
         "MemcachedPool" : "Default",
         "UpdateSeconds" : 300,
-        "ExpireSeconds" : 86400,
-        "LockSeconds"   : 600,
         "EnableUpdater" : True,
         "UseExternalProxies" : False,
     },

Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/storebridge.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/storebridge.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/storebridge.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -1989,7 +1989,7 @@
                 # Access level comes from what the wiki has granted to the
                 # sharee
                 sharee = self.principalForUID(shareeUID)
-                userID = sharee.record.guid
+                userID = sharee.record.uid
                 wikiID = owner.record.shortNames[0]
                 access = (yield getWikiAccess(userID, wikiID))
                 if access == "read":
@@ -2866,7 +2866,7 @@
                 principalURL = str(authz_principal)
                 if principalURL:
                     authz = (yield request.locateResource(principalURL))
-                    self._parentResource._newStoreObject._txn._authz_uid = authz.record.guid
+                    self._parentResource._newStoreObject._txn._authz_uid = authz.record.uid
 
             try:
                 response = (yield self.storeComponent(component, smart_merge=schedule_tag_match))
@@ -3587,7 +3587,7 @@
                 principalURL = str(authz_principal)
                 if principalURL:
                     authz = (yield request.locateResource(principalURL))
-                    self._parentResource._newStoreObject._txn._authz_uid = authz.record.guid
+                    self._parentResource._newStoreObject._txn._authz_uid = authz.record.uid
 
             try:
                 response = (yield self.storeComponent(component))

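The storebridge edits above are part of the broader GUID-to-UID switch: both the transaction's authorization identifier and the wiki access lookup are now keyed off the twext.who record uid. Roughly, as a sketch stitched together from the hunks above (no new behaviour implied):

    # Authorization uid recorded on the store transaction:
    authz = (yield request.locateResource(principalURL))
    self._parentResource._newStoreObject._txn._authz_uid = authz.record.uid

    # Wiki access for a sharee, looked up by uid rather than guid:
    sharee = self.principalForUID(shareeUID)
    access = (yield getWikiAccess(sharee.record.uid, owner.record.shortNames[0]))
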
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_addressbookmultiget.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_addressbookmultiget.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_addressbookmultiget.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -31,7 +31,10 @@
 from twisted.internet.defer import inlineCallbacks, returnValue
 
 from txdav.xml import element as davxml
+from twext.who.idirectory import RecordType
 
+
+
 class AddressBookMultiget (StoreTestCase):
     """
     addressbook-multiget REPORT
@@ -39,6 +42,13 @@
     data_dir = os.path.join(os.path.dirname(__file__), "data")
     vcards_dir = os.path.join(data_dir, "vCards")
 
+
+    @inlineCallbacks
+    def setUp(self):
+        yield StoreTestCase.setUp(self)
+        self.authRecord = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
+
+
     def test_multiget_some_vcards(self):
         """
         All vcards.
@@ -207,7 +217,7 @@
 </D:set>
 </D:mkcol>
 """
-            response = yield self.send(SimpleStoreRequest(self, "MKCOL", addressbook_uri, content=mkcol, authid="wsanchez"))
+            response = yield self.send(SimpleStoreRequest(self, "MKCOL", addressbook_uri, content=mkcol, authRecord=self.authRecord))
 
             response = IResponse(response)
 
@@ -221,7 +231,7 @@
                         "PUT",
                         joinURL(addressbook_uri, filename + ".vcf"),
                         headers=Headers({"content-type": MimeType.fromString("text/vcard")}),
-                        authid="wsanchez"
+                        authRecord=self.authRecord
                     )
                     request.stream = MemoryStream(icaldata)
                     yield self.send(request)
@@ -235,12 +245,12 @@
                         "PUT",
                         joinURL(addressbook_uri, child.basename()),
                         headers=Headers({"content-type": MimeType.fromString("text/vcard")}),
-                        authid="wsanchez"
+                        authRecord=self.authRecord
                     )
                     request.stream = MemoryStream(child.getContent())
                     yield self.send(request)
 
-        request = SimpleStoreRequest(self, "REPORT", addressbook_uri, authid="wsanchez")
+        request = SimpleStoreRequest(self, "REPORT", addressbook_uri, authRecord=self.authRecord)
         request.stream = MemoryStream(query.toxml())
         response = yield self.send(request)
 

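The test changes above establish the authentication pattern used throughout this changeset: resolve the directory record once in setUp() and hand it to every SimpleStoreRequest. A condensed sketch (the class name, test body, and GET target are illustrative only):

    from twext.who.idirectory import RecordType
    from twisted.internet.defer import inlineCallbacks
    from twistedcaldav.test.util import SimpleStoreRequest, StoreTestCase

    class ExampleSuite(StoreTestCase):

        @inlineCallbacks
        def setUp(self):
            yield StoreTestCase.setUp(self)
            # recordWithShortName() is asynchronous and takes a RecordType
            # constant plus a unicode short name.
            self.authRecord = yield self.directory.recordWithShortName(
                RecordType.user, u"wsanchez"
            )

        @inlineCallbacks
        def test_home(self):
            request = SimpleStoreRequest(
                self, "GET", "/addressbooks/users/wsanchez/",
                authRecord=self.authRecord,
            )
            response = yield self.send(request)
            self.assertNotEqual(response, None)
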
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_addressbookquery.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_addressbookquery.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_addressbookquery.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -27,7 +27,10 @@
 from twistedcaldav.test.util import StoreTestCase, SimpleStoreRequest
 from twisted.internet.defer import inlineCallbacks, returnValue
 from twisted.python.filepath import FilePath
+from twext.who.idirectory import RecordType
 
+
+
 class AddressBookQuery(StoreTestCase):
     """
     addressbook-query REPORT
@@ -67,6 +70,7 @@
 
         oldValue = config.MaxQueryWithDataResults
         config.MaxQueryWithDataResults = 1
+
         def _restoreValueOK(f):
             config.MaxQueryWithDataResults = oldValue
             return None
@@ -89,6 +93,7 @@
 
         oldValue = config.MaxQueryWithDataResults
         config.MaxQueryWithDataResults = 1
+
         def _restoreValueOK(f):
             config.MaxQueryWithDataResults = oldValue
             return None
@@ -191,15 +196,16 @@
         if response.code != responsecode.CREATED:
             self.fail("MKCOL failed: %s" % (response.code,))
         '''
+        authRecord = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
         # Add vCards to addressbook
         for child in FilePath(self.vcards_dir).children():
             if os.path.splitext(child.basename())[1] != ".vcf":
                 continue
-            request = SimpleStoreRequest(self, "PUT", joinURL(addressbook_uri, child.basename()), authid="wsanchez")
+            request = SimpleStoreRequest(self, "PUT", joinURL(addressbook_uri, child.basename()), authRecord=authRecord)
             request.stream = MemoryStream(child.getContent())
             yield self.send(request)
 
-        request = SimpleStoreRequest(self, "REPORT", addressbook_uri, authid="wsanchez")
+        request = SimpleStoreRequest(self, "REPORT", addressbook_uri, authRecord=authRecord)
         request.stream = MemoryStream(query.toxml())
         response = yield self.send(request)
 

Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_calendarquery.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_calendarquery.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_calendarquery.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -34,8 +34,8 @@
 from pycalendar.datetime import DateTime
 from twistedcaldav.ical import Component
 from txdav.caldav.icalendarstore import ComponentUpdateState
-from twistedcaldav.directory.directory import DirectoryService
 from txdav.caldav.datastore.query.filter import TimeRange
+from twext.who.idirectory import RecordType
 
 
 @inlineCallbacks
@@ -79,7 +79,7 @@
         """
         Put the contents of the Holidays directory into the store.
         """
-        record = self.directory.recordWithShortName(DirectoryService.recordType_users, "wsanchez")
+        record = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
         yield self.transactionUnderTest().calendarHomeWithUID(record.uid, create=True)
         calendar = yield self.calendarUnderTest(name="calendar", home=record.uid)
         for f in os.listdir(self.holidays_dir):
@@ -248,6 +248,7 @@
         """
 
         self.patch(config, "MaxQueryWithDataResults", 1)
+
         def _restoreValueOK(f):
             self.fail("REPORT must fail with 403")
 
@@ -268,6 +269,7 @@
         """
 
         self.patch(config, "MaxQueryWithDataResults", 1)
+
         def _restoreValueError(f):
             self.fail("REPORT must not fail with 403")
 
@@ -343,7 +345,8 @@
     @inlineCallbacks
     def calendar_query(self, query, got_xml):
 
-        request = SimpleStoreRequest(self, "REPORT", "/calendars/users/wsanchez/calendar/", authid="wsanchez")
+        authRecord = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
+        request = SimpleStoreRequest(self, "REPORT", "/calendars/users/wsanchez/calendar/", authRecord=authRecord)
         request.stream = MemoryStream(query.toxml())
         response = yield self.send(request)
 

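The lookup rewrite above is the recurring mechanical edit in this branch: the synchronous recordWithShortName(DirectoryService.recordType_users, "wsanchez") call becomes the Deferred-returning twext.who form with a RecordType constant and a unicode name. A sketch of the new shape inside an @inlineCallbacks method (the method name is illustrative):

    from twext.who.idirectory import RecordType
    from twisted.internet.defer import inlineCallbacks

    @inlineCallbacks
    def populateHome(self):
        # New API: record type constant, unicode short name, Deferred result.
        record = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
        yield self.transactionUnderTest().calendarHomeWithUID(record.uid, create=True)
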
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_collectioncontents.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_collectioncontents.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_collectioncontents.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -14,22 +14,22 @@
 # limitations under the License.
 ##
 
-from twisted.internet.defer import inlineCallbacks
 from twext.python.filepath import CachingFilePath as FilePath
-from txweb2 import responsecode
-from txweb2.iweb import IResponse
-from txweb2.stream import MemoryStream, FileStream
-from txweb2.http_headers import MimeType
-
+from twext.who.idirectory import RecordType
+from twisted.internet.defer import inlineCallbacks
 from twistedcaldav.ical import Component
 from twistedcaldav.memcachelock import MemcacheLock
 from twistedcaldav.memcacher import Memcacher
-
-
 from twistedcaldav.test.util import StoreTestCase, SimpleStoreRequest
-from txweb2.dav.util import joinURL
 from txdav.caldav.datastore.sql import CalendarObject
+from txweb2 import responsecode
+from txweb2.dav.util import joinURL
+from txweb2.http_headers import MimeType
+from txweb2.iweb import IResponse
+from txweb2.stream import MemoryStream, FileStream
 
+
+
 class CollectionContents(StoreTestCase):
     """
     PUT request
@@ -52,7 +52,7 @@
         def _fakeDoImplicitScheduling(self, component, inserting, internal_state):
             return False, None, False, None
 
-        self.patch(CalendarObject , "doImplicitScheduling",
+        self.patch(CalendarObject, "doImplicitScheduling",
                    _fakeDoImplicitScheduling)
 
         # Tests in this suite assume that the root resource is a calendar home.
@@ -61,31 +61,27 @@
         return super(CollectionContents, self).setUp()
 
 
+    @inlineCallbacks
     def test_collection_in_calendar(self):
         """
         Make (regular) collection in calendar
         """
         calendar_uri = "/calendars/users/wsanchez/collection_in_calendar/"
 
-        def mkcalendar_cb(response):
-            response = IResponse(response)
-
-            if response.code != responsecode.CREATED:
-                self.fail("MKCALENDAR failed: %s" % (response.code,))
-
-            def mkcol_cb(response):
-                response = IResponse(response)
-
-                if response.code != responsecode.FORBIDDEN:
-                    self.fail("Incorrect response to nested MKCOL: %s" % (response.code,))
-
+        authRecord = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
+        request = SimpleStoreRequest(self, "MKCALENDAR", calendar_uri, authRecord=authRecord)
+        response = yield self.send(request)
+        response = IResponse(response)
+        if response.code != responsecode.CREATED:
+            self.fail("MKCALENDAR failed: %s" % (response.code,))
-            nested_uri = joinURL(calendar_uri, "nested")
+        nested_uri = joinURL(calendar_uri, "nested")
 
-            request = SimpleStoreRequest(self, "MKCOL", nested_uri, authid="wsanchez")
-            return self.send(request, mkcol_cb)
+        request = SimpleStoreRequest(self, "MKCOL", nested_uri, authRecord=authRecord)
+        response = yield self.send(request)
+        response = IResponse(response)
 
-        request = SimpleStoreRequest(self, "MKCALENDAR", calendar_uri, authid="wsanchez")
-        return self.send(request, mkcalendar_cb)
+        if response.code != responsecode.FORBIDDEN:
+            self.fail("Incorrect response to nested MKCOL: %s" % (response.code,))
 
 
     def test_bogus_file(self):
@@ -163,6 +159,7 @@
         )
 
 
+    @inlineCallbacks
     def _test_file_in_calendar(self, what, *work):
         """
         Creates a calendar collection, then PUTs a resource into that collection
@@ -171,68 +168,58 @@
         """
         calendar_uri = "/calendars/users/wsanchez/testing_calendar/"
 
+        authRecord = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
+        request = SimpleStoreRequest(self, "MKCALENDAR", calendar_uri, authRecord=authRecord)
+        response = yield self.send(request)
+        response = IResponse(response)
+        if response.code != responsecode.CREATED:
+            self.fail("MKCALENDAR failed: %s" % (response.code,))
 
-        @inlineCallbacks
-        def mkcalendar_cb(response):
+        c = 0
+        for stream, response_code in work:
+            dst_uri = joinURL(calendar_uri, "dst%d.ics" % (c,))
+            request = SimpleStoreRequest(self, "PUT", dst_uri, authRecord=authRecord)
+            request.headers.setHeader("if-none-match", "*")
+            request.headers.setHeader("content-type", MimeType("text", "calendar"))
+            request.stream = stream
+            response = yield self.send(request)
             response = IResponse(response)
 
-            if response.code != responsecode.CREATED:
-                self.fail("MKCALENDAR failed: %s" % (response.code,))
+            if response.code != response_code:
+                self.fail("Incorrect response to %s: %s (!= %s)" % (what, response.code, response_code))
 
-            c = 0
+            c += 1
 
-            for stream, response_code in work:
 
-                dst_uri = joinURL(calendar_uri, "dst%d.ics" % (c,))
-                request = SimpleStoreRequest(self, "PUT", dst_uri, authid="wsanchez")
-                request.headers.setHeader("if-none-match", "*")
-                request.headers.setHeader("content-type", MimeType("text", "calendar"))
-                request.stream = stream
-                response = yield self.send(request)
-                response = IResponse(response)
 
-                if response.code != response_code:
-                    self.fail("Incorrect response to %s: %s (!= %s)" % (what, response.code, response_code))
-
-                c += 1
-
-        request = SimpleStoreRequest(self, "MKCALENDAR", calendar_uri, authid="wsanchez")
-        return self.send(request, mkcalendar_cb)
-
-
+    @inlineCallbacks
     def test_fail_dot_file_put_in_calendar(self):
         """
         Make (regular) collection in calendar
         """
         calendar_uri = "/calendars/users/wsanchez/dot_file_in_calendar/"
+        authRecord = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
+        request = SimpleStoreRequest(self, "MKCALENDAR", calendar_uri, authRecord=authRecord)
+        response = yield self.send(request)
+        response = IResponse(response)
+        if response.code != responsecode.CREATED:
+            self.fail("MKCALENDAR failed: %s" % (response.code,))
 
-        def mkcalendar_cb(response):
-            response = IResponse(response)
+        stream = self.dataPath.child(
+            "Holidays").child(
+            "C318AA54-1ED0-11D9-A5E0-000A958A3252.ics"
+        ).open()
+        try:
+            calendar = str(Component.fromStream(stream))
+        finally:
+            stream.close()
 
-            if response.code != responsecode.CREATED:
-                self.fail("MKCALENDAR failed: %s" % (response.code,))
+        event_uri = "/".join([calendar_uri, ".event.ics"])
 
-            def put_cb(response):
-                response = IResponse(response)
-
-                if response.code != responsecode.FORBIDDEN:
-                    self.fail("Incorrect response to dot file PUT: %s" % (response.code,))
-
-            stream = self.dataPath.child(
-                "Holidays").child(
-                "C318AA54-1ED0-11D9-A5E0-000A958A3252.ics"
-            ).open()
-            try:
-                calendar = str(Component.fromStream(stream))
-            finally:
-                stream.close()
-
-            event_uri = "/".join([calendar_uri, ".event.ics"])
-
-            request = SimpleStoreRequest(self, "PUT", event_uri, authid="wsanchez")
-            request.headers.setHeader("content-type", MimeType("text", "calendar"))
-            request.stream = MemoryStream(calendar)
-            return self.send(request, put_cb)
-
-        request = SimpleStoreRequest(self, "MKCALENDAR", calendar_uri, authid="wsanchez")
-        return self.send(request, mkcalendar_cb)
+        request = SimpleStoreRequest(self, "PUT", event_uri, authRecord=authRecord)
+        request.headers.setHeader("content-type", MimeType("text", "calendar"))
+        request.stream = MemoryStream(calendar)
+        response = yield self.send(request)
+        response = IResponse(response)
+        if response.code != responsecode.FORBIDDEN:
+            self.fail("Incorrect response to dot file PUT: %s" % (response.code,))

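The collection-contents tests above also show the conversion from callback-chained self.send(request, callback) code to flat @inlineCallbacks generators. A minimal sketch of the shape (the URI is illustrative; assumes an authRecord resolved in setUp as in the other suites):

    from twisted.internet.defer import inlineCallbacks
    from twistedcaldav.test.util import SimpleStoreRequest
    from txweb2 import responsecode
    from txweb2.iweb import IResponse

    @inlineCallbacks
    def test_make_collection(self):
        # Before: nested callbacks passed as the second argument to self.send().
        # After: yield each response and check it inline.
        uri = "/calendars/users/wsanchez/example_collection/"
        request = SimpleStoreRequest(self, "MKCALENDAR", uri, authRecord=self.authRecord)
        response = IResponse((yield self.send(request)))
        if response.code != responsecode.CREATED:
            self.fail("MKCALENDAR failed: %s" % (response.code,))
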
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_mkcalendar.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_mkcalendar.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_mkcalendar.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -27,6 +27,10 @@
 from twistedcaldav import caldavxml
 from twistedcaldav.test.util import StoreTestCase, SimpleStoreRequest
 
+from twext.who.idirectory import RecordType
+
+
+
 class MKCALENDAR (StoreTestCase):
     """
     MKCALENDAR request
@@ -35,6 +39,12 @@
     # Try nesting calendars (should fail)
     # HEAD request on calendar: resourcetype = (collection, calendar)
 
+    @inlineCallbacks
+    def setUp(self):
+        yield StoreTestCase.setUp(self)
+        self.authRecord = yield self.directory.recordWithShortName(RecordType.user, u"user01")
+
+
     def test_make_calendar(self):
         """
         Make calendar
@@ -45,7 +55,7 @@
         if os.path.exists(path):
             rmdir(path)
 
-        request = SimpleStoreRequest(self, "MKCALENDAR", uri, authid="user01")
+        request = SimpleStoreRequest(self, "MKCALENDAR", uri, authRecord=self.authRecord)
 
         @inlineCallbacks
         def do_test(response):
@@ -146,7 +156,7 @@
             )
         )
 
-        request = SimpleStoreRequest(self, "MKCALENDAR", uri, authid="user01")
+        request = SimpleStoreRequest(self, "MKCALENDAR", uri, authRecord=self.authRecord)
         request.stream = MemoryStream(mk.toxml())
         return self.send(request, do_test)
 
@@ -165,7 +175,7 @@
 
             # FIXME: Check for DAV:resource-must-be-null element
 
-        request = SimpleStoreRequest(self, "MKCALENDAR", uri, authid="user01")
+        request = SimpleStoreRequest(self, "MKCALENDAR", uri, authRecord=self.authRecord)
         return self.send(request, do_test)
 
 
@@ -190,8 +200,8 @@
 
             nested_uri = os.path.join(first_uri, "nested")
 
-            request = SimpleStoreRequest(self, "MKCALENDAR", nested_uri, authid="user01")
+            request = SimpleStoreRequest(self, "MKCALENDAR", nested_uri, authRecord=self.authRecord)
             yield self.send(request, do_test)
 
-        request = SimpleStoreRequest(self, "MKCALENDAR", first_uri, authid="user01")
+        request = SimpleStoreRequest(self, "MKCALENDAR", first_uri, authRecord=self.authRecord)
         return self.send(request, next)

Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_multiget.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_multiget.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_multiget.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -14,6 +14,7 @@
 ##
 
 from twext.python.filepath import CachingFilePath as FilePath
+from twext.who.idirectory import RecordType
 from txweb2 import responsecode
 from txweb2.dav.util import davXMLFromStream, joinURL
 from txweb2.http_headers import Headers, MimeType
@@ -38,6 +39,12 @@
     data_dir = os.path.join(os.path.dirname(__file__), "data")
     holidays_dir = os.path.join(data_dir, "Holidays")
 
+    @inlineCallbacks
+    def setUp(self):
+        yield StoreTestCase.setUp(self)
+        self.authRecord = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
+
+
     def test_multiget_some_events(self):
         """
         All events.
@@ -262,7 +269,7 @@
     def calendar_query(self, calendar_uri, query, got_xml, data, no_init):
 
         if not no_init:
-            response = yield self.send(SimpleStoreRequest(self, "MKCALENDAR", calendar_uri, authid="wsanchez"))
+            response = yield self.send(SimpleStoreRequest(self, "MKCALENDAR", calendar_uri, authRecord=self.authRecord))
             response = IResponse(response)
             if response.code != responsecode.CREATED:
                 self.fail("MKCALENDAR failed: %s" % (response.code,))
@@ -274,7 +281,7 @@
                         "PUT",
                         joinURL(calendar_uri, filename + ".ics"),
                         headers=Headers({"content-type": MimeType.fromString("text/calendar")}),
-                        authid="wsanchez"
+                        authRecord=self.authRecord
                     )
                     request.stream = MemoryStream(icaldata)
                     yield self.send(request)
@@ -288,12 +295,12 @@
                         "PUT",
                         joinURL(calendar_uri, child.basename()),
                         headers=Headers({"content-type": MimeType.fromString("text/calendar")}),
-                        authid="wsanchez"
+                        authRecord=self.authRecord
                     )
                     request.stream = MemoryStream(child.getContent())
                     yield self.send(request)
 
-        request = SimpleStoreRequest(self, "REPORT", calendar_uri, authid="wsanchez")
+        request = SimpleStoreRequest(self, "REPORT", calendar_uri, authRecord=self.authRecord)
         request.stream = MemoryStream(query.toxml())
         response = yield self.send(request)
 

Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_props.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_props.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_props.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -19,21 +19,35 @@
 from txweb2.iweb import IResponse
 from txweb2.stream import MemoryStream
 
+from twisted.internet.defer import inlineCallbacks
+
 from twistedcaldav import caldavxml
 from twistedcaldav.test.util import StoreTestCase, SimpleStoreRequest
 
 from txdav.xml import element as davxml
 
+from twext.who.idirectory import RecordType
+
+
+
 class Properties(StoreTestCase):
     """
     CalDAV properties
     """
+
+    @inlineCallbacks
+    def setUp(self):
+        yield StoreTestCase.setUp(self)
+        self.authRecord = yield self.directory.recordWithShortName(RecordType.user, u"user01")
+
+
     def test_live_props(self):
         """
         Live CalDAV properties
         """
         calendar_uri = "/calendars/users/user01/test/"
 
+
         def mkcalendar_cb(response):
             response = IResponse(response)
 
@@ -123,24 +137,24 @@
                 return davXMLFromStream(response.stream).addCallback(got_xml)
 
             query = davxml.PropertyFind(
-                        davxml.PropertyContainer(
-                            caldavxml.SupportedCalendarData(),
-                            caldavxml.SupportedCalendarComponentSet(),
-                            davxml.SupportedReportSet(),
-                        ),
-                    )
+                davxml.PropertyContainer(
+                    caldavxml.SupportedCalendarData(),
+                    caldavxml.SupportedCalendarComponentSet(),
+                    davxml.SupportedReportSet(),
+                ),
+            )
 
             request = SimpleStoreRequest(
                 self,
                 "PROPFIND",
                 calendar_uri,
                 headers=http_headers.Headers({"Depth": "0"}),
-                authid="user01",
+                authRecord=self.authRecord,
             )
             request.stream = MemoryStream(query.toxml())
             return self.send(request, propfind_cb)
 
-        request = SimpleStoreRequest(self, "MKCALENDAR", calendar_uri, authid="user01")
+        request = SimpleStoreRequest(self, "MKCALENDAR", calendar_uri, authRecord=self.authRecord)
         return self.send(request, mkcalendar_cb)
 
 
@@ -207,10 +221,10 @@
                 "PROPFIND",
                 calendar_uri,
                 headers=http_headers.Headers({"Depth": "0"}),
-                authid="user01",
+                authRecord=self.authRecord,
             )
             request.stream = MemoryStream(query.toxml())
             return self.send(request, propfind_cb)
 
-        request = SimpleStoreRequest(self, "MKCALENDAR", calendar_uri, authid="user01")
+        request = SimpleStoreRequest(self, "MKCALENDAR", calendar_uri, authRecord=self.authRecord)
         return self.send(request, mkcalendar_cb)

Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_resource.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_resource.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_resource.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -14,22 +14,25 @@
 # limitations under the License.
 ##
 
+from twext.who.idirectory import RecordType
+from twisted.internet.defer import inlineCallbacks
+from twistedcaldav import carddavxml
+from twistedcaldav.config import config
+from twistedcaldav.notifications import NotificationCollectionResource
+from twistedcaldav.resource import (
+    CalDAVResource, CommonHomeResource,
+    CalendarHomeResource, AddressBookHomeResource
+)
+from twistedcaldav.test.util import (
+    InMemoryPropertyStore, StoreTestCase, SimpleStoreRequest
+)
+from twistedcaldav.test.util import TestCase
 from txdav.xml.element import HRef, Principal, Unauthenticated
 from txweb2.http import HTTPError
 from txweb2.test.test_server import SimpleRequest
 
-from twisted.internet.defer import inlineCallbacks
 
-from twistedcaldav import carddavxml
-from twistedcaldav.config import config
-from twistedcaldav.resource import CalDAVResource, CommonHomeResource, \
- CalendarHomeResource, AddressBookHomeResource
-from twistedcaldav.test.util import InMemoryPropertyStore, StoreTestCase, \
-    SimpleStoreRequest
-from twistedcaldav.test.util import TestCase
-from twistedcaldav.notifications import NotificationCollectionResource
 
-
 class StubProperty(object):
     def qname(self):
         return "StubQnamespace", "StubQname"
@@ -185,13 +188,20 @@
 
 class DefaultAddressBook (StoreTestCase):
 
+
     @inlineCallbacks
+    def setUp(self):
+        yield StoreTestCase.setUp(self)
+        self.authRecord = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
+
+
+    @inlineCallbacks
     def test_pick_default_addressbook(self):
         """
         Get adbk
         """
 
-        request = SimpleStoreRequest(self, "GET", "/addressbooks/users/wsanchez/", authid="wsanchez")
+        request = SimpleStoreRequest(self, "GET", "/addressbooks/users/wsanchez/", authRecord=self.authRecord)
         home = yield request.locateResource("/addressbooks/users/wsanchez")
 
         # default property initially not present

Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_sharing.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_sharing.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_sharing.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -185,7 +185,8 @@
 
     @inlineCallbacks
     def _doPOST(self, body, resultcode=responsecode.OK):
-        request = SimpleStoreRequest(self, "POST", "/calendars/__uids__/user01/calendar/", content=body, authid="user01")
+        authRecord = yield self.directory.recordWithUID(u"user01")
+        request = SimpleStoreRequest(self, "POST", "/calendars/__uids__/user01/calendar/", content=body, authRecord=authRecord)
         request.headers.setHeader("content-type", MimeType("text", "xml"))
         response = yield self.send(request)
         response = IResponse(response)
@@ -210,7 +211,8 @@
 
     @inlineCallbacks
     def _doPOSTSharerAccept(self, body, resultcode=responsecode.OK):
-        request = SimpleStoreRequest(self, "POST", "/calendars/__uids__/user02/", content=body, authid="user02")
+        authRecord = yield self.directory.recordWithUID(u"user02")
+        request = SimpleStoreRequest(self, "POST", "/calendars/__uids__/user02/", content=body, authRecord=authRecord)
         request.headers.setHeader("content-type", MimeType("text", "xml"))
         response = yield self.send(request)
         response = IResponse(response)
@@ -732,6 +734,7 @@
         self.assertEquals(propInvite, None)
 
 
+    # MOVE2WHO Fix wiki
     @inlineCallbacks
     def wikiSetup(self):
         """
@@ -798,7 +801,8 @@
         self.patch(sharing, "getWikiAccess", stubWikiAccessMethod)
         @inlineCallbacks
         def listChildrenViaPropfind():
-            request = SimpleStoreRequest(self, "PROPFIND", "/calendars/__uids__/user01/", authid="user01")
+            authRecord = yield self.directory.recordWithUID(u"user01")
+            request = SimpleStoreRequest(self, "PROPFIND", "/calendars/__uids__/user01/", authRecord=authRecord)
             request.headers.setHeader("depth", "1")
             response = yield self.send(request)
             response = IResponse(response)

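The sharing tests resolve the authenticating record by UID instead of short name; in the test directory the UIDs are the same literal strings already used in the __uids__ URLs (u"user01", u"user02"). A sketch of the helper shape, with the status assertion assumed from the surrounding test code (imports as already present in test_sharing.py):

    @inlineCallbacks
    def _doPOST(self, body, resultcode=responsecode.OK):
        authRecord = yield self.directory.recordWithUID(u"user01")
        request = SimpleStoreRequest(
            self, "POST", "/calendars/__uids__/user01/calendar/",
            content=body, authRecord=authRecord,
        )
        request.headers.setHeader("content-type", MimeType("text", "xml"))
        response = IResponse((yield self.send(request)))
        self.assertEqual(response.code, resultcode)
        returnValue(response)
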
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_wrapping.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_wrapping.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/test_wrapping.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -28,11 +28,12 @@
 from txweb2.responsecode import UNAUTHORIZED
 from txweb2.stream import MemoryStream
 
+from twext.who.idirectory import RecordType
+
 from twisted.internet.defer import inlineCallbacks, returnValue
 from twisted.internet.defer import maybeDeferred
 
 from twistedcaldav.config import config
-from twistedcaldav.directory.test.test_xmlfile import XMLFileBase
 from twistedcaldav.ical import Component as VComponent
 from twistedcaldav.storebridge import DropboxCollection, \
     CalendarCollectionResource
@@ -51,11 +52,14 @@
 
 import hashlib
 
+
 def _todo(f, why):
     f.todo = why
     return f
 rewriteOrRemove = lambda f: _todo(f, "Rewrite or remove")
 
+
+
 class FakeChanRequest(object):
     code = 'request-not-finished'
 
@@ -113,7 +117,7 @@
         @param objectText: Some iCalendar text to populate it with.
         @type objectText: str
         """
-        record = self.directory.recordWithShortName("users", "wsanchez")
+        record = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
         uid = record.uid
         txn = self.transactionUnderTest()
         home = yield txn.calendarHomeWithUID(uid, True)
@@ -132,7 +136,7 @@
         @param objectText: Some iVcard text to populate it with.
         @type objectText: str
         """
-        record = self.directory.recordWithShortName("users", "wsanchez")
+        record = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
         uid = record.uid
         txn = self.transactionUnderTest()
         home = yield txn.addressbookHomeWithUID(uid, True)
@@ -171,9 +175,10 @@
             "http://localhost:8008/" + path
         )
         if user is not None:
-            guid = XMLFileBase.users[user]["guid"]
+            record = yield self.directory.recordWithShortName(RecordType.user, user)
+            uid = record.uid
             req.authnUser = req.authzUser = (
-                davxml.Principal(davxml.HRef('/principals/__uids__/' + guid + '/'))
+                davxml.Principal(davxml.HRef('/principals/__uids__/' + uid + '/'))
             )
         returnValue(aResource)
 
@@ -271,7 +276,7 @@
         )
         yield self.commit()
         self.assertIsInstance(dropBoxResource, DropboxCollection)
-        dropboxHomeType = davxml.ResourceType.dropboxhome #@UndefinedVariable
+        dropboxHomeType = davxml.ResourceType.dropboxhome  # @UndefinedVariable
         self.assertEquals(dropBoxResource.resourceType(),
                           dropboxHomeType)
 
@@ -285,7 +290,7 @@
         C{CalendarHome.calendarWithName}.
         """
         calDavFile = yield self.getResource("calendars/users/wsanchez/calendar")
-        regularCalendarType = davxml.ResourceType.calendar #@UndefinedVariable
+        regularCalendarType = davxml.ResourceType.calendar  # @UndefinedVariable
         self.assertEquals(calDavFile.resourceType(),
                           regularCalendarType)
         yield self.commit()
@@ -344,8 +349,11 @@
             self.assertIdentical(
                 homeChild._associatedTransaction,
                 homeTransaction,
-                "transaction mismatch on %s; %r is not %r " %
-                    (name, homeChild._associatedTransaction, homeTransaction))
+                "transaction mismatch on {n}; {at} is not {ht} ".format(
+                    n=name, at=homeChild._associatedTransaction,
+                    ht=homeTransaction
+                )
+            )
 
 
     @inlineCallbacks
@@ -575,12 +583,13 @@
         yield NamedLock.acquire(txn, "ImplicitUIDLock:%s" % (hashlib.md5("uid1").hexdigest(),))
 
         # PUT fails
+        authRecord = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
         request = SimpleStoreRequest(
             self,
             "PUT",
             "/calendars/users/wsanchez/calendar/1.ics",
             headers=Headers({"content-type": MimeType.fromString("text/calendar")}),
-            authid="wsanchez"
+            authRecord=authRecord
         )
         request.stream = MemoryStream("""BEGIN:VCALENDAR
 CALSCALE:GREGORIAN
@@ -606,12 +615,13 @@
         """
 
         # PUT works
+        authRecord = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
         request = SimpleStoreRequest(
             self,
             "PUT",
             "/calendars/users/wsanchez/calendar/1.ics",
             headers=Headers({"content-type": MimeType.fromString("text/calendar")}),
-            authid="wsanchez"
+            authRecord=authRecord
         )
         request.stream = MemoryStream("""BEGIN:VCALENDAR
 CALSCALE:GREGORIAN
@@ -635,11 +645,12 @@
         txn = self.transactionUnderTest()
         yield NamedLock.acquire(txn, "ImplicitUIDLock:%s" % (hashlib.md5("uid1").hexdigest(),))
 
+        authRecord = yield self.directory.recordWithShortName(RecordType.user, u"wsanchez")
         request = SimpleStoreRequest(
             self,
             "DELETE",
             "/calendars/users/wsanchez/calendar/1.ics",
-            authid="wsanchez"
+            authRecord=authRecord
         )
         response = yield self.send(request)
         self.assertEqual(response.code, responsecode.SERVICE_UNAVAILABLE)

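test_wrapping previously built the fake authenticated principal from the static XMLFileBase guid table; it now resolves the live record and uses its uid. A small hypothetical helper capturing the pattern from getResource() above (the helper name is not part of the commit):

    from twext.who.idirectory import RecordType
    from twisted.internet.defer import inlineCallbacks
    from txdav.xml import element as davxml

    @inlineCallbacks
    def fakeAuthentication(self, req, shortName):
        # Resolve the short name to a record and point authn/authz at the
        # record's __uids__ principal; uid replaces the old GUID.
        record = yield self.directory.recordWithShortName(RecordType.user, shortName)
        req.authnUser = req.authzUser = davxml.Principal(
            davxml.HRef("/principals/__uids__/" + record.uid + "/")
        )
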
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/util.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/util.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/test/util.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -28,17 +28,13 @@
 from twisted.python.failure import Failure
 from twistedcaldav import memcacher
 from twistedcaldav.bind import doBind
-from twistedcaldav.directory import augment
 from twistedcaldav.directory.addressbook import DirectoryAddressBookHomeProvisioningResource
-from twistedcaldav.directory.aggregate import AggregateDirectoryService
 from twistedcaldav.directory.calendar import (
     DirectoryCalendarHomeProvisioningResource
 )
-from twistedcaldav.directory.directory import DirectoryService
 from twistedcaldav.directory.principal import (
     DirectoryPrincipalProvisioningResource)
 from twistedcaldav.directory.util import transactionFromRequest
-from twistedcaldav.directory.xmlfile import XMLDirectoryService
 from twistedcaldav.memcacheclient import ClientFactory
 from twistedcaldav.stdconfig import config
 from txdav.caldav.datastore.test.util import buildCalendarStore
@@ -81,92 +77,21 @@
 
 
 
-class DirectoryFixture(object):
-    """
-    Test fixture for creating various parts of the resource hierarchy related
-    to directories.
-    """
 
-    def __init__(self):
-        def _setUpPrincipals(ds):
-            # FIXME: see FIXME in
-            # DirectoryPrincipalProvisioningResource.__init__; this performs a
-            # necessary modification to any directory service object for it to
-            # be fully functional.
-            self.principalsResource = DirectoryPrincipalProvisioningResource(
-                "/principals/", ds
-            )
-        self._directoryChangeHooks = [_setUpPrincipals]
 
-    directoryService = None
-    principalsResource = None
-
-    def addDirectoryService(self, newService):
-        """
-        Add an L{IDirectoryService} to this test case.
-
-        If this test case does not have a directory service yet, create it and
-        assign C{directoryService} and C{principalsResource} attributes to this
-        test case.
-
-        If the test case already has a directory service, create an
-        L{AggregateDirectoryService} and re-assign the C{self.directoryService}
-        attribute to point at it instead, while setting the C{realmName} of the
-        new service to match the old one.
-
-        If the test already has an L{AggregateDirectoryService}, create a
-        I{new} L{AggregateDirectoryService} with the same list of services,
-        after adjusting the new service's realm to match the existing ones.
-        """
-
-        if self.directoryService is None:
-            directoryService = newService
-        else:
-            newService.realmName = self.directoryService.realmName
-            if isinstance(self.directoryService, AggregateDirectoryService):
-                directories = set(self.directoryService._recordTypes.items())
-                directories.add(newService)
-            else:
-                directories = [newService, self.directoryService]
-            directoryService = AggregateDirectoryService(directories, None)
-
-        self.directoryService = directoryService
-        # FIXME: see FIXME in DirectoryPrincipalProvisioningResource.__init__;
-        # this performs a necessary modification to the directory service object
-        # for it to be fully functional.
-        for hook in self._directoryChangeHooks:
-            hook(directoryService)
-
-
-    def whenDirectoryServiceChanges(self, callback):
-        """
-        When the C{directoryService} attribute is changed by
-        L{TestCase.addDirectoryService}, call the given callback in order to
-        update any state which relies upon that service.
-
-        If there's already a directory, invoke the callback immediately.
-        """
-        self._directoryChangeHooks.append(callback)
-        if self.directoryService is not None:
-            callback(self.directoryService)
-
-
-
 class SimpleStoreRequest(SimpleRequest):
     """
     A SimpleRequest that automatically grabs the proper transaction for a test.
     """
-    def __init__(self, test, method, uri, headers=None, content=None, authid=None):
+    def __init__(self, test, method, uri, headers=None, content=None, authRecord=None):
         super(SimpleStoreRequest, self).__init__(test.site, method, uri, headers, content)
         self._test = test
         self._newStoreTransaction = test.transactionUnderTest(txn=transactionFromRequest(self, test.storeUnderTest()))
         self.credentialFactories = {}
 
         # Fake credentials if auth needed
-        if authid is not None:
-            record = self._test.directory.recordWithShortName(DirectoryService.recordType_users, authid)
-            if record:
-                self.authzUser = self.authnUser = element.Principal(element.HRef("/principals/__uids__/%s/" % (record.uid,)))
+        if authRecord is not None:
+            self.authzUser = self.authnUser = element.Principal(element.HRef("/principals/__uids__/%s/" % (authRecord.uid,)))
 
 
     @inlineCallbacks
@@ -261,16 +186,7 @@
         accounts.setContent(xmlFile.getContent())
 
 
-    @property
-    def directoryService(self):
-        """
-        Read-only alias for L{DirectoryFixture.directoryService} for
-        compatibility with older tests.  TODO: remove this.
-        """
-        return self.directory
 
-
-
 class TestCase(txweb2.dav.test.util.TestCase):
     resource_class = RootResource
 
@@ -284,24 +200,6 @@
                                quota=deriveQuota(self))
 
 
-    def createStockDirectoryService(self):
-        """
-        Create a stock C{directoryService} attribute and assign it.
-        """
-        self.xmlFile = FilePath(config.DataRoot).child("accounts.xml")
-        self.xmlFile.setContent(xmlFile.getContent())
-        self.directoryFixture.addDirectoryService(
-            XMLDirectoryService(
-                {
-                    "xmlFile": "accounts.xml",
-                    "augmentService": augment.AugmentXMLDB(
-                        xmlFiles=(augmentsFile.path,)
-                    ),
-                }
-            )
-        )
-
-
     def setupCalendars(self):
         """
         When a directory service exists, set up the resources at C{/calendars}
@@ -352,20 +250,10 @@
         config.UsePackageTimezones = True
 
 
-    @property
-    def directoryService(self):
-        """
-        Read-only alias for L{DirectoryFixture.directoryService} for
-        compatibility with older tests.  TODO: remove this.
-        """
-        return self.directoryFixture.directoryService
 
-
     def setUp(self):
         super(TestCase, self).setUp()
 
-        self.directoryFixture = DirectoryFixture()
-
         # FIXME: this is only here to workaround circular imports
         doBind()
 
@@ -564,7 +452,6 @@
         that stores the data for that L{CalendarHomeResource}.
         """
         super(HomeTestCase, self).setUp()
-        self.createStockDirectoryService()
 
 
         @self.directoryFixture.whenDirectoryServiceChanges
@@ -650,7 +537,6 @@
         file.
         """
         super(AddressBookHomeTestCase, self).setUp()
-        self.createStockDirectoryService()
 
 
         @self.directoryFixture.whenDirectoryServiceChanges

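The util.py hunk explains why every suite above changed: SimpleStoreRequest used to look the record up synchronously from an authid short name, which is no longer possible now that directory lookups return Deferreds, so callers resolve the record themselves and pass authRecord. A sketch of the constructor behaviour, trimmed to the credential-faking part shown above (the transaction wiring is omitted):

    from txdav.xml import element
    from txweb2.test.test_server import SimpleRequest

    class SimpleStoreRequest(SimpleRequest):
        """
        A SimpleRequest that fakes authentication from a directory record
        supplied by the caller.
        """
        def __init__(self, test, method, uri, headers=None, content=None,
                     authRecord=None):
            super(SimpleStoreRequest, self).__init__(
                test.site, method, uri, headers, content
            )
            self._test = test
            self.credentialFactories = {}
            if authRecord is not None:
                # Both the authenticated and the authorized principal point at
                # the record's __uids__ principal resource.
                self.authzUser = self.authnUser = element.Principal(
                    element.HRef("/principals/__uids__/%s/" % (authRecord.uid,))
                )
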
Modified: CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/upgrade.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/upgrade.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/twistedcaldav/upgrade.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -39,11 +39,8 @@
 from twistedcaldav import caldavxml
 from twistedcaldav.directory import calendaruserproxy
 from twistedcaldav.directory.calendaruserproxyloader import XMLCalendarUserProxyLoader
-from twistedcaldav.directory.directory import DirectoryService
-from twistedcaldav.directory.directory import GroupMembershipCacheUpdater
 from twistedcaldav.directory.principal import DirectoryCalendarPrincipalResource
 from twistedcaldav.directory.resourceinfo import ResourceInfoDatabase
-from twistedcaldav.directory.xmlfile import XMLDirectoryService
 from twistedcaldav.ical import Component
 from txdav.caldav.datastore.scheduling.cuaddress import LocalCalendarUser
 from txdav.caldav.datastore.scheduling.imip.mailgateway import MailGatewayTokensDatabase
@@ -61,7 +58,6 @@
 from twisted.protocols.amp import AMP, Command, String, Boolean
 
 from calendarserver.tap.util import getRootResource, FakeRequest
-from calendarserver.tools.util import getDirectory
 
 from txdav.caldav.datastore.scheduling.imip.mailgateway import migrateTokensToStore
 
@@ -909,31 +905,31 @@
 
 
 
-# Deferred
-def migrateFromOD(config, directory):
-    #
-    # Migrates locations and resources from OD
-    #
-    try:
-        from twistedcaldav.directory.appleopendirectory import OpenDirectoryService
-        from calendarserver.tools.resources import migrateResources
-    except ImportError:
-        return succeed(None)
+# # Deferred
+# def migrateFromOD(config, directory):
+#     #
+#     # Migrates locations and resources from OD
+#     #
+#     try:
+#         from twistedcaldav.directory.appleopendirectory import OpenDirectoryService
+#         from calendarserver.tools.resources import migrateResources
+#     except ImportError:
+#         return succeed(None)
 
-    log.warn("Migrating locations and resources")
+#     log.warn("Migrating locations and resources")
 
-    userService = directory.serviceForRecordType("users")
-    resourceService = directory.serviceForRecordType("resources")
-    if (
-        not isinstance(userService, OpenDirectoryService) or
-        not isinstance(resourceService, XMLDirectoryService)
-    ):
-        # Configuration requires no migration
-        return succeed(None)
+#     userService = directory.serviceForRecordType("users")
+#     resourceService = directory.serviceForRecordType("resources")
+#     if (
+#         not isinstance(userService, OpenDirectoryService) or
+#         not isinstance(resourceService, XMLDirectoryService)
+#     ):
+#         # Configuration requires no migration
+#         return succeed(None)
 
-    # Create internal copies of resources and locations based on what is
-    # found in OD
-    return migrateResources(userService, resourceService)
+#     # Create internal copies of resources and locations based on what is
+#     # found in OD
+#     return migrateResources(userService, resourceService)
 
 
 
@@ -1042,27 +1038,28 @@
                 loader = XMLCalendarUserProxyLoader(self.config.ProxyLoadFromFile)
                 yield loader.updateProxyDB()
 
-            # Populate the group membership cache
-            if (self.config.GroupCaching.Enabled and
-                self.config.GroupCaching.EnableUpdater):
-                proxydb = calendaruserproxy.ProxyDBService
-                if proxydb is None:
-                    proxydbClass = namedClass(self.config.ProxyDBService.type)
-                    proxydb = proxydbClass(**self.config.ProxyDBService.params)
+            # # Populate the group membership cache
+            # if (self.config.GroupCaching.Enabled and
+            #     self.config.GroupCaching.EnableUpdater):
+            #     proxydb = calendaruserproxy.ProxyDBService
+            #     if proxydb is None:
+            #         proxydbClass = namedClass(self.config.ProxyDBService.type)
+            #         proxydb = proxydbClass(**self.config.ProxyDBService.params)
 
-                updater = GroupMembershipCacheUpdater(proxydb,
-                    directory,
-                    self.config.GroupCaching.UpdateSeconds,
-                    self.config.GroupCaching.ExpireSeconds,
-                    self.config.GroupCaching.LockSeconds,
-                    namespace=self.config.GroupCaching.MemcachedPool,
-                    useExternalProxies=self.config.GroupCaching.UseExternalProxies)
-                yield updater.updateCache(fast=True)
+            #     # MOVE2WHO FIXME: port to new group cacher
+            #     updater = GroupMembershipCacheUpdater(proxydb,
+            #         directory,
+            #         self.config.GroupCaching.UpdateSeconds,
+            #         self.config.GroupCaching.ExpireSeconds,
+            #         self.config.GroupCaching.LockSeconds,
+            #         namespace=self.config.GroupCaching.MemcachedPool,
+            #         useExternalProxies=self.config.GroupCaching.UseExternalProxies)
+            #     yield updater.updateCache(fast=True)
 
-                uid, gid = getCalendarServerIDs(self.config)
-                dbPath = os.path.join(self.config.DataRoot, "proxies.sqlite")
-                if os.path.exists(dbPath):
-                    os.chown(dbPath, uid, gid)
+            uid, gid = getCalendarServerIDs(self.config)
+            dbPath = os.path.join(self.config.DataRoot, "proxies.sqlite")
+            if os.path.exists(dbPath):
+                os.chown(dbPath, uid, gid)
 
             # Process old inbox items
             self.store.setMigrating(True)

Modified: CalendarServer/branches/users/sagen/move2who-2/txdav/dps/client.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/txdav/dps/client.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/txdav/dps/client.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -23,7 +23,6 @@
 from twext.who.idirectory import RecordType, IDirectoryService
 import twext.who.idirectory
 from twext.who.util import ConstantsContainer
-from twisted.cred.credentials import UsernamePassword
 from twisted.internet import reactor
 from twisted.internet.defer import inlineCallbacks, returnValue, succeed
 from twisted.internet.protocol import ClientCreator
@@ -43,7 +42,6 @@
 )
 import txdav.who.delegates
 import txdav.who.idirectory
-from txweb2.auth.digest import DigestedCredentials
 from zope.interface import implementer
 
 log = Logger()
@@ -78,22 +76,20 @@
          txdav.who.idirectory.FieldName)
     )
 
+
+    # MOVE2WHO: we talked about passing these in instead:
     # def __init__(self, fieldNames, recordTypes):
     #     self.fieldName = fieldNames
     #     self.recordType = recordTypes
 
-    # MOVE2WHO
+
+    # MOVE2WHO needed?
     def getGroups(self, guids=None):
         return succeed(set())
-
-
-    guid = "1332A615-4D3A-41FE-B636-FBE25BFB982E"
-
     # END MOVE2WHO
 
 
 
-
     def _dictToRecord(self, serializedFields):
         """
         Turn a dictionary of fields sent from the server into a directory
@@ -186,6 +182,13 @@
         # temporary hack until we can fix all callers not to pass strings:
         if isinstance(recordType, (str, unicode)):
             recordType = self.recordType.lookupByName(recordType)
+
+        # MOVE2WHO, REMOVE THIS HACK TOO:
+        if not isinstance(shortName, unicode):
+            log.warn("Need to change shortName to unicode")
+            shortName = shortName.decode("utf-8")
+
+
         return self._call(
             RecordWithShortNameCommand,
             self._processSingleRecord,
@@ -195,6 +198,11 @@
 
 
     def recordWithUID(self, uid):
+        # MOVE2WHO, REMOVE THIS:
+        if not isinstance(uid, unicode):
+            log.warn("Need to change uid to unicode")
+            uid = uid.decode("utf-8")
+
         return self._call(
             RecordWithUIDCommand,
             self._processSingleRecord,
@@ -236,45 +244,19 @@
         )
 
 
+    def recordsMatchingFields(self, fields, operand="or", recordType=None):
+        # MOVE2WHO FIXME: Need to add an AMP command
+        raise NotImplementedError
 
 
 
 
 
+
 @implementer(ICalendarStoreDirectoryRecord)
 class DirectoryRecord(BaseDirectoryRecord, CalendarDirectoryRecordMixin):
 
 
-    @inlineCallbacks
-    def verifyCredentials(self, credentials):
-
-        # XYZZY REMOVE THIS, it bypasses all authentication!:
-        returnValue(True)
-
-        if isinstance(credentials, UsernamePassword):
-            log.debug("UsernamePassword")
-            returnValue(
-                (yield self.verifyPlaintextPassword(credentials.password))
-            )
-
-        elif isinstance(credentials, DigestedCredentials):
-            log.debug("DigestedCredentials")
-            returnValue(
-                (yield self.verifyHTTPDigest(
-                    self.shortNames[0],
-                    self.service.realmName,
-                    credentials.fields["uri"],
-                    credentials.fields["nonce"],
-                    credentials.fields.get("cnonce", ""),
-                    credentials.fields["algorithm"],
-                    credentials.fields.get("nc", ""),
-                    credentials.fields.get("qop", ""),
-                    credentials.fields["response"],
-                    credentials.method
-                ))
-            )
-
-
     def verifyPlaintextPassword(self, password):
         return self.service._call(
             VerifyPlaintextPasswordCommand,
@@ -305,6 +287,7 @@
         )
 
 
+
     def members(self):
         return self.service._call(
             MembersCommand,
@@ -332,8 +315,6 @@
         )
 
 
-
-
     # For scheduling/freebusy
     # FIXME: doesn't this need to happen in the DPS?
     @inlineCallbacks

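The DPS client (and the augmenting service in the next file) grows transitional shims that coerce byte strings to unicode with a warning until all callers pass text. A hypothetical helper capturing that repeated pattern; the name _toUnicode is not part of the changeset:

    def _toUnicode(value, name, log):
        """
        Transitional MOVE2WHO shim: twext.who expects text, but some callers
        still pass UTF-8 byte strings.  Warn and decode; remove once the
        callers are fixed.
        """
        if not isinstance(value, unicode):
            log.warn("Need to change " + name + " to unicode")
            value = value.decode("utf-8")
        return value

    # e.g. at the top of recordWithUID():
    #     uid = _toUnicode(uid, "uid", log)
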
Modified: CalendarServer/branches/users/sagen/move2who-2/txdav/who/augment.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/txdav/who/augment.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/txdav/who/augment.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -125,6 +125,11 @@
 
     @inlineCallbacks
     def recordWithUID(self, uid):
+        # MOVE2WHO, REMOVE THIS:
+        if not isinstance(uid, unicode):
+            log.warn("Need to change uid to unicode")
+            uid = uid.decode("utf-8")
+
         record = yield self._directory.recordWithUID(uid)
         record = yield self.augment(record)
         returnValue(record)
@@ -149,6 +154,11 @@
 
     @inlineCallbacks
     def recordWithShortName(self, recordType, shortName):
+        # MOVE2WHO, REMOVE THIS:
+        if not isinstance(shortName, unicode):
+            log.warn("Need to change shortName to unicode")
+            shortName = shortName.decode("utf-8")
+
         record = yield self._directory.recordWithShortName(recordType, shortName)
         record = yield self.augment(record)
         returnValue(record)
@@ -156,6 +166,11 @@
 
     @inlineCallbacks
     def recordsWithEmailAddress(self, emailAddress):
+        # MOVE2WHO, REMOVE THIS:
+        if not isinstance(emailAddress, unicode):
+            log.warn("Need to change emailAddress to unicode")
+            emailAddress = emailAddress.decode("utf-8")
+
         records = yield self._directory.recordsWithEmailAddress(emailAddress)
         augmented = []
         for record in records:

Modified: CalendarServer/branches/users/sagen/move2who-2/txdav/who/directory.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/txdav/who/directory.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/txdav/who/directory.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -20,12 +20,21 @@
 
 
 import uuid
+from twext.python.log import Logger
+
 from twisted.internet.defer import inlineCallbacks, returnValue
 from twext.who.expression import (
     MatchType, Operand, MatchExpression, CompoundExpression, MatchFlags
 )
+from twext.who.idirectory import RecordType as BaseRecordType
+from txdav.who.idirectory import RecordType as DAVRecordType
+from twisted.cred.credentials import UsernamePassword
+from txweb2.auth.digest import DigestedCredentials
 
 
+log = Logger()
+
+
 __all__ = [
     "CalendarDirectoryRecordMixin",
     "CalendarDirectoryServiceMixin",
@@ -34,6 +43,8 @@
 
 class CalendarDirectoryServiceMixin(object):
 
+    guid = "1332A615-4D3A-41FE-B636-FBE25BFB982E"
+
     # Must maintain the hack for a bit longer:
     def setPrincipalCollection(self, principalCollection):
         """
@@ -101,6 +112,40 @@
         return self.recordsFromExpression(expression)
 
 
+    def recordsMatchingFieldsWithCUType(self, fields, operand=Operand.OR,
+                                        cuType=None):
+        if cuType:
+            recordType = CalendarDirectoryRecordMixin.fromCUType(cuType)
+        else:
+            recordType = None
+
+        return self.recordsMatchingFields(
+            fields, operand=operand, recordType=recordType
+        )
+
+
+    def recordsMatchingFields(self, fields, operand=Operand.OR, recordType=None):
+        """
+        @param fields: an iterable of tuples, each tuple consisting of:
+            directory field name (C{unicode})
+            search term (C{unicode})
+            match flags (L{twext.who.expression.MatchFlags})
+            match type (L{twext.who.expression.MatchType})
+        """
+        subExpressions = []
+        for fieldName, searchTerm, matchFlags, matchType in fields:
+            subExpressions.append(
+                MatchExpression(
+                    self.fieldName.lookupByName(fieldName),
+                    searchTerm,
+                    matchType,
+                    matchFlags
+                )
+            )
+        expression = CompoundExpression(subExpressions, operand)
+        return self.recordsFromExpression(expression)
+
+
     # FIXME: Existing code assumes record type names are plural. Is there any
     # reason to maintain backwards compatibility?  I suppose there could be
     # scripts referring to record type of "users", "locations"
@@ -115,6 +160,37 @@
 
 class CalendarDirectoryRecordMixin(object):
 
+
+    @inlineCallbacks
+    def verifyCredentials(self, credentials):
+
+        # XYZZY REMOVE THIS, it bypasses all authentication!:
+        returnValue(True)
+
+        if isinstance(credentials, UsernamePassword):
+            log.debug("UsernamePassword")
+            returnValue(
+                (yield self.verifyPlaintextPassword(credentials.password))
+            )
+
+        elif isinstance(credentials, DigestedCredentials):
+            log.debug("DigestedCredentials")
+            returnValue(
+                (yield self.verifyHTTPDigest(
+                    self.shortNames[0],
+                    self.service.realmName,
+                    credentials.fields["uri"],
+                    credentials.fields["nonce"],
+                    credentials.fields.get("cnonce", ""),
+                    credentials.fields["algorithm"],
+                    credentials.fields.get("nc", ""),
+                    credentials.fields.get("qop", ""),
+                    credentials.fields["response"],
+                    credentials.method
+                ))
+            )
+
+
     @property
     def calendarUserAddresses(self):
         if not self.hasCalendars:
@@ -146,18 +222,45 @@
         return frozenset(cuas)
 
 
+    # Mapping from directory record.recordType to RFC2445 CUTYPE values
+    _cuTypes = {
+        BaseRecordType.user: 'INDIVIDUAL',
+        BaseRecordType.group: 'GROUP',
+        DAVRecordType.resource: 'RESOURCE',
+        DAVRecordType.location: 'ROOM',
+    }
+
+
     def getCUType(self):
-        # Mapping from directory record.recordType to RFC2445 CUTYPE values
-        self._cuTypes = {
-            self.service.recordType.user: 'INDIVIDUAL',
-            self.service.recordType.group: 'GROUP',
-            self.service.recordType.resource: 'RESOURCE',
-            self.service.recordType.location: 'ROOM',
-        }
-
         return self._cuTypes.get(self.recordType, "UNKNOWN")
 
 
+    @classmethod
+    def fromCUType(cls, cuType):
+        for key, val in cls._cuTypes.iteritems():
+            if val == cuType:
+                return key
+        return None
+
+
+    def applySACLs(self):
+        """
+        Disable calendaring and addressbooks as dictated by SACLs
+        """
+
+        # FIXME: need to re-implement SACLs
+        # if config.EnableSACLs and self.CheckSACL:
+        #     username = self.shortNames[0]
+        #     if self.CheckSACL(username, "calendar") != 0:
+        #         self.log.debug("%s is not enabled for calendaring due to SACL"
+        #                        % (username,))
+        #         self.enabledForCalendaring = False
+        #     if self.CheckSACL(username, "addressbook") != 0:
+        #         self.log.debug("%s is not enabled for addressbooks due to SACL"
+        #                        % (username,))
+        #         self.enabledForAddressBooks = False
+
+
     @property
     def displayName(self):
         return self.fullNames[0]

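The recordsMatchingFields/recordsMatchingFieldsWithCUType additions above fold a flat list of (field name, term, flags, match type) tuples into one CompoundExpression, and the CUType variant uses fromCUType to map an RFC 2445 CUTYPE back onto a record type before filtering. A hedged usage sketch, assuming a service that mixes in CalendarDirectoryServiceMixin and the standard twext.who field names fullNames and emailAddresses:

    from twext.who.expression import MatchFlags, MatchType, Operand

    # Hypothetical principal search: anything whose full name or email
    # address starts with "mor", limited to group records.
    fields = [
        (u"fullNames", u"mor", MatchFlags.caseInsensitive, MatchType.startsWith),
        (u"emailAddresses", u"mor", MatchFlags.caseInsensitive, MatchType.startsWith),
    ]
    d = directoryService.recordsMatchingFieldsWithCUType(
        fields, operand=Operand.OR, cuType="GROUP"
    )
    # d fires with whatever recordsFromExpression() yields for the
    # compound expression built from these tuples.
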
Modified: CalendarServer/branches/users/sagen/move2who-2/txdav/who/groups.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/txdav/who/groups.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/txdav/who/groups.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -45,17 +45,14 @@
         # Delete all other work items
         yield Delete(From=self.table, Where=None).on(self.transaction)
 
-        oldGroupCacher = getattr(self.transaction, "_groupCacher", None)
-        newGroupCacher = getattr(self.transaction, "_newGroupCacher", None)
-        if oldGroupCacher is not None or newGroupCacher is not None:
+        groupCacher = getattr(self.transaction, "_groupCacher", None)
+        if groupCacher is not None:
 
             # Schedule next update
 
-            # TODO: Be sure to move updateSeconds to the new cacher
-            # implementation
             notBefore = (
                 datetime.datetime.utcnow() +
-                datetime.timedelta(seconds=oldGroupCacher.updateSeconds)
+                datetime.timedelta(seconds=groupCacher.updateSeconds)
             )
             log.debug(
                 "Scheduling next group cacher update: {when}", when=notBefore
@@ -67,22 +64,13 @@
 
             # New implementation
             try:
-                yield newGroupCacher.update(self.transaction)
+                yield groupCacher.update(self.transaction)
             except Exception, e:
                 log.error(
                     "Failed to update new group membership cache ({error})",
                     error=e
                 )
 
-            # Old implmementation
-            # try:
-            #     oldGroupCacher.updateCache()
-            # except Exception, e:
-            #     log.error(
-            #         "Failed to update old group membership cache ({error})",
-            #         error=e
-            #     )
-
         else:
             notBefore = (
                 datetime.datetime.utcnow() +
@@ -136,11 +124,11 @@
             From=self.table, Where=(self.table.GROUP_GUID == self.groupGuid)
         ).on(self.transaction)
 
-        newGroupCacher = getattr(self.transaction, "_newGroupCacher", None)
-        if newGroupCacher is not None:
+        groupCacher = getattr(self.transaction, "_groupCacher", None)
+        if groupCacher is not None:
 
             try:
-                yield newGroupCacher.refreshGroup(
+                yield groupCacher.refreshGroup(
                     self.transaction, self.groupGuid.decode("utf-8")
                 )
             except Exception, e:
@@ -255,13 +243,16 @@
 
     def __init__(
         self, directory,
-        useExternalProxies=False, externalProxiesSource=None
+        updateSeconds=600,
+        useExternalProxies=False,
+        externalProxiesSource=None
     ):
         self.directory = directory
         self.useExternalProxies = useExternalProxies
         if useExternalProxies and externalProxiesSource is None:
             externalProxiesSource = self.directory.getExternalProxyAssignments
         self.externalProxiesSource = externalProxiesSource
+        self.updateSeconds = updateSeconds
 
 
     @inlineCallbacks

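The constructor change above hangs updateSeconds directly off the one remaining GroupCacher, which GroupCacherPollingWork now reads via transaction._groupCacher when scheduling the next poll. A rough sketch of the wiring, with the attachment to the transaction assumed rather than taken from this diff:

    # Hypothetical wiring: poll group membership every ten minutes.
    groupCacher = GroupCacher(
        directory,
        updateSeconds=600,     # next GroupCacherPollingWork runs this far out
        useExternalProxies=False,
    )
    txn._groupCacher = groupCacher  # found via getattr(transaction, "_groupCacher")
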
Modified: CalendarServer/branches/users/sagen/move2who-2/txweb2/channel/http.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/txweb2/channel/http.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/txweb2/channel/http.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -137,10 +137,10 @@
     subclass, it can parse either the client side or the server side of the
     connection.
     """
-    
+
     # Class config:
     parseCloseAsEnd = False
-    
+
     # Instance vars
     chunkedIn = False
     headerlen = 0
@@ -173,12 +173,12 @@
     #  channel.pauseProducing()
     #  channel.resumeProducing()
     #  channel.stopProducing()
-    
-    
+
+
     def __init__(self, channel):
         self.inHeaders = http_headers.Headers()
         self.channel = channel
-        
+
     def lineReceived(self, line):
         if self.chunkedIn:
             # Parsing a chunked input
@@ -208,7 +208,7 @@
                 self.chunkedIn = 1
             elif self.chunkedIn == 3:
                 # TODO: support Trailers (maybe! but maybe not!)
-                
+
                 # After getting the final "0" chunk we're here, and we *EAT MERCILESSLY*
                 # any trailer headers sent, and wait for the blank line to terminate the
                 # request.
@@ -237,7 +237,7 @@
             self.headerlen += len(line)
             if self.headerlen > self.channel.maxHeaderLength:
                 self._abortWithError(responsecode.BAD_REQUEST, 'Headers too long.')
-            
+
             if line[0] in ' \t':
                 # Append a header continuation
                 self.partialHeader += line
@@ -262,7 +262,7 @@
                 # NOTE: in chunked mode, self.length is the size of the current chunk,
                 # so we still have more to read.
                 self.chunkedIn = 2 # Read next chunksize
-            
+
             channel.setLineMode(extraneous)
 
 
@@ -293,13 +293,13 @@
         # Set connection parameters from headers
         self.setConnectionParams(connHeaders)
         self.connHeaders = connHeaders
-        
+
     def allContentReceived(self):
         self.finishedReading = True
         self.channel.requestReadFinished(self)
         self.handleContentComplete()
-        
-        
+
+
     def splitConnectionHeaders(self):
         """
         Split off connection control headers from normal headers.
@@ -382,7 +382,7 @@
         # Okay, now implement section 4.4 Message Length to determine
         # how to find the end of the incoming HTTP message.
         transferEncoding = connHeaders.getHeader('transfer-encoding')
-        
+
         if transferEncoding:
             if transferEncoding[-1] == 'chunked':
                 # Chunked
@@ -394,7 +394,7 @@
                 # client->server data. (Well..it could actually, since TCP has half-close
                 # but the HTTP spec says it can't, so we'll pretend it's right.)
                 self._abortWithError(responsecode.BAD_REQUEST, "Transfer-Encoding received without chunked in last position.")
-            
+
             # TODO: support gzip/etc encodings.
             # FOR NOW: report an error if the client uses any encodings.
             # They shouldn't, because we didn't send a TE: header saying it's okay.
@@ -423,23 +423,23 @@
 
         # Set the calculated persistence
         self.channel.setReadPersistent(readPersistent)
-        
+
     def abortParse(self):
         # If we're erroring out while still reading the request
         if not self.finishedReading:
             self.finishedReading = True
             self.channel.setReadPersistent(False)
             self.channel.requestReadFinished(self)
-        
+
     # producer interface
     def pauseProducing(self):
         if not self.finishedReading:
             self.channel.pauseProducing()
-        
+
     def resumeProducing(self):
         if not self.finishedReading:
             self.channel.resumeProducing()
-       
+
     def stopProducing(self):
         if not self.finishedReading:
             self.channel.stopProducing()
@@ -449,13 +449,13 @@
     It is responsible for all the low-level connection oriented behavior.
     Thus, it takes care of keep-alive, de-chunking, etc., and passes
     the non-connection headers on to the user-level Request object."""
-    
+
     command = path = version = None
     queued = 0
     request = None
-    
+
     out_version = "HTTP/1.1"
-    
+
     def __init__(self, channel, queued=0):
         HTTPParser.__init__(self, channel)
         self.queued=queued
@@ -466,14 +466,14 @@
             self.transport = StringTransport()
         else:
             self.transport = self.channel.transport
-        
+
         # set the version to a fallback for error generation
         self.version = (1,0)
 
 
     def gotInitialLine(self, initialLine):
         parts = initialLine.split()
-        
+
         # Parse the initial request line
         if len(parts) != 3:
             if len(parts) == 1:
@@ -490,9 +490,9 @@
                 raise ValueError()
         except ValueError:
             self._abortWithError(responsecode.BAD_REQUEST, "Unknown protocol: %s" % strversion)
-        
+
         self.version = protovers[1:3]
-        
+
         # Ensure HTTP 0 or HTTP 1.
         if self.version[0] > 1:
             self._abortWithError(responsecode.HTTP_VERSION_NOT_SUPPORTED, 'Only HTTP 0.9 and HTTP 1.x are supported.')
@@ -511,18 +511,18 @@
 
     def processRequest(self):
         self.request.process()
-        
+
     def handleContentChunk(self, data):
         self.request.handleContentChunk(data)
-        
+
     def handleContentComplete(self):
         self.request.handleContentComplete()
-        
+
 ############## HTTPChannelRequest *RESPONSE* methods #############
     producer = None
     chunkedOut = False
     finished = False
-    
+
     ##### Request Callbacks #####
     def writeIntermediateResponse(self, code, headers=None):
         if self.version >= (1,1):
@@ -530,15 +530,15 @@
 
     def writeHeaders(self, code, headers):
         self._writeHeaders(code, headers, True)
-        
+
     def _writeHeaders(self, code, headers, addConnectionHeaders):
         # HTTP 0.9 doesn't have headers.
         if self.version[0] == 0:
             return
-        
+
         l = []
         code_message = responsecode.RESPONSES.get(code, "Unknown Status")
-        
+
         l.append('%s %s %s\r\n' % (self.out_version, code,
                                    code_message))
         if headers is not None:
@@ -557,16 +557,16 @@
                 else:
                     # Cannot use persistent connections if we can't do chunking
                     self.channel.dropQueuedRequests()
-            
+
             if self.channel.isLastRequest(self):
                 l.append("%s: %s\r\n" % ('Connection', 'close'))
             elif self.version < (1,1):
                 l.append("%s: %s\r\n" % ('Connection', 'Keep-Alive'))
-        
+
         l.append("\r\n")
         self.transport.writeSequence(l)
-        
-    
+
+
     def write(self, data):
         if not data:
             return
@@ -574,17 +574,17 @@
             self.transport.writeSequence(("%X\r\n" % len(data), data, "\r\n"))
         else:
             self.transport.write(data)
-        
+
     def finish(self):
         """We are finished writing data."""
         if self.finished:
             warnings.warn("Warning! request.finish called twice.", stacklevel=2)
             return
-        
+
         if self.chunkedOut:
             # write last chunk and closing CRLF
             self.transport.write("0\r\n\r\n")
-        
+
         self.finished = True
         if not self.queued:
             self._cleanup()
@@ -596,7 +596,7 @@
         the writing side alone. This is mostly for internal use by
         the HTTP request parsing logic, so that it can call an error
         page generator.
-        
+
         Otherwise, completely shut down the connection.
         """
         self.abortParse()
@@ -604,7 +604,7 @@
             if self.producer:
                 self.producer.stopProducing()
                 self.unregisterProducer()
-            
+
             self.finished = True
             if self.queued:
                 self.transport.reset()
@@ -617,14 +617,14 @@
 
     def getRemoteHost(self):
         return self.channel.transport.getPeer()
-    
+
     ##### End Request Callbacks #####
 
     def _abortWithError(self, errorcode, text=''):
         """Handle low level protocol errors."""
         headers = http_headers.Headers()
         headers.setHeader('content-length', len(text)+1)
-        
+
         self.abortConnection(closeWrite=False)
         self.writeHeaders(errorcode, headers)
         self.write(text)
@@ -632,7 +632,7 @@
         self.finish()
         log.warn("Aborted request (%d) %s" % (errorcode, text))
         raise AbortedException
-    
+
     def _cleanup(self):
         """Called when have finished responding and are no longer queued."""
         if self.producer:
@@ -640,7 +640,7 @@
             self.unregisterProducer()
         self.channel.requestWriteFinished(self)
         del self.transport
-        
+
     # methods for channel - end users should not use these
 
     def noLongerQueued(self):
@@ -674,12 +674,12 @@
     def registerProducer(self, producer, streaming):
         """Register a producer.
         """
-        
+
         if self.producer:
             raise ValueError, "registering producer %s before previous one (%s) was unregistered" % (producer, self.producer)
-        
+
         self.producer = producer
-        
+
         if self.queued:
             producer.pauseProducing()
         else:
@@ -698,7 +698,7 @@
             self.producer = None
         if self.request:
             self.request.connectionLost(reason)
-    
+
 class HTTPChannel(basic.LineReceiver, policies.TimeoutMixin, object):
     """A receiver for HTTP requests. Handles splitting up the connection
     for the multiple HTTPChannelRequests that may be in progress on this
@@ -714,11 +714,11 @@
     the client.
 
     """
-    
+
     implements(interfaces.IHalfCloseableProtocol)
-    
+
     ## Configuration parameters. Set in instances or subclasses.
-    
+
     # How many simultaneous requests to handle.
     maxPipeline = 4
 
@@ -736,35 +736,35 @@
 
     # Allow persistent connections?
     allowPersistentConnections = True
-    
+
     # ChannelRequest
     chanRequestFactory = HTTPChannelRequest
     requestFactory = http.Request
-    
-    
+
+
     _first_line = 2
     readPersistent = PERSIST_PIPELINE
-    
+
     _readLost = False
     _writeLost = False
-    
+
     _abortTimer = None
     chanRequest = None
 
     def _callLater(self, secs, fun):
         reactor.callLater(secs, fun)
-    
+
     def __init__(self):
         # the request queue
         self.requests = []
-        
+
     def connectionMade(self):
         self._secure = interfaces.ISSLTransport(self.transport, None) is not None
         address = self.transport.getHost()
         self._host = _cachedGetHostByAddr(address.host)
         self.setTimeout(self.inputTimeOut)
         self.factory.addConnectedChannel(self)
-    
+
     def lineReceived(self, line):
         if self._first_line:
             self.setTimeout(self.inputTimeOut)
@@ -779,13 +779,13 @@
             if not line and self._first_line == 1:
                 self._first_line = 2
                 return
-            
+
             self._first_line = 0
-            
+
             if not self.allowPersistentConnections:
                 # Don't allow a second request
                 self.readPersistent = False
-                
+
             try:
                 self.chanRequest = self.chanRequestFactory(self, len(self.requests))
                 self.requests.append(self.chanRequest)
@@ -801,7 +801,7 @@
     def lineLengthExceeded(self, line):
         if self._first_line:
             # Fabricate a request object to respond to the line length violation.
-            self.chanRequest = self.chanRequestFactory(self, 
+            self.chanRequest = self.chanRequestFactory(self,
                                                        len(self.requests))
             self.requests.append(self.chanRequest)
             self.chanRequest.gotInitialLine("GET fake HTTP/1.0")
@@ -809,7 +809,7 @@
             self.chanRequest.lineLengthExceeded(line, self._first_line)
         except AbortedException:
             pass
-            
+
     def rawDataReceived(self, data):
         self.setTimeout(self.inputTimeOut)
         try:
@@ -821,17 +821,17 @@
         if(self.readPersistent is PERSIST_NO_PIPELINE or
            len(self.requests) >= self.maxPipeline):
             self.pauseProducing()
-        
+
         # reset state variables
         self._first_line = 1
         self.chanRequest = None
         self.setLineMode()
-        
+
         # Set an idle timeout, in case this request takes a long
         # time to finish generating output.
         if len(self.requests) > 0:
             self.setTimeout(self.idleTimeOut)
-        
+
     def _startNextRequest(self):
         # notify next request, if present, it can start writing
         del self.requests[0]
@@ -840,7 +840,7 @@
             self.transport.loseConnection()
         elif self.requests:
             self.requests[0].noLongerQueued()
-            
+
             # resume reading if allowed to
             if(not self._readLost and
                self.readPersistent is not PERSIST_NO_PIPELINE and
@@ -866,11 +866,11 @@
         for request in self.requests[1:]:
             request.connectionLost(None)
         del self.requests[1:]
-    
+
     def isLastRequest(self, request):
         # Is this channel handling the last possible request
         return not self.readPersistent and self.requests[-1] == request
-    
+
     def requestWriteFinished(self, request):
         """Called by first request in queue when it is done."""
         if request != self.requests[0]: raise TypeError
@@ -878,7 +878,7 @@
         # Don't del because we haven't finished cleanup, so,
         # don't want queue len to be 0 yet.
         self.requests[0] = None
-        
+
         if self.readPersistent or len(self.requests) > 1:
             # Do this in the next reactor loop so as to
             # not cause huge call stacks with fast
@@ -910,26 +910,26 @@
             self._abortTimer = None
             self.transport.loseConnection()
             return
-        
+
         # If between requests, drop connection
         # when all current requests have written their data.
         self._readLost = True
         if not self.requests:
             # No requests in progress, lose now.
             self.transport.loseConnection()
-            
+
         # If currently in the process of reading a request, this is
         # probably a client abort, so lose the connection.
         if self.chanRequest:
             self.transport.loseConnection()
-        
+
     def connectionLost(self, reason):
         self.factory.removeConnectedChannel(self)
 
         self._writeLost = True
         self.readConnectionLost()
         self.setTimeout(None)
-        
+
         # Tell all requests to abort.
         for request in self.requests:
             if request is not None:
@@ -963,7 +963,7 @@
     """
 
     protocol = HTTPChannel
-    
+
     protocolArgs = None
 
     def __init__(self, requestFactory, maxRequests=600, **kwargs):
@@ -977,9 +977,9 @@
     def buildProtocol(self, addr):
         if self.outstandingRequests >= self.maxRequests:
             return OverloadedServerProtocol()
-        
+
         p = protocol.ServerFactory.buildProtocol(self, addr)
-        
+
         for arg,value in self.protocolArgs.iteritems():
             setattr(p, arg, value)
         return p
@@ -1050,19 +1050,19 @@
         return p
 
 class HTTPLoggingChannelRequest(HTTPChannelRequest):
-    
+
     class TransportLoggingWrapper(object):
-        
+
         def __init__(self, transport, logData):
-            
+
             self.transport = transport
             self.logData = logData
-            
+
         def write(self, data):
             if self.logData is not None and data:
                 self.logData.append(data)
             self.transport.write(data)
-            
+
         def writeSequence(self, seq):
             if self.logData is not None and seq:
                 self.logData.append(''.join(seq))
@@ -1075,7 +1075,7 @@
         def __init__(self):
             self.request = []
             self.response = []
-            
+
     def __init__(self, channel, queued=0):
         super(HTTPLoggingChannelRequest, self).__init__(channel, queued)
 
@@ -1093,7 +1093,7 @@
         super(HTTPLoggingChannelRequest, self).gotInitialLine(initialLine)
 
     def lineReceived(self, line):
-        
+
         if self.logData is not None:
             # We don't want to log basic credentials
             loggedLine = line
@@ -1105,13 +1105,13 @@
         super(HTTPLoggingChannelRequest, self).lineReceived(line)
 
     def handleContentChunk(self, data):
-        
+
         if self.logData is not None:
             self.logData.request.append(data)
         super(HTTPLoggingChannelRequest, self).handleContentChunk(data)
-        
+
     def handleContentComplete(self):
-        
+
         if self.logData is not None:
             doneTime = time.time()
             self.logData.request.append("\r\n\r\n>>>> Request complete at: %.3f (elapsed: %.1f ms)" % (doneTime, 1000 * (doneTime - self.startTime),))
@@ -1124,7 +1124,7 @@
         super(HTTPLoggingChannelRequest, self).writeHeaders(code, headers)
 
     def finish(self):
-        
+
         super(HTTPLoggingChannelRequest, self).finish()
 
         if self.logData is not None:

Modified: CalendarServer/branches/users/sagen/move2who-2/txweb2/server.py
===================================================================
--- CalendarServer/branches/users/sagen/move2who-2/txweb2/server.py	2014-03-12 18:28:29 UTC (rev 12880)
+++ CalendarServer/branches/users/sagen/move2who-2/txweb2/server.py	2014-03-12 18:49:18 UTC (rev 12881)
@@ -192,7 +192,7 @@
                        error.defaultErrorHandler, defaultHeadersFilter]
 
     def __init__(self, *args, **kw):
-        
+
         self.timeStamps = [("t", time.time(),)]
 
         if kw.has_key('site'):
@@ -308,10 +308,10 @@
         clients into using an inappropriate scheme for subsequent requests. What we should do is
         take the port number from the Host header or request-URI and map that to the scheme that
         matches the service we configured to listen on that port.
- 
+
         @param port: the port number to test
         @type port: C{int}
-        
+
         @return: C{True} if scheme is https (secure), C{False} otherwise
         @rtype: C{bool}
         """
@@ -322,7 +322,7 @@
                 return True
             elif port in self.site.BindSSLPorts:
                 return True
-        
+
         return False
 
     def _fixupURLParts(self):
@@ -558,7 +558,7 @@
                 break
             else:
                 postSegments.insert(0, preSegments.pop())
-        
+
         if cachedParent is None:
             cachedParent = self.site.resource
             postSegments = segments[1:]