[CalendarServer-changes] [15301] CalendarServer/branches/users/cdaboo/newlog

source_changes at macosforge.org
Mon Nov 9 16:06:26 PST 2015


Revision: 15301
          http://trac.calendarserver.org//changeset/15301
Author:   cdaboo at apple.com
Date:     2015-11-09 16:06:26 -0800 (Mon, 09 Nov 2015)
Log Message:
-----------
Update log messages to use the new .format() syntax. Switch over to the new Twisted logging module.
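The pattern this commit applies throughout can be illustrated outside the logging machinery itself: the new Twisted logging module renders a message's `{field}` placeholders from keyword arguments with str.format semantics, instead of eagerly %-interpolating at the call site. A minimal sketch of that rendering step (the `render` helper below is a hypothetical stand-in, not CalendarServer or Twisted API):

```python
# Editor's sketch: mimic how the new-style log calls render their message.
# twisted.logger formats the event's format string with str.format-style
# fields supplied as keyword arguments; `render` stands in for the
# observer-side formatting step.

def render(fmt, **fields):
    """Render a new-style log format string from keyword fields."""
    return fmt.format(**fields)

# Old style: eager %-interpolation, producing plain text at the call site.
old = "Cannot rotate log file to %s because it already exists." % (
    "/var/log/access.log.1",
)

# New style: format string and fields travel separately in the event;
# rendering happens only when an observer formats it.
new = render(
    "Cannot rotate log file to '{path}' because it already exists.",
    path="/var/log/access.log.1",
)
```

Keeping the fields structured also lets observers filter or index on them (e.g. `path`) rather than parsing rendered text.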

Modified Paths:
--------------
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/accesslog.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/provision/root.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/amppush.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/applepush.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/notifier.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/util.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/tap/caldav.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/tap/test/test_caldav.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/cmdline.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/dkimtool.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/purge.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/shell/cmd.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/shell/terminal.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/shell/vfs.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/upgrade.py
    CalendarServer/branches/users/cdaboo/newlog/calendarserver/webcal/resource.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/accounting.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/authkerb.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/caldavxml.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/client/geturl.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/client/pool.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/client/reverseproxy.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/database.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/augment.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/calendar.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/common.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/principal.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/util.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directorybackedaddressbook.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/extensions.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/freebusyurl.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/ical.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/localization.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcacheclient.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcachelock.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcachepool.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcacheprops.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcacher.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/mkcalendar.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/mkcol.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/post.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/propfind.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_addressbook_query.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_calendar_query.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_freebusy.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_sync_collection.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/resource.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/scheduling_store/caldav/resource.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/sharing.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/sql.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/stdconfig.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/storebridge.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/test/test_config.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/test/util.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/timezones.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/timezonestdservice.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/upgrade.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/util.py
    CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/vcard.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/base/datastore/subpostgres.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/base/datastore/util.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/base/propertystore/test/base.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/file.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/index_file.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/query/filter.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/caldav/delivery.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/freebusy.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/icaldiff.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/imip/inbound.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/imip/outbound.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/implicit.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/ischedule/dkim.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/itip.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/processing.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/utils.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/work.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/sql.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/test/test_implicit.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/test/test_sql.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/util.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/file.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/index_file.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/sql.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/test/test_sql.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/util.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/file.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/podding/migration/home_sync.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/sql.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/test/test_sql.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/test/util.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/upgrade/migrate.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/upgrade/sql/upgrade.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/work/inbox_cleanup.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/work/load_work.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/dps/server.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/who/delegates.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/who/directory.py
    CalendarServer/branches/users/cdaboo/newlog/txdav/xml/base.py
    CalendarServer/branches/users/cdaboo/newlog/txweb2/dav/http.py
    CalendarServer/branches/users/cdaboo/newlog/txweb2/http.py
    CalendarServer/branches/users/cdaboo/newlog/txweb2/log.py

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/accesslog.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/accesslog.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/accesslog.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -359,10 +359,10 @@
 
         newpath = "%s.%s" % (self.logpath, self.suffix(self.lastDate))
         if os.path.exists(newpath):
-            log.info("Cannot rotate log file to %s because it already exists." % (newpath,))
+            log.info("Cannot rotate log file to '{path}' because it already exists.", path=newpath)
             return
         self.accessLog("Log closed - rotating: [%s]." % (datetime.datetime.now().ctime(),), False)
-        log.info("Rotating log file to: %s" % (newpath,), system="Logging")
+        log.info("Rotating log file to: '{path}'", path=newpath, system="Logging")
         self.f.close()
         os.rename(self.logpath, newpath)
         self._open()

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/provision/root.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/provision/root.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/provision/root.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -249,9 +249,9 @@
                             )
                         else:
                             log.error(
-                                "Failed to look up wiki token {token}: "
-                                "{message}",
-                                token=token, message=w.message
+                                "Failed to look up wiki token {token}: {msg}",
+                                token=token,
+                                msg=w.message
                             )
 
                     except Exception as e:

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/amppush.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/amppush.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/amppush.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -267,7 +267,7 @@
 
     def notify(self, token, id, dataChangedTimestamp, priority):
         if self.subscribedToID(id) == token:
-            self.log.debug("Sending notification for %s to %s" % (id, token))
+            self.log.debug("Sending notification for {id} to {token}", id=id, token=token)
             return self.callRemote(
                 NotificationForID, id=id,
                 dataChangedTimestamp=dataChangedTimestamp,

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/applepush.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/applepush.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/applepush.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -133,8 +133,9 @@
                 provider.setServiceParent(service)
                 service.providers[protocol] = provider
                 service.log.info(
-                    "APNS %s topic: %s" %
-                    (protocol, settings[protocol]["Topic"]))
+                    "APNS {proto} topic: {topic}",
+                    proto=protocol, topic=settings[protocol]["Topic"]
+                )
 
                 feedback = APNFeedbackService(
                     service.store,
@@ -218,7 +219,7 @@
             protocol = pushKey.split("/")[1]
         except ValueError:
             # pushKey has no protocol, so we can't do anything with it
-            self.log.error("Push key '%s' is missing protocol" % (pushKey,))
+            self.log.error("Push key '{key}' is missing protocol", key=pushKey)
             return
 
         # Unit tests can pass this value in; otherwise it defaults to now
@@ -234,8 +235,9 @@
             numSubscriptions = len(subscriptions)
             if numSubscriptions > 0:
                 self.log.debug(
-                    "Sending %d APNS notifications for %s" %
-                    (numSubscriptions, pushKey))
+                    "Sending {num} APNS notifications for {key}",
+                    num=numSubscriptions, key=pushKey
+                )
                 tokens = [record.token for record in subscriptions if record.token and record.subscriberGUID]
                 if tokens:
                     provider.scheduleNotifications(
@@ -292,7 +294,7 @@
 
 
     def connectionLost(self, reason=None):
-        # self.log.debug("ProviderProtocol connectionLost: %s" % (reason,))
+        # self.log.debug("ProviderProtocol connectionLost: {reason}", reason=reason)
         # Clear the reference to us from the factory
         self.factory.connection = None
 
@@ -307,7 +309,7 @@
         if fn is None:
             fn = self.processError
 
-        self.log.debug("ProviderProtocol dataReceived %d bytes" % (len(data),))
+        self.log.debug("ProviderProtocol dataReceived {len} bytes", len=len(data))
         self.buffer += data
 
         while len(self.buffer) >= self.MESSAGE_LENGTH:
@@ -320,8 +322,9 @@
                     yield fn(status, identifier)
             except Exception, e:
                 self.log.warn(
-                    "ProviderProtocol could not process error: %s (%s)" %
-                    (message.encode("hex"), e))
+                    "ProviderProtocol could not process error: {code} ({ex})",
+                    code=message.encode("hex"), ex=e
+                )
 
 
     @inlineCallbacks
@@ -339,19 +342,21 @@
         @type status: C{int}
         """
         msg = self.STATUS_CODES.get(status, "Unknown status code")
-        self.log.info("Received APN error %d on identifier %d: %s" % (status, identifier, msg))
+        self.log.info("Received APN error {status} on identifier {id}: {msg}", status=status, id=identifier, msg=msg)
         if status in self.TOKEN_REMOVAL_CODES:
             token = self.history.extractIdentifier(identifier)
             if token is not None:
                 self.log.debug(
-                    "Removing subscriptions for bad token: %s" %
-                    (token,))
+                    "Removing subscriptions for bad token: {token}",
+                    token=token,
+                )
                 txn = self.factory.store.newTransaction(label="APNProviderProtocol.processError")
                 subscriptions = (yield txn.apnSubscriptionsByToken(token))
                 for record in subscriptions:
                     self.log.debug(
-                        "Removing subscription: %s %s" %
-                        (token, record.resourceKey))
+                        "Removing subscription: {token} {key}",
+                        token=token, key=record.resourceKey
+                    )
                     yield txn.removeAPNSubscription(token, record.resourceKey)
                 yield txn.commit()
 
@@ -376,7 +381,7 @@
         try:
             binaryToken = token.replace(" ", "").decode("hex")
         except:
-            self.log.error("Invalid APN token in database: %s" % (token,))
+            self.log.error("Invalid APN token in database: {token}", token=token)
             return
 
         identifier = self.history.add(token)
@@ -485,12 +490,12 @@
 
     def clientConnectionLost(self, connector, reason):
         if not self.shuttingDown:
-            self.log.info("Connection to APN server lost: %s" % (reason,))
+            self.log.info("Connection to APN server lost: {reason}", reason=reason)
         ReconnectingClientFactory.clientConnectionLost(self, connector, reason)
 
 
     def clientConnectionFailed(self, connector, reason):
-        self.log.error("Unable to connect to APN server: %s" % (reason,))
+        self.log.error("Unable to connect to APN server: {reason}", reason=reason)
         self.connected = False
         ReconnectingClientFactory.clientConnectionFailed(
             self, connector,
@@ -653,10 +658,10 @@
             tokenKeyPair = (token, key)
             for existingPair, _ignore_timstamp, priority in self.queue:
                 if tokenKeyPair == existingPair:
-                    self.log.debug("APNProviderService has no connection; skipping duplicate: %s %s" % (token, key))
+                    self.log.debug("APNProviderService has no connection; skipping duplicate: {token} {key}", token=token, key=key)
                     break # Already scheduled
             else:
-                self.log.debug("APNProviderService has no connection; queuing: %s %s" % (token, key))
+                self.log.debug("APNProviderService has no connection; queuing: {token} {key}", token=token, key=key)
                 self.queue.append(((token, key), dataChangedTimestamp, priority))
 
 
@@ -708,7 +713,7 @@
         if fn is None:
             fn = self.processFeedback
 
-        self.log.debug("FeedbackProtocol dataReceived %d bytes" % (len(data),))
+        self.log.debug("FeedbackProtocol dataReceived {len} bytes", len=len(data))
         self.buffer += data
 
         while len(self.buffer) >= self.MESSAGE_LENGTH:
@@ -723,8 +728,9 @@
                 yield fn(timestamp, token)
             except Exception, e:
                 self.log.warn(
-                    "FeedbackProtocol could not process message: %s (%s)" %
-                    (message.encode("hex"), e))
+                    "FeedbackProtocol could not process message: {code} ({ex})",
+                    code=message.encode("hex"), ex=e
+                )
 
 
     @inlineCallbacks
@@ -743,16 +749,18 @@
         """
 
         self.log.debug(
-            "FeedbackProtocol processFeedback time=%d token=%s" %
-            (timestamp, token))
+            "FeedbackProtocol processFeedback time={time} token={token}",
+            time=timestamp, token=token
+        )
         txn = self.factory.store.newTransaction(label="APNFeedbackProtocol.processFeedback")
         subscriptions = (yield txn.apnSubscriptionsByToken(token))
 
         for record in subscriptions:
             if timestamp > record.modified:
                 self.log.debug(
-                    "FeedbackProtocol removing subscription: %s %s" %
-                    (token, record.resourceKey))
+                    "FeedbackProtocol removing subscription: {token} {key}",
+                    token=token, key=record.resourceKey,
+                )
                 yield txn.removeAPNSubscription(token, record.resourceKey)
         yield txn.commit()
 
@@ -769,8 +777,9 @@
 
     def clientConnectionFailed(self, connector, reason):
         self.log.error(
-            "Unable to connect to APN feedback server: %s" %
-            (reason,))
+            "Unable to connect to APN feedback server: {reason}",
+            reason=reason,
+        )
         self.connected = False
         ClientFactory.clientConnectionFailed(self, connector, reason)
 

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/notifier.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/notifier.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/notifier.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -107,12 +107,12 @@
 
 
     def enableNotify(self, arg):
-        self.log.debug("enableNotify: %s" % (self._ids['default'][1],))
+        self.log.debug("enableNotify: {id}", id=self._ids['default'][1])
         self._notify = True
 
 
     def disableNotify(self):
-        self.log.debug("disableNotify: %s" % (self._ids['default'][1],))
+        self.log.debug("disableNotify: {id}", id=self._ids['default'][1])
         self._notify = False
 
 
@@ -138,15 +138,17 @@
         for prefix, id in ids:
             if self._notify:
                 self.log.debug(
-                    "Notifications are enabled: %s %s/%s priority=%d" %
-                    (self._storeObject, prefix, id, priority.value))
+                    "Notifications are enabled: {obj} {prefix}/{id} priority={priority}",
+                    obj=self._storeObject, prefix=prefix, id=id, priority=priority.value
+                )
                 yield self._notifierFactory.send(
                     prefix, id, txn,
                     priority=priority)
             else:
                 self.log.debug(
-                    "Skipping notification for: %s %s/%s" %
-                    (self._storeObject, prefix, id,))
+                    "Skipping notification for: {obj} {prefix}/{id}",
+                    obj=self._storeObject, prefix=prefix, id=id,
+                )
 
 
     def clone(self, storeObject):

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/util.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/push/util.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -189,15 +189,17 @@
             internalKey = (token, key)
             if internalKey in self.outstanding:
                 self.log.debug(
-                    "PushScheduler already has this scheduled: %s" %
-                    (internalKey,))
+                    "PushScheduler already has this scheduled: {key}",
+                    key=internalKey,
+                )
             else:
                 self.outstanding[internalKey] = self.reactor.callLater(
                     scheduleTime, self.send, token, key, dataChangedTimestamp,
                     priority)
                 self.log.debug(
-                    "PushScheduler scheduled: %s in %.0f sec" %
-                    (internalKey, scheduleTime))
+                    "PushScheduler scheduled: {key} in {time:.0f} sec",
+                    key=internalKey, time=scheduleTime
+                )
                 scheduleTime += self.staggerSeconds
 
 
@@ -215,7 +217,7 @@
             which triggered this notification
         @type key: C{int}
         """
-        self.log.debug("PushScheduler fired for %s %s %d" % (token, key, dataChangedTimestamp))
+        self.log.debug("PushScheduler fired for {token} {key} {time}", token=token, key=key, time=dataChangedTimestamp)
         del self.outstanding[(token, key)]
         return self.callback(token, key, dataChangedTimestamp, priority)
 
@@ -225,5 +227,5 @@
         Cancel all outstanding delayed calls
         """
         for (token, key), delayed in self.outstanding.iteritems():
-            self.log.debug("PushScheduler cancelling %s %s" % (token, key))
+            self.log.debug("PushScheduler cancelling {token} {key}", token=token, key=key)
             delayed.cancel()
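One detail worth noting in the push/util.py hunks above: format specifications carry over to the new syntax, so printf-style `%.0f` becomes a `{time:.0f}` field. The str.format semantics can be checked directly (`render` is a hypothetical stand-in for the observer-side formatting, not part of the codebase):

```python
# Editor's sketch: new-style log formats support str.format specs such as
# {time:.0f}, replacing printf-style %.0f.

def render(fmt, **fields):
    return fmt.format(**fields)

# Hypothetical token/key values for illustration only.
msg = render(
    "PushScheduler scheduled: {key} in {time:.0f} sec",
    key=("sometoken", "/CalDAV/calendar"),
    time=2.71828,
)
```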

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/tap/caldav.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/tap/caldav.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/tap/caldav.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -40,7 +40,7 @@
 
 from twistedcaldav.stdconfig import config
 
-from twisted.python.log import FileLogObserver, ILogObserver
+from twisted.python.log import ILogObserver
 from twisted.python.logfile import LogFile
 from twisted.python.usage import Options, UsageError
 from twisted.python.util import uidFromString, gidFromString
@@ -56,9 +56,10 @@
 from twisted.application.service import MultiService, IServiceMaker
 from twisted.application.service import Service
 from twisted.protocols.amp import AMP
+from twisted.logger import LogLevel
 
 from twext.python.filepath import CachingFilePath
-from twext.python.log import Logger, LogLevel, replaceTwistedLoggers
+from twext.python.log import Logger
 from twext.internet.fswatch import (
     DirectoryChangeListener, IDirectoryChangeListenee
 )
@@ -254,13 +255,16 @@
                 rotateLength=self.logRotateLength,
                 maxRotatedFiles=self.logMaxFiles
             )
-            errorLogObserver = FileLogObserver(errorLogFile).emit
             if self.logRotateOnStart:
                 errorLogFile.rotate()
+        else:
+            errorLogFile = sys.stdout
+        withTime = not (not config.ErrorLogFile and config.ProcessType in ("Slave", "DPS",))
+        errorLogObserver = Logger.makeFilteredFileLogObserver(errorLogFile, withTime=withTime)
 
-            # Registering ILogObserver with the Application object
-            # gets our observer picked up within AppLogger.start( )
-            app.setComponent(ILogObserver, errorLogObserver)
+        # Registering ILogObserver with the Application object
+        # gets our observer picked up within AppLogger.start( )
+        app.setComponent(ILogObserver, errorLogObserver)
 
 
 
@@ -452,13 +456,6 @@
 
         self.checkDirectories(config)
 
-        #
-        # Nuke the file log observer's time format.
-        #
-
-        if not config.ErrorLogFile and config.ProcessType == "Slave":
-            FileLogObserver.timeFormat = ""
-
         # Check current umask and warn if changed
         oldmask = umask(config.umask)
         if oldmask != config.umask:
@@ -534,6 +531,9 @@
             "--reactor={}".format(config.Twisted.reactor),
             "-n", "caldav_directoryproxy",
             "-f", self.configPath,
+            "-o", "ProcessType=DPS",
+            "-o", "ErrorLogFile=None",
+            "-o", "ErrorLogEnabled=False",
         ))
         self.monitor.addProcess(
             "directoryproxy", dpsArgv, env=PARENT_ENVIRONMENT
@@ -746,7 +746,6 @@
         """
         Create the top-level service.
         """
-        replaceTwistedLoggers()
 
         self.log.info(
             "{log_source.description} {version} starting "
@@ -988,8 +987,8 @@
         # Change default log level to "info" as it's useful to have
         # that during startup
         #
-        oldLogLevel = log.publisher.levels.logLevelForNamespace(None)
-        log.publisher.levels.setLogLevelForNamespace(None, LogLevel.info)
+        oldLogLevel = log.levels().logLevelForNamespace(None)
+        log.levels().setLogLevelForNamespace(None, LogLevel.info)
 
         # Note: 'additional' was used for IMIP reply resource, and perhaps
         # we can remove this
@@ -1175,7 +1174,7 @@
                     ).setServiceParent(connectionService)
 
         # Change log level back to what it was before
-        log.publisher.levels.setLogLevelForNamespace(None, oldLogLevel)
+        log.levels().setLogLevelForNamespace(None, oldLogLevel)
         return service
 
 
@@ -2559,8 +2558,9 @@
 
 
     def lineReceived(self, line):
-        from twisted.python.log import msg
-        msg("[{}] {}".format(self.tag, line))
+        # Hand off slave log entry to logging system as critical to ensure it is always
+        # displayed (as the slave will have filtered out unwanted entries itself).
+        log.critical("{msg}", msg=line, log_system=self.tag, isError=False)
 
 
     def lineLengthExceeded(self, line):

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/tap/test/test_caldav.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/tap/test/test_caldav.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/tap/test/test_caldav.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -178,7 +178,7 @@
         finally:
             sys.stdout = oldout
             log.info(
-                "load configuration console output: %s" % newout.getvalue()
+                "load configuration console output: {result}", result=newout.getvalue()
             )
 
 
@@ -1087,10 +1087,9 @@
                 return
             if event.get('isError'):
                 d.errback()
-            m = event.get('message')[0]
-            if m.startswith('[Dummy] '):
-                logged.append(event)
-                if m == '[Dummy] z':
+            if event.get("log_system") == u'Dummy':
+                logged.append(event["msg"])
+                if event["msg"] == u'z':
                     d.callback("done")
 
         logging.addObserver(tempObserver)
@@ -1098,18 +1097,16 @@
         d = Deferred()
 
         def assertions(result):
-            self.assertEquals(["[Dummy] x",
-                               "[Dummy] y",
-                               "[Dummy] y",  # final segment
-                               "[Dummy] z"],
-                              [''.join(evt['message'])[:len('[Dummy]') + 2]
-                               for evt in logged])
+            self.assertEquals(["x",
+                               "y",
+                               "y",  # final segment
+                               "z"],
+                              [msg[0] for msg in logged])
             self.assertEquals([" (truncated, continued)",
                                " (truncated, continued)",
-                               "[Dummy] y",
-                               "[Dummy] z"],
-                              [''.join(evt['message'])[-len(" (truncated, continued)"):]
-                               for evt in logged])
+                               "y",
+                               "z"],
+                              [msg[-len(" (truncated, continued)"):] for msg in logged])
         d.addCallback(assertions)
         return d
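The updated assertions reflect the event shape the new logging module emits: the slave's tag now arrives as the event's `log_system` key rather than a `[Dummy] ` prefix baked into the message text. A minimal observer in that style (editor's sketch with simplified event dicts, not the test's actual code):

```python
# Editor's sketch: match log events on their "log_system" key instead of
# parsing a bracketed prefix out of the rendered message. The observer and
# event dicts are simplified stand-ins for twisted.logger events.

def make_observer(system, sink):
    def observer(event):
        if event.get("log_system") == system:
            sink.append(event["msg"])
    return observer

logged = []
observer = make_observer(u"Dummy", logged)
observer({"log_system": u"Dummy", "msg": u"x"})
observer({"log_system": u"Other", "msg": u"ignored"})
observer({"log_system": u"Dummy", "msg": u"z"})
```

Filtering on structured keys is what lets the test drop the string-slicing it previously needed to strip the `[Dummy] ` prefix.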
 

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/cmdline.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/cmdline.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/cmdline.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -18,22 +18,23 @@
 Shared main-point between utilities.
 """
 
-from calendarserver.tap.util import checkDirectories
+from calendarserver.tap.util import checkDirectories, getRootResource
 from calendarserver.tools.util import loadConfig, autoDisableMemcached
 
-from twext.python.log import StandardIOObserver
+from twext.enterprise.jobs.queue import NonPerformingQueuer
+from twext.python.log import Logger
 
 from twistedcaldav.config import ConfigurationError
+from twistedcaldav.timezones import TimezoneCache
+
+from twisted.application.service import Service
 from twisted.internet.defer import inlineCallbacks, succeed
-from twisted.application.service import Service
+from twisted.logger import STDLibLogObserver
 from twisted.python.logfile import LogFile
-from twisted.python.log import FileLogObserver
+from twisted.python.log import addObserver
 
 import sys
-from calendarserver.tap.util import getRootResource
 from errno import ENOENT, EACCES
-from twext.enterprise.jobs.queue import NonPerformingQueuer
-from twistedcaldav.timezones import TimezoneCache
 
 # TODO: direct unit tests for these functions.
 
@@ -87,7 +88,7 @@
 
     # Install std i/o observer
     if verbose:
-        observer = StandardIOObserver()
+        observer = STDLibLogObserver()
         observer.start()
 
     if reactor is None:
@@ -104,8 +105,8 @@
             rotateLength=config.ErrorLogRotateMB * 1024 * 1024,
             maxRotatedFiles=config.ErrorLogMaxRotatedFiles
         )
-        utilityLogObserver = FileLogObserver(utilityLogFile)
-        utilityLogObserver.start()
+        utilityLogObserver = Logger.makeFilteredFileLogObserver(utilityLogFile)
+        addObserver(utilityLogObserver)
 
         config.ProcessType = "Utility"
         config.UtilityServiceClass = _makeValidService

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/dkimtool.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/dkimtool.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/dkimtool.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -22,9 +22,10 @@
 
 from twisted.internet import reactor
 from twisted.internet.defer import inlineCallbacks
+from twisted.logger import LogLevel, STDLibLogObserver
 from twisted.python.usage import Options
 
-from twext.python.log import Logger, LogLevel, StandardIOObserver
+from twext.python.log import Logger
 from txweb2.http_headers import Headers
 
 from txdav.caldav.datastore.scheduling.ischedule.dkim import RSA256, DKIMRequest, \
@@ -69,7 +70,7 @@
 def _doRequest(options):
 
     if options["verbose"]:
-        log.publisher.levels.setLogLevelForNamespace("txdav.caldav.datastore.scheduling.ischedule.dkim", LogLevel.debug)
+        log.levels().setLogLevelForNamespace("txdav.caldav.datastore.scheduling.ischedule.dkim", LogLevel.debug)
 
     # Parse the HTTP file
     with open(options["request"]) as f:
@@ -300,11 +301,11 @@
     #
     # Send logging output to stdout
     #
-    observer = StandardIOObserver()
+    observer = STDLibLogObserver()
     observer.start()
 
     if options["verbose"]:
-        log.publisher.levels.setLogLevelForNamespace("txdav.caldav.datastore.scheduling.ischedule.dkim", LogLevel.debug)
+        log.levels().setLogLevelForNamespace("txdav.caldav.datastore.scheduling.ischedule.dkim", LogLevel.debug)
 
     if options["key-gen"]:
         _doKeyGeneration(options)

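The `log.levels().setLogLevelForNamespace(...)` calls above scope a log level to one dotted namespace (here the dkim module) while the rest of the tree keeps its default. A simplified stand-in for that lookup, assuming the usual most-specific-prefix-wins behavior:

```python
# Sketch of namespace-scoped log levels: the level set for a namespace
# applies to it and to every namespace nested under it.
LEVELS = ["debug", "info", "warn", "error"]

class LogLevels(object):
    def __init__(self, default="info"):
        self._levels = {}
        self._default = default

    def setLogLevelForNamespace(self, namespace, level):
        self._levels[namespace or ""] = level

    def levelForNamespace(self, namespace):
        # Walk from the most specific dotted prefix to the least.
        while namespace:
            if namespace in self._levels:
                return self._levels[namespace]
            namespace = namespace.rpartition(".")[0]
        return self._levels.get("", self._default)

    def willLog(self, namespace, level):
        return LEVELS.index(level) >= LEVELS.index(self.levelForNamespace(namespace))

levels = LogLevels()
levels.setLogLevelForNamespace(
    "txdav.caldav.datastore.scheduling.ischedule.dkim", "debug")
print(levels.willLog(
    "txdav.caldav.datastore.scheduling.ischedule.dkim.verify", "debug"))  # True
print(levels.willLog("txdav.base", "debug"))  # False
```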
Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/purge.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/purge.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/purge.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -420,7 +420,7 @@
         """
         Find all the calendar homes that match the uuid cli argument.
         """
-        log.debug("Searching for calendar homes matching: '{}'".format(self.uuid))
+        log.debug("Searching for calendar homes matching: '{uid}'", uid=self.uuid)
         txn = self.store.newTransaction(label="Find matching homes")
         ch = schema.CALENDAR_HOME
         if self.uuid:
@@ -437,7 +437,7 @@
             ).on(txn))
 
         yield txn.commit()
-        log.debug("  Found {} calendar homes".format(len(rows)))
+        log.debug("  Found {len} calendar homes", len=len(rows))
         returnValue(sorted(rows, key=lambda x: x[1]))
 
 
@@ -451,7 +451,7 @@
         @param owner_uid: owner UUID of home to check
         @type owner_uid: L{str}
         """
-        log.debug("Checking calendar home: {} '{}'".format(home_id, owner_uid))
+        log.debug("Checking calendar home: {id} '{uid}'", id=home_id, uid=owner_uid)
         txn = self.store.newTransaction(label="Find matching calendars")
         cb = schema.CALENDAR_BIND
         kwds = {"home_id": home_id}
@@ -463,7 +463,7 @@
             ),
         ).on(txn, **kwds))
         yield txn.commit()
-        log.debug("  Found {} calendars".format(len(rows)))
+        log.debug("  Found {len} calendars", len=len(rows))
         returnValue(rows)
 
 
@@ -483,7 +483,7 @@
         @type calendar_name: L{str}
         """
 
-        log.debug("  Checking calendar: {} '{}'".format(calendar_id, calendar_name))
+        log.debug("  Checking calendar: {id} '{name}'", id=calendar_id, name=calendar_name)
         purge = set()
         txn = self.store.newTransaction(label="Find matching resources")
         co = schema.CALENDAR_OBJECT
@@ -503,7 +503,7 @@
             ),
         ).on(txn, **kwds))
 
-        log.debug("    Found {} resources to check".format(len(rows)))
+        log.debug("    Found {len} resources to check", len=len(rows))
         for resource_id, recurrence_max, recurrence_min, max_end_date in rows:
 
             recurrence_max = parseSQLDateToPyCalendar(recurrence_max) if recurrence_max else None
@@ -530,7 +530,7 @@
                     purge.add(self.PurgeEvent(home_id, calendar_id, resource_id,))
 
         yield txn.commit()
-        log.debug("    Found {} resources to purge".format(len(purge)))
+        log.debug("    Found {len} resources to purge", len=len(purge))
         returnValue(purge)
 
 
@@ -639,13 +639,14 @@
                 last_calendar = event.calendar
             resource = (yield calendar.objectResourceWithID(event.resource))
             yield resource.purge(implicitly=False)
-            log.debug("Removed resource {} '{}' from calendar {} '{}' of calendar home '{}'".format(
-                resource.id(),
-                resource.name(),
-                resource.parentCollection().id(),
-                resource.parentCollection().name(),
-                resource.parentCollection().ownerHome().uid()
-            ))
+            log.debug(
+                "Removed resource {id} '{name}' from calendar {pid} '{pname}' of calendar home '{uid}'",
+                id=resource.id(),
+                name=resource.name(),
+                pid=resource.parentCollection().id(),
+                pname=resource.parentCollection().name(),
+                uid=resource.parentCollection().ownerHome().uid()
+            )
             count += 1
         yield txn.commit()
         returnValue(count)
@@ -679,18 +680,18 @@
         if self.dryrun:
             eventCount = len(purge)
             if eventCount == 0:
-                log.info("No events are older than %s" % (self.cutoff,))
+                log.info("No events are older than {cutoff}", cutoff=self.cutoff)
             elif eventCount == 1:
-                log.info("1 event is older than %s" % (self.cutoff,))
+                log.info("1 event is older than {cutoff}", cutoff=self.cutoff)
             else:
-                log.info("%d events are older than %s" % (eventCount, self.cutoff))
+                log.info("{count} events are older than {cutoff}", count=eventCount, cutoff=self.cutoff)
             returnValue(eventCount)
 
         purge = list(purge)
         purge.sort()
         totalEvents = len(purge)
 
-        log.info("Removing {} events older than {}...".format(len(purge), self.cutoff,))
+        log.info("Removing {len} events older than {cutoff}...", len=len(purge), cutoff=self.cutoff)
 
         numEventsRemoved = -1
         totalRemoved = 0
@@ -698,7 +699,7 @@
             numEventsRemoved = (yield self.purgeResources(purge[:self.batchSize]))
             if numEventsRemoved:
                 totalRemoved += numEventsRemoved
-                log.debug("  Removed {} of {} events...".format(totalRemoved, totalEvents))
+                log.debug("  Removed {removed} of {total} events...", removed=totalRemoved, total=totalEvents)
                 purge = purge[numEventsRemoved:]
 
         if totalRemoved == 0:
@@ -706,7 +707,7 @@
         elif totalRemoved == 1:
             log.info("1 event was removed in total")
         else:
-            log.info("%d events were removed in total" % (totalRemoved,))
+            log.info("{total} events were removed in total", total=totalRemoved)
 
         returnValue(totalRemoved)
 

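The purge.py conversions above all follow one pattern: the eager `"...{}".format(...)` (or `"%s" % ...`) call becomes a format string plus keyword fields, so rendering is deferred until an observer actually needs the text. A stdlib-only sketch of that deferral (an illustrative stand-in for the logger's behavior, not its real API):

```python
# Sketch of deferred formatting: the format string and its fields travel
# together in the event; rendering happens only at observation time.
def debug(format_string, **fields):
    event = {"log_format": format_string}
    event.update(fields)
    return event  # a real logger would hand this dict to its observers

def formatEvent(event):
    # PEP 3101 str.format ignores unused keyword arguments, so the
    # "log_format" key itself does not disturb rendering.
    return event["log_format"].format(**event)

event = debug("Removing {len} events older than {cutoff}...",
              len=42, cutoff="2014-01-01")
print(formatEvent(event))  # Removing 42 events older than 2014-01-01...
```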
Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/shell/cmd.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/shell/cmd.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/shell/cmd.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -472,7 +472,7 @@
         if not isinstance(wd, Folder):
             raise NotFoundError("Not a folder: %s" % (wd,))
 
-        # log.info("wd -> %s" % (wd,))
+        # log.info("wd -> {wd}", wd=wd)
         self.wd = wd
 
 

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/shell/terminal.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/shell/terminal.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/shell/terminal.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -310,7 +310,7 @@
             except Exception, e:
                 self.handleFailure(Failure(e))
                 return
-            log.info("COMPLETIONS: %r" % (completions,))
+            log.info("COMPLETIONS: {comp}", comp=completions)
         else:
             # Completing command name
             completions = tuple(self.commands.complete_commands(cmd))

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/shell/vfs.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/shell/vfs.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/shell/vfs.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -691,7 +691,7 @@
                 self.mainComponent = mainComponent
 
             except InvalidICalendarDataError, e:
-                log.error("%s: %s" % (self.path, e))
+                log.error("{path}: {ex}", path=self.path, ex=e)
 
                 self.componentType = "?"
                 self.summary = "** Invalid data **"

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/upgrade.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/upgrade.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/tools/upgrade.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -26,10 +26,11 @@
 import time
 
 from twisted.internet.defer import succeed
+from twisted.logger import LogLevel, formatEvent
 from twisted.python.text import wordWrap
 from twisted.python.usage import Options, UsageError
 
-from twext.python.log import Logger, LogLevel, formatEvent, addObserver
+from twext.python.log import Logger
 
 from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
 
@@ -220,8 +221,8 @@
         output.flush()
 
     if not options["status"]:
-        log.publisher.levels.setLogLevelForNamespace(None, LogLevel.debug)
-        addObserver(onlyUpgradeEvents)
+        log.levels().setLogLevelForNamespace(None, LogLevel.debug)
+        log.observer.addObserver(onlyUpgradeEvents)
 
 
     def customServiceMaker():

Modified: CalendarServer/branches/users/cdaboo/newlog/calendarserver/webcal/resource.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/calendarserver/webcal/resource.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/calendarserver/webcal/resource.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -169,7 +169,7 @@
         tzid = queryValue("tzid")
         if not tzid:
             tzid = getLocalTimezone()
-            self.log.debug("Determined timezone to be %s" % (tzid,))
+            self.log.debug("Determined timezone to be {tzid}", tzid=tzid)
 
         #
         # Make some HTML

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/accounting.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/accounting.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/accounting.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -127,7 +127,7 @@
                         logFilename = path
                         break
                     if index == 1000:
-                        log.error("Too many %s accounting files for %s" % (category, record))
+                        log.error("Too many {cat} accounting files for {rec}", cat=category, rec=record)
                         return None
                     index += 1
 
@@ -141,5 +141,5 @@
 
     except OSError, e:
         # No failures in accounting should propagate out
-        log.error("Failed to write accounting data due to: %s" % (str(e),))
+        log.error("Failed to write accounting data due to: {ex}", ex=str(e))
         return None

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/authkerb.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/authkerb.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/authkerb.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -78,7 +78,7 @@
             try:
                 principal = kerberos.getServerPrincipalDetails(serviceType, hostname)
             except kerberos.KrbError, ex:
-                self.log.error("getServerPrincipalDetails: %s" % (ex[0],))
+                self.log.error("getServerPrincipalDetails: {ex}", ex=ex[0])
                 raise ValueError('Authentication System Failure: %s' % (ex[0],))
 
         self.service, self.realm = self._splitPrincipal(principal)
@@ -93,7 +93,7 @@
             service = splits[0].upper()
             realm = splits[1]
         except IndexError:
-            self.log.error("Invalid Kerberos principal: %s" % (principal,))
+            self.log.error("Invalid Kerberos principal: {principal}", principal=principal)
             raise ValueError('Authentication System Failure: Invalid Kerberos principal: %s' % (principal,))
 
         service = "%s@%s" % (servicetype, service,)
@@ -188,7 +188,7 @@
             try:
                 kerberos.checkPassword(creds.username, creds.password, creds.service, creds.default_realm)
             except kerberos.BasicAuthError, ex:
-                self.log.error("%s" % (ex[0],))
+                self.log.error("{ex}", ex=ex[0])
                 raise error.UnauthorizedLogin("Bad credentials for: %s (%s: %s)" % (pcreds.authnURI, ex[0], ex[1],))
             else:
                 return succeed((
@@ -247,18 +247,18 @@
         try:
             _ignore_result, context = kerberos.authGSSServerInit("")
         except kerberos.GSSError, ex:
-            self.log.error("authGSSServerInit: %s(%s)" % (ex[0][0], ex[1][0],))
+            self.log.error("authGSSServerInit: {ex0}({ex1})", ex0=ex[0][0], ex1=ex[1][0])
             raise error.LoginFailed('Authentication System Failure: %s(%s)' % (ex[0][0], ex[1][0],))
 
         # Do the GSSAPI step and get response and username
         try:
             kerberos.authGSSServerStep(context, base64data)
         except kerberos.GSSError, ex:
-            self.log.error("authGSSServerStep: %s(%s)" % (ex[0][0], ex[1][0],))
+            self.log.error("authGSSServerStep: {ex0}({ex1})", ex0=ex[0][0], ex1=ex[1][0])
             kerberos.authGSSServerClean(context)
             raise error.UnauthorizedLogin('Bad credentials: %s(%s)' % (ex[0][0], ex[1][0],))
         except kerberos.KrbError, ex:
-            self.log.error("authGSSServerStep: %s" % (ex[0],))
+            self.log.error("authGSSServerStep: {ex}", ex=ex[0])
             kerberos.authGSSServerClean(context)
             raise error.UnauthorizedLogin('Bad credentials: %s' % (ex[0],))
 
@@ -266,11 +266,11 @@
         try:
             service, _ignore_realm = self._splitPrincipal(targetname)
         except ValueError:
-            self.log.error("authGSSServerTargetName invalid target name: '%s'" % (targetname,))
+            self.log.error("authGSSServerTargetName invalid target name: '{target}'", target=targetname)
             kerberos.authGSSServerClean(context)
             raise error.UnauthorizedLogin('Bad credentials: bad target name %s' % (targetname,))
         if service.lower() != self.service.lower():
-            self.log.error("authGSSServerTargetName mismatch got: '%s' wanted: '%s'" % (service, self.service))
+            self.log.error("authGSSServerTargetName mismatch got: '{got}' wanted: '{wanted}'", got=service, wanted=self.service)
             kerberos.authGSSServerClean(context)
             raise error.UnauthorizedLogin('Bad credentials: wrong target name %s' % (targetname,))
 
@@ -294,7 +294,7 @@
         try:
             kerberos.authGSSServerClean(context)
         except kerberos.GSSError, ex:
-            self.log.error("authGSSServerClean: %s" % (ex[0][0], ex[1][0],))
+            self.log.error("authGSSServerClean: {ex0}({ex1})", ex0=ex[0][0], ex1=ex[1][0])
             raise error.LoginFailed('Authentication System Failure %s(%s)' % (ex[0][0], ex[1][0],))
 
         # If we successfully decoded and verified the Kerberos credentials we need to add the Kerberos

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/caldavxml.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/caldavxml.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/caldavxml.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -122,16 +122,16 @@
         """
 
         if self.start is not None and self.start.isDateOnly():
-            log.info("start attribute in <time-range> is not a date-time: %s" % (self.start,))
+            log.info("start attribute in <time-range> is not a date-time: {start}", start=self.start)
             return False
         if self.end is not None and self.end.isDateOnly():
-            log.info("end attribute in <time-range> is not a date-time: %s" % (self.end,))
+            log.info("end attribute in <time-range> is not a date-time: {end}", end=self.end)
             return False
         if self.start is not None and not self.start.utc():
-            log.info("start attribute in <time-range> is not UTC: %s" % (self.start,))
+            log.info("start attribute in <time-range> is not UTC: {start}", start=self.start)
             return False
         if self.end is not None and not self.end.utc():
-            log.info("end attribute in <time-range> is not UTC: %s" % (self.end,))
+            log.info("end attribute in <time-range> is not UTC: {end}", end=self.end)
             return False
 
         # No other tests

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/client/geturl.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/client/geturl.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/client/geturl.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -113,6 +113,6 @@
         yield whenFinished
         response.data = protocol.data
     else:
-        log.error("Failed getURL: %s" % (url,))
+        log.error("Failed getURL: {url}", url=url)
 
     returnValue(response)

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/client/pool.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/client/pool.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/client/pool.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -177,8 +177,10 @@
         """
         self._pendingConnects += 1
 
-        self.log.debug("Initating new client connection to: %r" % (
-            self._endpoint,))
+        self.log.debug(
+            "Initiating new client connection to: {endpoint!r}",
+            endpoint=self._endpoint,
+        )
         self._logClientStats()
 
         factory = self.clientFactory(self._reactor)
@@ -272,12 +274,12 @@
                 response = (yield self._submitRequest(request, args, kwargs))
 
             except (ConnectionLost, ConnectionDone, ConnectError), e:
-                self.log.error("HTTP pooled client connection error (attempt: %d) - retrying: %s" % (ctr + 1, e,))
+                self.log.error("HTTP pooled client connection error (attempt: {ctr}) - retrying: {ex}", ctr=ctr + 1, ex=e)
                 continue
 
             # TODO: find the proper cause of these assertions and fix
             except (AssertionError,), e:
-                self.log.error("HTTP pooled client connection assertion error (attempt: %d) - retrying: %s" % (ctr + 1, e,))
+                self.log.error("HTTP pooled client connection assertion error (attempt: {ctr}) - retrying: {ex}", ctr=ctr + 1, ex=e)
                 continue
 
             else:
@@ -307,7 +309,7 @@
         elif len(self._busyClients) + self._pendingConnects >= self._maxClients:
             d = Deferred()
             self._pendingRequests.append((d, request, args, kwargs))
-            self.log.debug("Request queued: %s, %r, %r" % (request, args, kwargs))
+            self.log.debug("Request queued: {req}, {args!r}, {kwargs!r}", req=request, args=args, kwargs=kwargs)
             self._logClientStats()
 
         else:
@@ -319,12 +321,11 @@
 
     def _logClientStats(self):
         self.log.debug(
-            "Clients #free: %d, #busy: %d, #pending: %d, #queued: %d" % (
-                len(self._freeClients),
-                len(self._busyClients),
-                self._pendingConnects,
-                len(self._pendingRequests)
-            )
+            "Clients #free: {free}, #busy: {busy}, #pending: {pending}, #queued: {queued}",
+            free=len(self._freeClients),
+            busy=len(self._busyClients),
+            pending=self._pendingConnects,
+            queued=len(self._pendingRequests)
         )
 
 
@@ -340,7 +341,7 @@
         elif client in self._freeClients:
             self._freeClients.remove(client)
 
-        self.log.debug("Removed client: %r" % (client,))
+        self.log.debug("Removed client: {client!r}", client=client)
         self._logClientStats()
 
         self._processPending()
@@ -358,7 +359,7 @@
 
         self._busyClients.add(client)
 
-        self.log.debug("Busied client: %r" % (client,))
+        self.log.debug("Busied client: {client!r}", client=client)
         self._logClientStats()
 
 
@@ -376,7 +377,7 @@
         if self.shutdown_deferred and self._isIdle():
             self.shutdown_deferred.callback(None)
 
-        self.log.debug("Freed client: %r" % (client,))
+        self.log.debug("Freed client: {client!r}", client=client)
         self._logClientStats()
 
         self._processPending()
@@ -386,8 +387,10 @@
         if len(self._pendingRequests) > 0:
             d, request, args, kwargs = self._pendingRequests.pop(0)
 
-            self.log.debug("Performing Queued Request: %s, %r, %r" % (
-                request, args, kwargs))
+            self.log.debug(
+                "Performing Queued Request: {req}, {args!r}, {kwargs!r}",
+                req=request, args=args, kwargs=kwargs
+            )
             self._logClientStats()
 
             _ign_d = self._submitRequest(request, *args, **kwargs)

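Several pool.py messages above use `{client!r}` and `{endpoint!r}`: the `!r` conversion in the new format strings takes over the role of the old `%r` specifier, applying `repr()` to the field at render time. A quick illustration:

```python
# The str.format "!r" conversion produces the same text as the old
# "%r" specifier: both render the field via repr().
old = "Removed client: %r" % ("conn-1",)
new = "Removed client: {client!r}".format(client="conn-1")
print(old == new)  # True
print(new)  # Removed client: 'conn-1'
```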
Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/client/reverseproxy.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/client/reverseproxy.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/client/reverseproxy.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -69,7 +69,7 @@
         @return: Deferred L{Response}
         """
 
-        self.log.info("%s %s %s" % (request.method, request.uri, "HTTP/%s.%s" % request.clientproto))
+        self.log.info("{method} {uri} {proto}", method=request.method, uri=request.uri, proto="HTTP/%s.%s" % request.clientproto)
 
         # Check for multi-hop
         if not self.allowMultiHop:

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/database.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/database.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/database.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -137,16 +137,20 @@
                     if (version != self._db_version()) or (dbtype != self._db_type()):
 
                         if dbtype != self._db_type():
-                            log.error("Database %s has different type (%s vs. %s)"
-                                      % (self.dbID, dbtype, self._db_type()))
+                            log.error(
+                                "Database {db} has different type ({t1} vs. {t2})",
+                                db=self.dbID, t1=dbtype, t2=self._db_type()
+                            )
 
                             # Delete this index and start over
                             yield self._db_remove()
                             yield self._db_init()
 
                         elif version != self._db_version():
-                            log.error("Database %s has different schema (v.%s vs. v.%s)"
-                                      % (self.dbID, version, self._db_version()))
+                            log.error(
+                                "Database {db} has different schema (v.{v1} vs. v.{v2})",
+                                db=self.dbID, v1=version, v2=self._db_version()
+                            )
 
                             # Upgrade the DB
                             yield self._db_upgrade(version)
@@ -167,7 +171,7 @@
             try:
                 self.pool.close()
             except Exception, e:
-                log.error("Error whilst closing connection pool: %s" % (e,))
+                log.error("Error whilst closing connection pool: {ex}", ex=e)
             self.pool = None
             self.initialized = False
 
@@ -183,7 +187,7 @@
             try:
                 yield self._db_empty_data_tables()
             except Exception, e:
-                log.error("Error in database clean: %s" % (e,))
+                log.error("Error in database clean: {ex}", ex=e)
                 self.close()
             else:
                 break
@@ -200,7 +204,7 @@
             try:
                 yield self._db_execute(sql, *query_params)
             except Exception, e:
-                log.error("Error in database execute: %s" % (e,))
+                log.error("Error in database execute: {ex}", ex=e)
                 self.close()
             else:
                 break
@@ -217,7 +221,7 @@
             try:
                 yield self._db_execute_script(script)
             except Exception, e:
-                log.error("Error in database executescript: %s" % (e,))
+                log.error("Error in database executescript: {ex}", ex=e)
                 self.close()
             else:
                 break
@@ -234,7 +238,7 @@
             try:
                 result = (yield self._db_all_values_for_sql(sql, *query_params))
             except Exception, e:
-                log.error("Error in database query: %s" % (e,))
+                log.error("Error in database query: {ex}", ex=e)
                 self.close()
             else:
                 break
@@ -253,7 +257,7 @@
             try:
                 result = (yield self._db_values_for_sql(sql, *query_params))
             except Exception, e:
-                log.error("Error in database queryList: %s" % (e,))
+                log.error("Error in database queryList: {ex}", ex=e)
                 self.close()
             else:
                 break
@@ -272,7 +276,7 @@
             try:
                 result = (yield self._db_value_for_sql(sql, *query_params))
             except Exception, e:
-                log.error("Error in database queryOne: %s" % (e,))
+                log.error("Error in database queryOne: {ex}", ex=e)
                 self.close()
             else:
                 break
@@ -303,7 +307,7 @@
         """
         Initialise the underlying database tables.
         """
-        log.info("Initializing database %s" % (self.dbID,))
+        log.info("Initializing database {db}", db=self.dbID)
 
         # TODO we need an exclusive lock of some kind here to prevent a race condition
         # in which multiple processes try to create the tables.

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/augment.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/augment.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/augment.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -363,13 +363,13 @@
                 try:
                     uid = pwd.getpwnam(config.UserName).pw_uid
                 except KeyError:
-                    log.error("User not found: %s" % (config.UserName,))
+                    log.error("User not found: {user}", user=config.UserName)
             gid = -1
             if config.GroupName:
                 try:
                     gid = grp.getgrnam(config.GroupName).gr_gid
                 except KeyError:
-                    log.error("Group not found: %s" % (config.GroupName,))
+                    log.error("Group not found: {grp}", grp=config.GroupName)
             if uid != -1 and gid != -1:
                 os.chown(xmlfile, uid, gid)
 

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/calendar.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/calendar.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/calendar.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -305,7 +305,7 @@
         def gotACL(wikiACL):
             if wikiACL is not None:
                 # ACL depends on wiki server...
-                log.debug("Wiki ACL: %s" % (wikiACL.toxml(),))
+                log.debug("Wiki ACL: {acl}", acl=wikiACL.toxml())
                 return succeed(wikiACL)
             else:
                 # ...otherwise permissions are fixed, and are not subject to

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/common.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/common.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/common.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -69,12 +69,14 @@
         name = record.uid
 
         if record is None:
-            log.debug("No directory record with UID %r" % (name,))
+            log.debug("No directory record with UID {name}", name=name)
             returnValue(None)
 
         if not getattr(record, self.enabledAttribute, False):
-            log.debug("Directory record %r is not enabled for %s" % (
-                record, self.homeResourceTypeName))
+            log.debug(
+                "Directory record {rec!r} is not enabled for {type}",
+                rec=record, type=self.homeResourceTypeName
+            )
             returnValue(None)
 
         assert len(name) > 4, "Directory record has an invalid GUID: %r" % (

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/principal.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/principal.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/principal.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -90,7 +90,7 @@
 
         if wikiACL is not None:
             # ACL depends on wiki server...
-            log.debug("Wiki ACL: %s" % (wikiACL.toxml(),))
+            log.debug("Wiki ACL: {acl}", acl=wikiACL.toxml())
             returnValue(wikiACL)
         else:
             # ...otherwise permissions are fixed, and are not subject to
@@ -398,7 +398,7 @@
             if record is not None and record.hasCalendars:
                 returnValue((yield self.principalForRecord(record)))
 
-        log.debug("No principal for calendar user address: %r" % (address,))
+        log.debug("No principal for calendar user address: {addr}", addr=address)
         returnValue(None)
 
 
@@ -413,7 +413,7 @@
     ##
 
     def createSimilarFile(self, path):
-        log.error("Attempt to create clone %r of resource %r" % (path, self))
+        log.error("Attempt to create clone {path} of resource {rsrc!r}", path=path, rsrc=self)
         raise HTTPError(responsecode.NOT_FOUND)
 
 
@@ -477,7 +477,7 @@
 
 
     def createSimilarFile(self, path):
-        log.error("Attempt to create clone %r of resource %r" % (path, self))
+        log.error("Attempt to create clone {path} of resource {rsrc!r}", path=path, rsrc=self)
         raise HTTPError(responsecode.NOT_FOUND)
 
 
@@ -499,8 +499,10 @@
                     for shortName in getattr(record, "shortNames", []):
                         children.append(shortName)
             except AttributeError:
-                log.warn("Cannot list children of record type {rt}",
-                         rt=self.recordType.name)
+                log.warn(
+                    "Cannot list children of record type {rt}",
+                    rt=self.recordType.name
+                )
             returnValue(children)
 
         else:
@@ -562,7 +564,7 @@
 
 
     def createSimilarFile(self, path):
-        log.error("Attempt to create clone %r of resource %r" % (path, self))
+        log.error("Attempt to create clone {path} of resource {rsrc!r}", path=path, rsrc=self)
         raise HTTPError(responsecode.NOT_FOUND)
 
 
@@ -581,7 +583,7 @@
         record = (yield self.directory.recordWithUID(primaryUID))
         primaryPrincipal = (yield self.principalForRecord(record))
         if primaryPrincipal is None:
-            log.info("No principal found for UID: %s" % (name,))
+            log.info("No principal found for UID: {name}", name=name)
             returnValue(None)
 
         if subType is None:
@@ -1035,7 +1037,7 @@
                 if relative not in records:
                     found = (yield self.parent.principalForRecord(relative))
                     if found is None:
-                        log.error("No principal found for directory record: %r" % (relative,))
+                        log.error("No principal found for directory record: {rec!r}", rec=relative)
                     else:
                         if proxy:
                             if proxy == "read-write":
@@ -1194,7 +1196,7 @@
 
 
     def createSimilarFile(self, path):
-        log.error("Attempt to create clone %r of resource %r" % (path, self))
+        log.error("Attempt to create clone {path} of resource {rsrc!r}", path=path, rsrc=self)
         raise HTTPError(responsecode.NOT_FOUND)
 
 

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/util.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directory/util.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -227,7 +227,7 @@
                 yield item
             yield "\n"
     except Exception, e:
-        log.error("Exception while rendering: %s" % (e,))
+        log.error("Exception while rendering: {ex}", ex=e)
         Failure().printTraceback()
         yield "  ** %s **: %s\n" % (e.__class__.__name__, e)
     if not thereAreAny:

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directorybackedaddressbook.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directorybackedaddressbook.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/directorybackedaddressbook.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -80,7 +80,8 @@
         if self.directory is None:
             log.info(
                 "Setting search directory to {principalDirectory}",
-                principalDirectory=self.principalDirectory)
+                principalDirectory=self.principalDirectory
+            )
             self.directory = self.principalDirectory
             # future: instantiate another directory based on /Search/Contacts (?)
 
@@ -162,8 +163,10 @@
         Get vCards for a given addressBookFilter and addressBookQuery
         """
 
-        log.debug("doAddressBookDirectoryQuery: directory={directory} addressBookFilter={addressBookFilter}, addressBookQuery={addressBookQuery}, maxResults={maxResults}",
-                  directory=self.directory, addressBookFilter=addressBookFilter, addressBookQuery=addressBookQuery, maxResults=maxResults)
+        log.debug(
+            "doAddressBookDirectoryQuery: directory={directory} addressBookFilter={addressBookFilter}, addressBookQuery={addressBookQuery}, maxResults={maxResults}",
+            directory=self.directory, addressBookFilter=addressBookFilter, addressBookQuery=addressBookQuery, maxResults=maxResults
+        )
         results = []
         limited = False
         maxQueryRecords = 0
@@ -232,8 +235,10 @@
                 allRecords = set()
                 if fields:
                     records = yield self.directory.recordsMatchingFields(fields, operand, queryRecordType)
-                    log.debug("doAddressBookDirectoryQuery: recordsMatchingFields({f}, {o}): #records={n}, records={records!r}",
-                              f=fields, o=operand, n=len(records), records=records)
+                    log.debug(
+                        "doAddressBookDirectoryQuery: recordsMatchingFields({f}, {o}): #records={n}, records={records!r}",
+                        f=fields, o=operand, n=len(records), records=records
+                    )
                     allRecords = set(records)
                 else:
                     recordTypes = set([queryRecordType]) if queryRecordType else set(self.directory.recordTypes()) & set(recordTypeToVCardKindMap.keys())

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/extensions.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/extensions.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/extensions.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -130,7 +130,7 @@
         # Only handle Depth: 0
         depth = request.headers.getHeader("depth", "0")
         if depth != "0":
-            log.error("Error in principal-property-search REPORT, Depth set to %s" % (depth,))
+            log.error("Error in principal-property-search REPORT, Depth set to {depth}", depth=depth)
             raise HTTPError(StatusResponse(responsecode.BAD_REQUEST, "Depth %s not allowed" % (depth,)))
 
         # Get any limit value from xml
@@ -319,7 +319,7 @@
         # Only handle Depth: 0
         depth = request.headers.getHeader("depth", "0")
         if depth != "0":
-            log.error("Error in calendarserver-principal-search REPORT, Depth set to %s" % (depth,))
+            log.error("Error in calendarserver-principal-search REPORT, Depth set to {depth}", depth=depth)
             raise HTTPError(StatusResponse(responsecode.BAD_REQUEST, "Depth %s not allowed" % (depth,)))
 
         tokens, context, applyTo, clientLimit, propElement = extractCalendarServerPrincipalSearchData(calendarserver_principal_search)
@@ -484,9 +484,11 @@
                 f.trap(HTTPError)
                 code = f.value.response.code
                 if code == responsecode.NOT_FOUND:
-                    log.error("Property {p} was returned by listProperties() "
-                              "but does not exist for resource {r}.",
-                              p=name, r=self.resource)
+                    log.error(
+                        "Property {name} was returned by listProperties() "
+                        "but does not exist for resource {rsrc!r}.",
+                        name=name, rsrc=self.resource
+                    )
                     return (name, None)
                 if code == responsecode.UNAUTHORIZED:
                     return (name, accessDeniedValue)
@@ -954,7 +956,7 @@
 
 
     def get(self, qname, uid=None):
-        # self.log.debug("Get: %r, %r" % (self.resource.fp.path, qname))
+        # self.log.debug("Get: {p}, {n}", p=self.resource.fp.path, n=qname)
 
         cache = self._cache()
 
@@ -963,7 +965,7 @@
         if cachedQname in cache:
             property = cache.get(cachedQname, None)
             if property is None:
-                self.log.debug("Cache miss: %r, %r, %r" % (self, self.resource.fp.path, qname))
+                self.log.debug("Cache miss: {s!r}, {p}, {n}", s=self, p=self.resource.fp.path, n=qname)
                 try:
                     property = self.propertyStore.get(qname, uid)
                 except HTTPError:
@@ -977,7 +979,7 @@
 
 
     def set(self, property, uid=None):
-        # self.log.debug("Set: %r, %r" % (self.resource.fp.path, property))
+        # self.log.debug("Set: {p}, {prop!r}", p=self.resource.fp.path, prop=property)
 
         cache = self._cache()
 
@@ -989,7 +991,7 @@
 
 
     def contains(self, qname, uid=None):
-        # self.log.debug("Contains: %r, %r" % (self.resource.fp.path, qname))
+        # self.log.debug("Contains: {p}, {n}", p=self.resource.fp.path, n=qname)
 
         cachedQname = qname + (uid,)
 
@@ -1002,14 +1004,14 @@
                 raise
 
         if cachedQname in cache:
-            # self.log.debug("Contains cache hit: %r, %r, %r" % (self, self.resource.fp.path, qname))
+            # self.log.debug("Contains cache hit: {s!r}, {p}, {n}", s=self, p=self.resource.fp.path, n=qname)
             return True
         else:
             return False
 
 
     def delete(self, qname, uid=None):
-        # self.log.debug("Delete: %r, %r" % (self.resource.fp.path, qname))
+        # self.log.debug("Delete: {p}, {n}", p=self.resource.fp.path, n=qname)
 
         cachedQname = qname + (uid,)
 
@@ -1020,7 +1022,7 @@
 
 
     def list(self, uid=None, filterByUID=True):
-        # self.log.debug("List: %r" % (self.resource.fp.path,))
+        # self.log.debug("List: {p}", p=self.resource.fp.path)
         keys = self._cache().iterkeys()
         if filterByUID:
             return [
@@ -1034,7 +1036,7 @@
 
     def _cache(self):
         if not hasattr(self, "_data"):
-            # self.log.debug("Cache init: %r" % (self.resource.fp.path,))
+            # self.log.debug("Cache init: {p}", p=self.resource.fp.path)
             self._data = dict(
                 (name, None)
                 for name in self.propertyStore.list(filterByUID=False)

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/freebusyurl.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/freebusyurl.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/freebusyurl.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -260,7 +260,7 @@
         try:
             inbox = (yield request.locateResource(inboxURL))
         except:
-            log.error("No schedule inbox for principal: %s" % (principal,))
+            log.error("No schedule inbox for principal: {p}", p=principal)
             inbox = None
         if inbox is None:
             raise HTTPError(StatusResponse(responsecode.INTERNAL_SERVER_ERROR, "No schedule inbox for principal: %s" % (principal,)))

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/ical.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/ical.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/ical.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -1892,10 +1892,10 @@
             cannot be fixed.
         """
         if self.name() != "VCALENDAR":
-            log.debug("Not a calendar: {0}".format(self,))
+            log.debug("Not a calendar: {s!r}", s=self)
             raise InvalidICalendarDataError("Not a calendar")
         if not self.resourceType():
-            log.debug("Unknown resource type: {0}".format(self,))
+            log.debug("Unknown resource type: {s!r}", s=self)
             raise InvalidICalendarDataError("Unknown resource type")
 
         # Do underlying iCalendar library validation with data fix
@@ -1908,11 +1908,11 @@
             unfixed.extend(runfixed)
 
         if unfixed:
-            log.debug("Calendar data had unfixable problems:\n  {0}".format("\n  ".join(unfixed),))
+            log.debug("Calendar data had unfixable problems:\n  {d}", d="\n  ".join(unfixed))
             if doRaise:
                 raise InvalidICalendarDataError("Calendar data had unfixable problems:\n  {0}".format("\n  ".join(unfixed),))
         if fixed:
-            log.debug("Calendar data had fixable problems:\n  {0}".format("\n  ".join(fixed),))
+            log.debug("Calendar data had fixable problems:\n  {d}", d="\n  ".join(fixed))
 
         return fixed, unfixed
 
@@ -2099,7 +2099,7 @@
                 # If it does we will have to reinstate this check but only after we have checked for implicit.
 # UNCOMMENT OUT master_recurring AND got_override ASSIGNMENTS ABOVE
 #                if got_override and got_master and not master_recurring:
-#                    msg = "Calendar resources must have a recurring master component if there is an overridden one (%s)" % (subcomponent.propertyValue("UID"),)
+#                    msg = "Calendar resources must have a recurring master component if there is an overridden one ({uid})".format(uid=subcomponent.propertyValue("UID"),)
 #                    log.debug(msg)
 #                    raise InvalidICalendarDataError(msg)
 
@@ -2131,7 +2131,7 @@
         for timezone in timezones:
             if timezone not in timezone_refs:
                 log.debug(
-                    "Timezone {0} is not referenced by any non-timezone component".format(timezone,)
+                    "Timezone {tz} is not referenced by any non-timezone component", tz=timezone
                 )
 
         # TZIDs without a VTIMEZONE must be available in the server's TZ database

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/localization.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/localization.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/localization.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -383,7 +383,7 @@
     # Do we have an Apple translations directory?
     if lprojRoot and gnuRoot and os.path.exists(lprojRoot):
 
-        log.info("Looking for Apple .lproj directories in %s" % (lprojRoot,))
+        log.info("Looking for Apple .lproj directories in {r}", r=lprojRoot)
 
         # Make sure the gnutext translations directory exists
         if not os.path.exists(gnuRoot):
@@ -391,8 +391,9 @@
                 os.mkdir(gnuRoot)
             except OSError:
                 log.warn(
-                    "Could not create gnuttext translation directory: %s"
-                    % (gnuRoot,))
+                    "Could not create gettext translation directory: {r}",
+                    r=gnuRoot,
+                )
                 return
 
         # Scan for Apple translations (directories ending in .lproj)
@@ -413,16 +414,16 @@
                         os.stat(stringsFile).st_mtime >
                         os.stat(moFile).st_mtime
                     ):
-                        log.info("Converting %s to %s" % (stringsFile, moFile))
+                        log.info("Converting {s} to {m}", s=stringsFile, m=moFile)
                         try:
                             convertStringsFile(stringsFile, moFile)
                         except Exception, e:
                             log.error(
-                                "Failed to convert %s to %s: %s" %
-                                (stringsFile, moFile, e)
+                                "Failed to convert {s} to {m}: {ex}",
+                                s=stringsFile, m=moFile, ex=e,
                             )
                     else:
-                        log.info("%s is up to date" % (moFile,))
+                        log.info("{m} is up to date", m=moFile)
 
 
 

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcacheclient.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcacheclient.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcacheclient.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -803,7 +803,7 @@
                 log.debug("Memcacheclient check-and-set failed")
                 raise TokenMismatchError(key)
 
-            log.error("Memcacheclient %s command failed with result (%s)" % (cmd, result))
+            log.error("Memcacheclient {cmd} command failed with result ({r})", cmd=cmd, r=result)
 
             return False
 
@@ -1276,7 +1276,7 @@
 
 
     def mark_dead(self, reason):
-        log.error("Memcacheclient socket marked dead (%s)" % (reason,))
+        log.error("Memcacheclient socket marked dead ({r})", r=reason)
         self.debuglog("MemCache: %s: %s.  Marking dead." % (self, reason))
         self.deaduntil = time.time() + _Host._DEAD_RETRY
         self.close_socket()
@@ -1294,13 +1294,13 @@
         try:
             s.connect(self.address)
         except socket.timeout, msg:
-            log.error("Memcacheclient _get_socket() connection timed out (%s)" % (msg,))
+            log.error("Memcacheclient _get_socket() connection timed out ({m})", m=msg)
             self.mark_dead("connect: %s" % msg)
             return None
         except socket.error, msg:
             if isinstance(msg, tuple):
                 msg = msg[1]
-            log.error("Memcacheclient _get_socket() connection error (%s)" % (msg,))
+            log.error("Memcacheclient _get_socket() connection error ({m})", m=msg)
             self.mark_dead("connect: %s" % msg)
             return None
         self.socket = s

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcachelock.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcachelock.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcachelock.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -68,19 +68,19 @@
             if result:
                 self._hasLock = True
                 if waiting:
-                    self.log.debug("Got lock after waiting on %s" % (self._locktoken,))
+                    self.log.debug("Got lock after waiting on {t}", t=self._locktoken)
                 break
 
             if self._timeout and time.time() < timeout_at:
                 waiting = True
-                self.log.debug("Waiting for lock on %s" % (self._locktoken,))
+                self.log.debug("Waiting for lock on {t}", t=self._locktoken)
                 pause = Deferred()
                 def _timedDeferred():
                     pause.callback(True)
                 reactor.callLater(self._retry_interval, _timedDeferred)
                 yield pause
             else:
-                self.log.debug("Timed out lock after waiting on %s" % (self._locktoken,))
+                self.log.debug("Timed out lock after waiting on {t}", t=self._locktoken)
                 raise MemcacheLockTimeoutError()
 
         returnValue(True)

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcachepool.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcachepool.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcachepool.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -78,7 +78,7 @@
             # The reactor is stopping; don't reconnect
             return
 
-        self.log.error("MemCache connection lost: %s" % (reason,))
+        self.log.error("MemCache connection lost: {r}", r=reason)
         if self._protocolInstance is not None:
             self.connectionPool.clientBusy(self._protocolInstance)
 
@@ -92,7 +92,7 @@
         """
         Notify the connectionPool that we're unable to connect
         """
-        self.log.error("MemCache connection failed: %s" % (reason,))
+        self.log.error("MemCache connection failed: {r}", r=reason)
         if self._protocolInstance is not None:
             self.connectionPool.clientBusy(self._protocolInstance)
 
@@ -196,7 +196,7 @@
         @return: A L{Deferred} that fires with the L{IProtocol} instance.
         """
         self.log.debug(
-            "Initiating new client connection to: %r" % (self._endpoint,)
+            "Initiating new client connection to: {r!r}", r=self._endpoint
         )
         self._logClientStats()
 
@@ -245,10 +245,10 @@
             message and free the client.
             """
             self.log.error(
-                "Memcache error: %s; request: %s %s" % (
-                    failure.value, command,
-                    " ".join(args)[:self.REQUEST_LOGGING_SIZE],
-                )
+                "Memcache error: {ex}; request: {cmd} {args}",
+                ex=failure.value,
+                cmd=command,
+                args=" ".join(args)[:self.REQUEST_LOGGING_SIZE],
             )
             self.clientFree(client)
 
@@ -290,7 +290,7 @@
             d = Deferred()
             self._commands.append((d, command, args, kwargs))
             self.log.debug(
-                "Command queued: %s, %r, %r" % (command, args, kwargs)
+                "Command queued: {c}, {a!r}, {k!r}", c=command, a=args, k=kwargs
             )
             self._logClientStats()
 
@@ -304,12 +304,11 @@
 
     def _logClientStats(self):
         self.log.debug(
-            "Clients #free: %d, #busy: %d, #pending: %d, #queued: %d" % (
-                len(self._freeClients),
-                len(self._busyClients),
-                self._pendingConnects,
-                len(self._commands),
-            )
+            "Clients #free: {f}, #busy: {b}, #pending: {p}, #queued: {q}",
+            f=len(self._freeClients),
+            b=len(self._busyClients),
+            p=self._pendingConnects,
+            q=len(self._commands),
         )
 
 
@@ -325,7 +324,7 @@
         elif client in self._freeClients:
             self._freeClients.remove(client)
 
-        self.log.debug("Removed client: %r" % (client,))
+        self.log.debug("Removed client: {c!r}", c=client)
         self._logClientStats()
 
 
@@ -341,7 +340,7 @@
 
         self._busyClients.add(client)
 
-        self.log.debug("Busied client: %r" % (client,))
+        self.log.debug("Busied client: {c!r}", c=client)
         self._logClientStats()
 
 
@@ -363,8 +362,8 @@
             d, command, args, kwargs = self._commands.pop(0)
 
             self.log.debug(
-                "Performing Queued Command: %s, %r, %r"
-                % (command, args, kwargs)
+                "Performing Queued Command: {c}, {a}, {k}",
+                c=command, a=args, k=kwargs,
             )
             self._logClientStats()
 
@@ -373,7 +372,7 @@
 
             _ign_d.addCallback(d.callback)
 
-        self.log.debug("Freed client: %r" % (client,))
+        self.log.debug("Freed client: {c!r}", c=client)
         self._logClientStats()
 
 

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcacheprops.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcacheprops.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcacheprops.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -104,7 +104,7 @@
         try:
             childCache, token = propertyCache[key]
         except KeyError:
-            self.log.debug("No child property cache for %s" % (child,))
+            self.log.debug("No child property cache for {c!r}", c=child)
             childCache, token = ({}, None)
 
             # message = "No child property cache for %s" % (child,)
@@ -132,7 +132,7 @@
             else:
                 return {}
 
-        self.log.debug("Loading cache for %s" % (self.collection,))
+        self.log.debug("Loading cache for {c}", c=self.collection)
 
         client = self.memcacheClient()
         assert client is not None, "OMG no cache!"
@@ -220,7 +220,7 @@
 
 
     def _storeCache(self, cache):
-        self.log.debug("Storing cache for %s" % (self.collection,))
+        self.log.debug("Storing cache for {c}", c=self.collection)
 
         values = dict((
             (self._keyForPath(path), props)
@@ -242,7 +242,7 @@
         elif not childNames:
             return {}
 
-        self.log.debug("Building cache for %s" % (self.collection,))
+        self.log.debug("Building cache for {c}", c=self.collection)
 
         cache = {}
 
@@ -372,29 +372,32 @@
                         "No such property: %s%s" % (uid if uid else "", encodeXMLName(*qname))
                     ))
 
-            self.log.debug("Read for %s%s on %s" % (
-                ("{%s}:" % (uid,)) if uid else "",
-                qname,
-                self.childPropertyStore.resource.fp.path
-            ))
+            self.log.debug(
+                "Read for {u}{q} on {p}",
+                u=("{%s}:" % (uid,)) if uid else "",
+                q=qname,
+                p=self.childPropertyStore.resource.fp.path
+            )
             return self.childPropertyStore.get(qname, uid=uid)
 
         def set(self, property, uid=None):
-            self.log.debug("Write for %s%s on %s" % (
-                ("{%s}:" % (uid,)) if uid else "",
-                property.qname(),
-                self.childPropertyStore.resource.fp.path
-            ))
+            self.log.debug(
+                "Write for {u}{q} on {p}",
+                u=("{%s}:" % (uid,)) if uid else "",
+                q=property.qname(),
+                p=self.childPropertyStore.resource.fp.path
+            )
 
             self.parentPropertyCollection.setProperty(self.child, property, uid)
             self.childPropertyStore.set(property, uid=uid)
 
         def delete(self, qname, uid=None):
-            self.log.debug("Delete for %s%s on %s" % (
-                ("{%s}:" % (uid,)) if uid else "",
-                qname,
-                self.childPropertyStore.resource.fp.path
-            ))
+            self.log.debug(
+                "Delete for {u}{q} on {p}",
+                u=("{%s}:" % (uid,)) if uid else "",
+                q=qname,
+                p=self.childPropertyStore.resource.fp.path,
+            )
 
             self.parentPropertyCollection.deleteProperty(self.child, qname, uid)
             self.childPropertyStore.delete(qname, uid=uid)
@@ -405,11 +408,12 @@
                 qnameuid = qname + (uid,)
                 return qnameuid in propertyCache
 
-            self.log.debug("Contains for %s%s on %s" % (
-                ("{%s}:" % (uid,)) if uid else "",
-                qname,
-                self.childPropertyStore.resource.fp.path,
-            ))
+            self.log.debug(
+                "Contains for {u}{q} on {p}",
+                u=("{%s}:" % (uid,)) if uid else "",
+                q=qname,
+                p=self.childPropertyStore.resource.fp.path,
+            )
             return self.childPropertyStore.contains(qname, uid=uid)
 
         def list(self, uid=None, filterByUID=True, cache=True):
@@ -425,6 +429,7 @@
                 else:
                     return results
 
-            self.log.debug("List for %s"
-                           % (self.childPropertyStore.resource.fp.path,))
+            self.log.debug(
+                "List for {p}", p=self.childPropertyStore.resource.fp.path
+            )
             return self.childPropertyStore.list(uid=uid, filterByUID=filterByUID)

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcacher.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcacher.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/memcacher.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -288,7 +288,7 @@
         my_value = value
         if self._pickle:
             my_value = cPickle.dumps(value)
-        self.log.debug("Adding Cache Token for %r" % (key,))
+        self.log.debug("Adding Cache Token for {k!r}", k=key)
         return proto.add('%s:%s' % (self._namespace, self._normalizeKey(key)), my_value, expireTime=expireTime)
 
 
@@ -299,7 +299,7 @@
         my_value = value
         if self._pickle:
             my_value = cPickle.dumps(value)
-        self.log.debug("Setting Cache Token for %r" % (key,))
+        self.log.debug("Setting Cache Token for {k!r}", k=key)
         return proto.set('%s:%s' % (self._namespace, self._normalizeKey(key)), my_value, expireTime=expireTime)
 
 
@@ -310,7 +310,7 @@
         my_value = value
         if self._pickle:
             my_value = cPickle.dumps(value)
-        self.log.debug("Setting Cache Token for %r" % (key,))
+        self.log.debug("Setting Cache Token for {k!r}", k=key)
         return proto.checkAndSet('%s:%s' % (self._namespace, self._normalizeKey(key)), my_value, cas, expireTime=expireTime)
 
 
@@ -326,24 +326,24 @@
                 value = (identifier, value)
             return value
 
-        self.log.debug("Getting Cache Token for %r" % (key,))
+        self.log.debug("Getting Cache Token for {k!r}", k=key)
         d = self._getMemcacheProtocol().get('%s:%s' % (self._namespace, self._normalizeKey(key)), withIdentifier=withIdentifier)
         d.addCallback(_gotit, withIdentifier)
         return d
 
 
     def delete(self, key):
-        self.log.debug("Deleting Cache Token for %r" % (key,))
+        self.log.debug("Deleting Cache Token for {k!r}", k=key)
         return self._getMemcacheProtocol().delete('%s:%s' % (self._namespace, self._normalizeKey(key)))
 
 
     def incr(self, key, delta=1):
-        self.log.debug("Incrementing Cache Token for %r" % (key,))
+        self.log.debug("Incrementing Cache Token for {k!r}", k=key)
         return self._getMemcacheProtocol().incr('%s:%s' % (self._namespace, self._normalizeKey(key)), delta)
 
 
     def decr(self, key, delta=1):
-        self.log.debug("Decrementing Cache Token for %r" % (key,))
+        self.log.debug("Decrementing Cache Token for {k!r}", k=key)
         return self._getMemcacheProtocol().decr('%s:%s' % (self._namespace, self._normalizeKey(key)), delta)
 
 

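The memcacher hunks above all make the same change: eager `%`-interpolation is replaced by a format string plus keyword arguments, which the new logging system renders lazily using PEP 3101 `str.format` semantics (so conversions like `{k!r}` apply `repr()`). A minimal stdlib sketch of that deferred-formatting pattern — `DeferredLogger` is a hypothetical stand-in, not Twisted's actual `Logger` class:

```python
# Sketch of the deferred-formatting pattern behind the new logging calls.
# The format string and keyword arguments are stored in the event and only
# rendered (via str.format, so conversions like {k!r} work) when an
# observer actually emits the message.

class DeferredLogger(object):
    def __init__(self):
        self.events = []  # captured events, formatted on demand

    def debug(self, fmt, **kwargs):
        # No string interpolation happens here -- the cost is deferred.
        self.events.append((fmt, kwargs))

    @staticmethod
    def render(event):
        fmt, kwargs = event
        return fmt.format(**kwargs)


log = DeferredLogger()
key = "user:123"
log.debug("Adding Cache Token for {k!r}", k=key)

# Formatting only happens when an observer renders the event:
print(DeferredLogger.render(log.events[0]))
# -> Adding Cache Token for 'user:123'
```

The `{k!r}` conversion reproduces the old `%r` behaviour, which is why keys that were logged with `%r` in the removed lines use `!r` in the added ones.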
Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/mkcalendar.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/mkcalendar.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/mkcalendar.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -51,7 +51,7 @@
     yield parent.authorize(request, (davxml.Bind(),))
 
     if self.exists():
-        log.error("Attempt to create collection where resource exists: %s" % (self,))
+        log.error("Attempt to create collection where resource exists: {s!r}", s=self)
         raise HTTPError(ErrorResponse(
             responsecode.FORBIDDEN,
             (davxml.dav_namespace, "resource-must-be-null"),
@@ -59,8 +59,7 @@
         ))
 
     if not parent.isCollection():
-        log.error("Attempt to create collection with non-collection parent: %s"
-                  % (self,))
+        log.error("Attempt to create collection with non-collection parent: {s!r}", s=self)
         raise HTTPError(ErrorResponse(
             responsecode.CONFLICT,
             (caldavxml.caldav_namespace, "calendar-collection-location-ok"),
@@ -74,7 +73,7 @@
         doc = (yield davXMLFromStream(request.stream))
         yield self.createCalendar(request)
     except ValueError, e:
-        log.error("Error while handling MKCALENDAR: %s" % (e,))
+        log.error("Error while handling MKCALENDAR: {ex}", ex=e)
         raise HTTPError(StatusResponse(responsecode.BAD_REQUEST, str(e)))
 
     set_supported_component_set = False

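The MKCALENDAR hunk logs the caught exception as an event field (`{ex}`, `ex=e`) instead of interpolating `"%s" % (e,)`. With `str.format`, a plain `{ex}` renders `str(e)`, so the emitted message matches the old output. A small self-contained sketch (the `error` helper here is a hypothetical stand-in for the logger method):

```python
# Logging a caught exception as an event field: "{ex}" renders str(e),
# matching the behaviour of the removed "%s" % (e,) interpolation.

emitted = []

def error(fmt, **kwargs):
    # Stand-in for log.error: render immediately for demonstration.
    emitted.append(fmt.format(**kwargs))

try:
    raise ValueError("bad XML body")
except ValueError as e:
    error("Error while handling MKCALENDAR: {ex}", ex=e)

print(emitted[0])  # -> Error while handling MKCALENDAR: bad XML body
```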
Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/mkcol.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/mkcol.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/mkcol.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -54,8 +54,7 @@
     yield parent.authorize(request, (davxml.Bind(),))
 
     if self.exists():
-        log.error("Attempt to create collection where resource exists: %s"
-                  % (self,))
+        log.error("Attempt to create collection where resource exists: {s!r}", s=self)
         raise HTTPError(ErrorResponse(
             responsecode.FORBIDDEN,
             (davxml.dav_namespace, "resource-must-be-null"),
@@ -63,8 +62,7 @@
         ))
 
     if not parent.isCollection():
-        log.error("Attempt to create collection with non-collection parent: %s"
-                  % (self,))
+        log.error("Attempt to create collection with non-collection parent: {s!r}", s=self)
         raise HTTPError(ErrorResponse(
             responsecode.CONFLICT,
             (davxml.dav_namespace, "collection-location-ok"),
@@ -99,7 +97,7 @@
     try:
         doc = (yield davXMLFromStream(request.stream))
     except ValueError, e:
-        log.error("Error while handling MKCOL: %s" % (e,))
+        log.error("Error while handling MKCOL: {ex}", ex=e)
         # TODO: txweb2.dav 'MKCOL' tests demand this particular response
         # code, but should we really be looking at the XML content or the
         # content-type header?  It seems to me like this ought to be considered

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/post.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/post.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/post.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -37,7 +37,7 @@
 
     # POST can only target an existing resource
     if not self.exists():
-        log.error("Resource not found: %s" % (self,))
+        log.error("Resource not found: {s!r}", s=self)
         raise HTTPError(responsecode.NOT_FOUND)
 
     # POST can support many different APIs

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/propfind.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/propfind.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/propfind.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -57,7 +57,7 @@
         parent = (yield request.locateResource(parentURL))
         yield parent.authorize(request, (davxml.Bind(),))
 
-        log.error("Resource not found: %s" % (self,))
+        log.error("Resource not found: {s!r}", s=self)
         raise HTTPError(responsecode.NOT_FOUND)
 
     #
@@ -71,7 +71,7 @@
     try:
         doc = (yield davXMLFromStream(request.stream))
     except ValueError, e:
-        log.error("Error while handling PROPFIND body: %s" % (e,))
+        log.error("Error while handling PROPFIND body: {ex}", ex=e)
         raise HTTPError(StatusResponse(responsecode.BAD_REQUEST, str(e)))
 
     if doc is None:
@@ -159,7 +159,7 @@
                 try:
                     resource_properties = (yield resource.listProperties(request))
                 except:
-                    log.error("Unable to get properties for resource %r" % (resource,))
+                    log.error("Unable to get properties for resource {r!r}", r=resource)
                     raise
 
                 properties_by_status = {

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -50,7 +50,7 @@
     Respond to a REPORT request. (RFC 3253, section 3.6)
     """
     if not self.exists():
-        log.error("Resource not found: %s" % (self,))
+        log.error("Resource not found: {s!r}", s=self)
         raise HTTPError(responsecode.NOT_FOUND)
 
     #
@@ -117,8 +117,10 @@
         #
         # Requested report is not supported.
         #
-        log.error("Unsupported REPORT {name} for resource {resource} (no method {method})",
-                  name=encodeXMLName(namespace, name), resource=self, method=method_name)
+        log.error(
+            "Unsupported REPORT {name} for resource {resource} (no method {method})",
+            name=encodeXMLName(namespace, name), resource=self, method=method_name,
+        )
 
         raise HTTPError(ErrorResponse(
             responsecode.FORBIDDEN,

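The report.py hunk reflows a multi-argument call, but the payoff of this whole migration is visible here: with `%`-interpolation, every argument's `repr()` is computed even when the message is ultimately discarded by a level filter, whereas an event carrying raw kwargs can be dropped before any formatting work happens. A sketch with hypothetical names (not Twisted's API):

```python
# Why deferral matters: an event dropped by a level filter never pays
# the cost of formatting its arguments.

emitted = []

class Expensive(object):
    """Stand-in for a resource whose repr() is costly to compute."""
    calls = 0

    def __repr__(self):
        Expensive.calls += 1  # count how many times repr() actually runs
        return "<Expensive>"

def error(fmt, **kwargs):
    enabled = False  # pretend the "error" level is filtered out here
    if enabled:
        emitted.append(fmt.format(**kwargs))

error("Unsupported REPORT {name} for resource {resource!r}",
      name="sync-collection", resource=Expensive())

print(Expensive.calls)  # -> 0 : repr() never ran, the event was dropped
```

With the old `"... %r" % (resource,)` style, `repr()` would have run before `log.error` was even called.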
Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_addressbook_query.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_addressbook_query.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_addressbook_query.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -91,7 +91,7 @@
 
     # Verify that the filter element is valid
     if (filter is None) or not filter.valid():
-        log.error("Invalid filter element: %r" % (filter,))
+        log.error("Invalid filter element: {f!r}", f=filter)
         raise HTTPError(ErrorResponse(
             responsecode.FORBIDDEN,
             (carddav_namespace, "valid-filter"),

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_calendar_query.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_calendar_query.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_calendar_query.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -61,7 +61,7 @@
     if not self.isCollection():
         parent = (yield self.locateParent(request, request.uri))
         if not parent.isPseudoCalendarCollection():
-            log.error("calendar-query report is not allowed on a resource outside of a calendar collection %s" % (self,))
+            log.error("calendar-query report is not allowed on a resource outside of a calendar collection {s!r}", s=self)
             raise HTTPError(StatusResponse(responsecode.FORBIDDEN, "Must be calendar collection or calendar resource"))
 
     responses = []
@@ -125,7 +125,7 @@
 
     # Verify that the filter element is valid
     if (filter is None) or not filter.valid():
-        log.error("Invalid filter element: %r" % (xmlfilter,))
+        log.error("Invalid filter element: {f!r}", f=xmlfilter)
         raise HTTPError(ErrorResponse(
             responsecode.FORBIDDEN,
             (caldav_namespace, "valid-filter"),
@@ -179,7 +179,7 @@
                     # of one of these resources in another request.  In this
                     # case, we ignore the now missing resource rather
                     # than raise an error for the entire report.
-                    log.error("Missing resource during query: %s" % (href,))
+                    log.error("Missing resource during query: {h}", h=href)
 
         # Check whether supplied resource is a calendar or a calendar object resource
         if calresource.isPseudoCalendarCollection():

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_freebusy.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_freebusy.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_freebusy.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -52,7 +52,7 @@
     (CalDAV-access-09, section 7.8)
     """
     if not self.isCollection():
-        log.error("freebusy report is only allowed on collection resources %s" % (self,))
+        log.error("freebusy report is only allowed on collection resources {s!r}", s=self)
         raise HTTPError(StatusResponse(responsecode.FORBIDDEN, "Not a calendar collection"))
 
     if freebusy.qname() != (caldavxml.caldav_namespace, "free-busy-query"):

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_sync_collection.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_sync_collection.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/method/report_sync_collection.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -50,7 +50,7 @@
 
     # These resources support the report
     if not config.EnableSyncReport or element.Report(element.SyncCollection(),) not in self.supportedReports():
-        log.error("sync-collection report is only allowed on calendar/inbox/addressbook/notification collection resources %s" % (self,))
+        log.error("sync-collection report is only allowed on calendar/inbox/addressbook/notification collection resources {s!r}", s=self)
         raise HTTPError(ErrorResponse(
             responsecode.FORBIDDEN,
             element.SupportedReport(),
@@ -79,7 +79,7 @@
         descriptor = "Depth header without DAV:sync-level"
 
     if depth not in ("1", "infinity"):
-        log.error("sync-collection report with invalid depth header: %s" % (depth,))
+        log.error("sync-collection report with invalid depth header: {d}", d=depth)
         raise HTTPError(StatusResponse(responsecode.BAD_REQUEST, "Invalid %s value" % (descriptor,)))
 
     propertyreq = sync_collection.property.children if sync_collection.property else None
@@ -128,7 +128,7 @@
             # of one of these resources in another request.  In this
             # case, we ignore the now missing resource rather
             # than raise an error for the entire report.
-            log.error("Missing resource during sync: %s" % (href,))
+            log.error("Missing resource during sync: {h}", h=href)
 
     for child, child_uri in forbidden_resources:
         href = element.HRef.fromString(child_uri)
@@ -149,7 +149,7 @@
             # of one of these resources in another request.  In this
             # case, we ignore the now missing resource rather
             # than raise an error for the entire report.
-            log.error("Missing resource during sync: %s" % (href,))
+            log.error("Missing resource during sync: {h}", h=href)
 
     for name in removed:
         href = element.HRef.fromString(joinURL(request.uri, name))

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/resource.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/resource.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/resource.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -1395,12 +1395,12 @@
             created in this resource.
         """
         if self.exists():
-            self.log.error("Attempt to create collection where file exists: %s" % (self,))
+            self.log.error("Attempt to create collection where file exists: {s!r}", s=self)
             raise HTTPError(StatusResponse(responsecode.NOT_ALLOWED, "File exists"))
 
         # newStore guarantees that we always have a parent calendar home
         # if not self.fp.parent().isdir():
-        #     log.error("Attempt to create collection with no parent: %s" % (self.fp.path,))
+        #     log.error("Attempt to create collection with no parent: {p}", p=self.fp.path)
         #     raise HTTPError(StatusResponse(responsecode.CONFLICT, "No parent collection"))
 
         #
@@ -1410,7 +1410,7 @@
         parent = (yield self._checkParents(request, isPseudoCalendarCollectionResource))
 
         if parent is not None:
-            self.log.error("Cannot create a calendar collection within a calendar collection %s" % (parent,))
+            self.log.error("Cannot create a calendar collection within a calendar collection {p!r}", p=parent)
             raise HTTPError(ErrorResponse(
                 responsecode.FORBIDDEN,
                 (caldavxml.caldav_namespace, "calendar-collection-location-ok"),
@@ -1421,7 +1421,7 @@
         if config.MaxCollectionsPerHome:
             parent = (yield self.locateParent(request, request.urlForResource(self)))
             if (yield parent.countOwnedChildren()) >= config.MaxCollectionsPerHome: # NB this ignores shares
-                self.log.error("Cannot create a calendar collection because there are too many already present in %s" % (parent,))
+                self.log.error("Cannot create a calendar collection because there are too many already present in {p!r}", p=parent)
                 raise HTTPError(ErrorResponse(
                     responsecode.FORBIDDEN,
                     customxml.MaxCollections(),
@@ -1484,12 +1484,12 @@
         #
 
         if self.exists():
-            self.log.error("Attempt to create collection where file exists: %s" % (self,))
+            self.log.error("Attempt to create collection where file exists: {s!r}", s=self)
             raise HTTPError(StatusResponse(responsecode.NOT_ALLOWED, "File exists"))
 
         # newStore guarantees that we always have a parent calendar home
         # if not os.path.isdir(os.path.dirname(self.fp.path)):
-        #     log.error("Attempt to create collection with no parent: %s" % (self.fp.path,))
+        #     log.error("Attempt to create collection with no parent: {p}", p=self.fp.path)
         #     raise HTTPError(StatusResponse(responsecode.CONFLICT, "No parent collection"))
 
         #
@@ -1498,7 +1498,7 @@
 
         parent = (yield self._checkParents(request, isAddressBookCollectionResource))
         if parent is not None:
-            self.log.error("Cannot create an address book collection within an address book collection %s" % (parent,))
+            self.log.error("Cannot create an address book collection within an address book collection {p!r}", p=parent)
             raise HTTPError(ErrorResponse(
                 responsecode.FORBIDDEN,
                 (carddavxml.carddav_namespace, "addressbook-collection-location-ok"),
@@ -1509,7 +1509,7 @@
         if config.MaxCollectionsPerHome:
             parent = (yield self.locateParent(request, request.urlForResource(self)))
             if (yield parent.countOwnedChildren()) >= config.MaxCollectionsPerHome: # NB this ignores shares
-                self.log.error("Cannot create a calendar collection because there are too many already present in %s" % (parent,))
+                self.log.error("Cannot create a calendar collection because there are too many already present in {p!r}", p=parent)
                 raise HTTPError(ErrorResponse(
                     responsecode.FORBIDDEN,
                     customxml.MaxCollections(),

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/scheduling_store/caldav/resource.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/scheduling_store/caldav/resource.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/scheduling_store/caldav/resource.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -453,7 +453,7 @@
         contentType = request.headers.getHeader("content-type")
         format = self.determineType(contentType)
         if format is None:
-            self.log.error("MIME type %s not allowed in calendar collection" % (contentType,))
+            self.log.error("MIME type {t} not allowed in calendar collection", t=contentType)
             raise HTTPError(ErrorResponse(
                 responsecode.FORBIDDEN,
                 (caldav_namespace, "supported-calendar-data"),
@@ -465,7 +465,7 @@
             calendar = (yield Component.fromIStream(request.stream, format=format))
         except:
             # FIXME: Bare except
-            self.log.error("Error while handling POST: %s" % (Failure(),))
+            self.log.error("Error while handling POST: {f}", f=Failure())
             raise HTTPError(ErrorResponse(
                 responsecode.FORBIDDEN,
                 (caldav_namespace, "valid-calendar-data"),
@@ -486,7 +486,7 @@
             originator = originatorPrincipal.canonicalCalendarUserAddress()
 
         if not originator:
-            self.log.error("%s request must have Originator" % (self.method,))
+            self.log.error("{m} request must have Originator", m=self.method)
             raise HTTPError(ErrorResponse(
                 responsecode.FORBIDDEN,
                 (caldav_namespace, "originator-specified"),

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/sharing.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/sharing.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/sharing.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -775,7 +775,7 @@
         try:
             doc = element.WebDAVDocument.fromString(xmldata)
         except ValueError, e:
-            self.log.error("Error parsing doc (%s) Doc:\n %s" % (str(e), xmldata,))
+            self.log.error("Error parsing doc ({ex}) Doc:\n {x}", ex=str(e), x=xmldata)
             raise HTTPError(ErrorResponse(
                 responsecode.FORBIDDEN,
                 (customxml.calendarserver_namespace, "valid-request"),
@@ -787,7 +787,7 @@
             result = (yield self.xmlDocHandlers[type(root)](self, request, root))
             returnValue(result)
         else:
-            self.log.error("Unsupported XML (%s)" % (root,))
+            self.log.error("Unsupported XML ({r})", r=root)
             raise HTTPError(ErrorResponse(
                 responsecode.FORBIDDEN,
                 (customxml.calendarserver_namespace, "valid-request"),
@@ -813,7 +813,7 @@
                 if contentType in self._postHandlers:
                     return self._postHandlers[contentType](self, request)
                 else:
-                    self.log.info("Got a POST on collection or group with an unsupported content type: %s" % (contentType,))
+                    self.log.info("Got a POST on collection or group with an unsupported content type: {t}", t=contentType)
             else:
                 self.log.info("Got a POST on collection or group with no content type")
         return succeed(responsecode.FORBIDDEN)

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/sql.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/sql.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -124,16 +124,20 @@
                         del(self._db_connection)
 
                         if dbtype != self._db_type():
-                            log.error("Database %s has different type (%s vs. %s)"
-                                      % (db_filename, dbtype, self._db_type()))
+                            log.error(
+                                "Database {f} has different type ({t1} vs. {t2})",
+                                f=db_filename, t1=dbtype, t2=self._db_type(),
+                            )
 
                             # Delete this index and start over
                             os.remove(db_filename)
                             return self._db()
 
                         if version != self._db_version():
-                            log.error("Database %s has different schema (v.%s vs. v.%s)"
-                                      % (db_filename, version, self._db_version()))
+                            log.error(
+                                "Database {f} has different schema (v.{v1} vs. v.{v2})",
+                                f=db_filename, v1=version, v2=self._db_version(),
+                            )
 
                             # Upgrade the DB
                             return self._db_upgrade(version)
@@ -186,7 +190,7 @@
         @param db_filename: the file name of the index database.
         @param q:           a database cursor to use.
         """
-        log.info("Initializing database %s" % (db_filename,))
+        log.info("Initializing database {f}", f=db_filename)
 
         # We need an exclusive lock here as we are making a big change to the database and we don't
         # want other processes to get stomped on or stomp on us.
@@ -363,7 +367,7 @@
             self.lastrowid = q.lastrowid
             return q.fetchall()
         except DatabaseError:
-            log.error("Exception while executing SQL on DB %s: %r %r" % (self, sql, query_params))
+            log.error("Exception while executing SQL on DB {s!r}: {q!r} {p!r}", s=self, q=sql, p=query_params)
             raise
         finally:
             q.close()

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/stdconfig.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/stdconfig.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/stdconfig.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -28,8 +28,9 @@
 from twext.enterprise.jobs.jobitem import JobItem
 from twext.enterprise.jobs.queue import ControllerQueue
 from twext.enterprise.jobs.workitem import WorkItem
-from twext.python.log import Logger, InvalidLogLevelError, LogLevel
+from twext.python.log import Logger
 
+from twisted.logger import InvalidLogLevelError, LogLevel
 from twisted.python.filepath import FilePath
 from twisted.python.runtime import platform
 
@@ -1213,7 +1214,7 @@
             with open(filename) as f:
                 configDict = parser.parse(f)
         except (IOError, OSError):
-            log.error("Configuration file does not exist or is inaccessible: %s" % (filename,))
+            log.error("Configuration file does not exist or is inaccessible: {f}", f=filename)
             raise ConfigurationError("Configuration file does not exist or is inaccessible: %s" % (filename,))
         else:
             configDict = _cleanup(configDict, self._defaults)
@@ -1416,7 +1417,7 @@
 
     for param in items.get("DirectoryService", {}).get("params", {}):
         if dsType in DEFAULT_SERVICE_PARAMS and param not in DEFAULT_SERVICE_PARAMS[dsType]:
-            log.warn("Parameter %s is not supported by service %s" % (param, dsType))
+            log.warn("Parameter {p} is not supported by service {t}", p=param, t=dsType)
 
 
 
@@ -1446,7 +1447,7 @@
 
     for param in items.get("ResourceService", {}).get("params", {}):
         if dsType in DEFAULT_RESOURCE_PARAMS and param not in DEFAULT_RESOURCE_PARAMS[dsType]:
-            log.warn("Parameter %s is not supported by service %s" % (param, dsType))
+            log.warn("Parameter {p} is not supported by service {t}", p=param, t=dsType)
 
 
 
@@ -1502,7 +1503,7 @@
     if configDict.AugmentService.type in DEFAULT_AUGMENT_PARAMS:
         for param in tuple(configDict.AugmentService.params):
             if param not in DEFAULT_AUGMENT_PARAMS[configDict.AugmentService.type]:
-                log.warn("Parameter %s is not supported by service %s" % (param, configDict.AugmentService.type))
+                log.warn("Parameter {p} is not supported by service {t}", p=param, t=configDict.AugmentService.type)
                 del configDict.AugmentService.params[param]
 
     # Upgrading augments.xml must be done prior to using the store/directory
@@ -1567,7 +1568,7 @@
         * configDict.AdminACEs
     )
 
-    log.debug("Root ACL: %s" % (configDict.RootResourceACL.toxml(),))
+    log.debug("Root ACL: {x}", x=configDict.RootResourceACL.toxml())
 
     configDict.ProvisioningResourceACL = davxml.ACL(
         # Read-only for anon or authenticated, depending on config
@@ -1588,7 +1589,7 @@
         ]
     )
 
-    log.debug("Nav ACL: %s" % (configDict.ProvisioningResourceACL.toxml(),))
+    log.debug("Nav ACL: {x}", x=configDict.ProvisioningResourceACL.toxml())
 
     def principalObjects(urls):
         for principalURL in urls:
@@ -1628,19 +1629,19 @@
 
 
 def _updateLogLevels(configDict, reloading=False):
-    log.publisher.levels.clearLogLevels()
+    log.levels().clearLogLevels()
 
     try:
         if "DefaultLogLevel" in configDict:
             levelName = configDict["DefaultLogLevel"]
             if levelName:
-                log.publisher.levels.setLogLevelForNamespace(
+                log.levels().setLogLevelForNamespace(
                     None, LogLevel.levelWithName(levelName)
                 )
 
         if "LogLevels" in configDict:
             for namespace, levelName in configDict["LogLevels"].iteritems():
-                log.publisher.levels.setLogLevelForNamespace(
+                log.levels().setLogLevelForNamespace(
                     namespace, LogLevel.levelWithName(levelName)
                 )
 
@@ -1682,7 +1683,7 @@
                             topic = getAPNTopicFromCertificate(certPath)
                             service[protocol]["Topic"] = topic
                         else:
-                            log.error("APNS certificate not found: %s" % (certPath,))
+                            log.error("APNS certificate not found: {p}", p=certPath)
                     else:
                         log.error("APNS certificate path not specified")
 
@@ -1695,13 +1696,13 @@
                 try:
                     passphrase = getPasswordFromKeychain(accountName)
                     service[protocol]["Passphrase"] = passphrase
-                    log.info("%s APNS certificate passphrase retreived from keychain" % (protocol,))
+                    log.info("{p} APNS certificate passphrase retrieved from keychain", p=protocol)
                 except KeychainAccessError:
                     # The system doesn't support keychain
                     pass
                 except KeychainPasswordNotFound:
                     # The password doesn't exist in the keychain.
-                    log.info("%s APNS certificate passphrase not found in keychain" % (protocol,))
+                    log.info("{p} APNS certificate passphrase not found in keychain", p=protocol)
 
 
 
@@ -1738,13 +1739,13 @@
                     if not service[direction]["Password"]:
                         password = getPasswordFromKeychain(account)
                         service[direction]["Password"] = password
-                        log.info("iMIP %s password successfully retrieved from keychain" % (direction,))
+                        log.info("iMIP {d} password successfully retrieved from keychain", d=direction)
                 except KeychainAccessError:
                     # The system doesn't support keychain
                     pass
                 except KeychainPasswordNotFound:
                     # The password doesn't exist in the keychain.
-                    log.info("iMIP %s password not found in keychain" % (direction,))
+                    log.info("iMIP {d} password not found in keychain", d=direction)
 
 
 
@@ -1801,12 +1802,12 @@
         if config_key in os.environ and os.environ[config_key] == config_key_value:
             pass
         else:
-            log.error("Ignoring unknown configuration option: %r" % (key,))
+            log.error("Ignoring unknown configuration option: {k}", k=key)
             del cleanDict[key]
 
 
     def deprecated(oldKey, newKey):
-        log.error("Configuration option %r is deprecated in favor of %r." % (oldKey, newKey))
+        log.error("Configuration option {o} is deprecated in favor of {n}.", o=oldKey, n=newKey)
         if oldKey in configDict and newKey in configDict:
             raise ConfigurationError(
                 "Both %r and %r options are specified; use the %r option only."

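The `_updateLogLevels()` hunk in stdconfig.py switches from `log.publisher.levels` to the `log.levels()` accessor. Underneath, log levels are resolved per dotted namespace, falling back to the nearest configured ancestor (and to a global default registered under `None`). A stdlib sketch of that lookup — an approximation for illustration, not Twisted's actual implementation:

```python
# Approximate sketch of namespace-based log level resolution: a level set
# for "twistedcaldav.method" applies to "twistedcaldav.method.report"
# unless a more specific level is configured.

LEVELS = ("debug", "info", "warn", "error")

class LogLevelRegistry(object):
    def __init__(self):
        self._levels = {}

    def clearLogLevels(self):
        self._levels.clear()

    def setLogLevelForNamespace(self, namespace, level):
        if level not in LEVELS:
            raise ValueError("invalid log level: %r" % (level,))
        self._levels[namespace] = level  # None acts as the global default

    def logLevelForNamespace(self, namespace):
        # Walk up the dotted namespace until a configured level is found.
        while namespace:
            if namespace in self._levels:
                return self._levels[namespace]
            namespace = namespace.rpartition(".")[0]
        return self._levels.get(None, "info")


levels = LogLevelRegistry()
levels.setLogLevelForNamespace(None, "warn")
levels.setLogLevelForNamespace("twistedcaldav.method", "debug")

print(levels.logLevelForNamespace("twistedcaldav.method.report"))  # -> debug
print(levels.logLevelForNamespace("calendarserver.push"))          # -> warn
```

This mirrors the config flow above: `DefaultLogLevel` maps to the `None` namespace, and each `LogLevels` entry registers a per-namespace override; an unknown level name raises, which is what the `InvalidLogLevelError` handling (now imported from `twisted.logger`) guards against.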
Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/storebridge.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/storebridge.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/storebridge.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -436,7 +436,7 @@
         """
 
         if not self.exists():
-            log.debug("Resource not found: %s" % (self,))
+            log.debug("Resource not found: {s!r}", s=self)
             raise HTTPError(NOT_FOUND)
 
         depth = request.headers.getHeader("depth", "infinity")
@@ -473,13 +473,13 @@
 
         # Check sharee collection first
         if self.isShareeResource():
-            log.debug("Removing shared collection %s" % (self,))
+            log.debug("Removing shared collection {s!r}", s=self)
             yield self.removeShareeResource(request)
             # Re-initialize to get stuff setup again now we have no object
             self._initializeWithHomeChild(None, self._parentResource)
             returnValue(NO_CONTENT)
 
-        log.debug("Deleting collection %s" % (self,))
+        log.debug("Deleting collection {s!r}", s=self)
 
         # 'deluri' is this resource's URI; I should be able to synthesize it
         # from 'self'.
@@ -536,7 +536,7 @@
         that collection's name.
         """
         if not self.exists():
-            log.debug("Resource not found: %s" % (self,))
+            log.debug("Resource not found: {s!r}", s=self)
             raise HTTPError(NOT_FOUND)
 
         # Can not move outside of home or to existing collection
@@ -2094,7 +2094,7 @@
             raise HTTPError(FORBIDDEN)
 
         except Exception, e:
-            log.error("Unable to store attachment: %s" % (e,))
+            log.error("Unable to store attachment: {ex}", ex=e)
             raise HTTPError(SERVICE_UNAVAILABLE)
 
         try:
@@ -2116,7 +2116,7 @@
     def http_GET(self, request):
 
         if not self.exists():
-            log.debug("Resource not found: %s" % (self,))
+            log.debug("Resource not found: {s!r}", s=self)
             raise HTTPError(NOT_FOUND)
 
         stream = ProducerStream()
@@ -2130,7 +2130,7 @@
         try:
             self._newStoreAttachment.retrieve(StreamProtocol())
         except IOError, e:
-            log.error("Unable to read attachment: %s, due to: %s" % (self, e,))
+            log.error("Unable to read attachment: {s!r}, due to: {ex}", s=self, ex=e)
             raise HTTPError(NOT_FOUND)
 
         headers = {"content-type": self.contentType()}
@@ -2146,7 +2146,7 @@
             raise HTTPError(FORBIDDEN)
 
         if not self.exists():
-            log.debug("Resource not found: %s" % (self,))
+            log.debug("Resource not found: {s!r}", s=self)
             raise HTTPError(NOT_FOUND)
 
         yield self._newStoreCalendarObject.removeAttachmentWithName(
@@ -2265,7 +2265,7 @@
     @inlineCallbacks
     def render(self, request):
         if not self.exists():
-            log.debug("Resource not found: %s" % (self,))
+            log.debug("Resource not found: {s!r}", s=self)
             raise HTTPError(NOT_FOUND)
 
         # Accept header handling
@@ -2368,7 +2368,7 @@
         Override http_DELETE to validate 'depth' header.
         """
         if not self.exists():
-            log.debug("Resource not found: %s" % (self,))
+            log.debug("Resource not found: {s!r}", s=self)
             raise HTTPError(NOT_FOUND)
 
         return self.storeRemove(request)
@@ -2391,7 +2391,7 @@
         # Do some pre-flight checks - must exist, must be move to another
         # CommonHomeChild in the same Home, destination resource must not exist
         if not self.exists():
-            log.debug("Resource not found: %s" % (self,))
+            log.debug("Resource not found: {s!r}", s=self)
             raise HTTPError(NOT_FOUND)
 
         parent = (yield request.locateResource(parentForURL(request.uri)))
@@ -2666,8 +2666,9 @@
             # Do "precondition" test
             if (self.scheduleTag != header):
                 log.debug(
-                    "If-Schedule-Tag-Match: header value '%s' does not match resource value '%s'" %
-                    (header, self.scheduleTag,))
+                    "If-Schedule-Tag-Match: header value '{h}' does not match resource value '{r}'",
+                    h=header, r=self.scheduleTag
+                )
                 raise HTTPError(PRECONDITION_FAILED)
             return True
 
@@ -2907,7 +2908,7 @@
         except Exception as err:
 
             if isinstance(err, ValueError):
-                log.error("Error while handling (calendar) PUT: %s" % (err,))
+                log.error("Error while handling (calendar) PUT: {ex}", ex=err)
                 raise HTTPError(StatusResponse(responsecode.BAD_REQUEST, str(err)))
             else:
                 raise
@@ -2919,7 +2920,7 @@
         Override http_DELETE to do schedule tag behavior.
         """
         if not self.exists():
-            log.debug("Resource not found: %s" % (self,))
+            log.debug("Resource not found: {s!r}", s=self)
             raise HTTPError(NOT_FOUND)
 
         # Do schedule tag check
@@ -2937,7 +2938,7 @@
         # Do some pre-flight checks - must exist, must be move to another
         # CommonHomeChild in the same Home, destination resource must not exist
         if not self.exists():
-            log.debug("Resource not found: %s" % (self,))
+            log.debug("Resource not found: {s!r}", s=self)
             raise HTTPError(NOT_FOUND)
 
         # Do schedule tag check
@@ -3556,7 +3557,7 @@
 
         # Handle sharing
         if self.isShareeResource():
-            log.debug("Removing shared resource %s" % (self,))
+            log.debug("Removing shared resource {s!r}", s=self)
             yield self.removeShareeResource(request)
             # Re-initialize to get stuff setup again now we have no object
             self._initializeWithObject(None, self._newStoreParent)
@@ -3672,7 +3673,7 @@
         except Exception as err:
 
             if isinstance(err, ValueError):
-                log.error("Error while handling (vCard) PUT: %s" % (err,))
+                log.error("Error while handling (vCard) PUT: {ex}", ex=err)
                 raise HTTPError(StatusResponse(responsecode.BAD_REQUEST, str(err)))
             else:
                 raise
@@ -3717,7 +3718,7 @@
         @rtype: L{davxml.ACL}
         """
         if not self.exists():
-            log.debug("Resource not found: %s" % (self,))
+            log.debug("Resource not found: {s!r}", s=self)
             raise HTTPError(NOT_FOUND)
 
         if not self._parentResource.isShareeResource():
@@ -4044,7 +4045,7 @@
     @inlineCallbacks
     def http_GET(self, request):
         if not self.exists():
-            log.debug("Resource not found: %s" % (self,))
+            log.debug("Resource not found: {s!r}", s=self)
             raise HTTPError(NOT_FOUND)
 
         returnValue(
@@ -4059,7 +4060,7 @@
         Override http_DELETE to validate 'depth' header.
         """
         if not self.exists():
-            log.debug("Resource not found: %s" % (self,))
+            log.debug("Resource not found: {s!r}", s=self)
             raise HTTPError(NOT_FOUND)
 
         return self.storeRemove(request)
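The hunks above all follow one pattern: eager `%`-interpolated messages become format-style calls whose fields are passed as keyword arguments. A minimal sketch of why this matters, where `MiniLogger` is a hypothetical stand-in for `twisted.logger.Logger` (used only so the example runs without Twisted installed); the real Logger likewise stores the format string and fields and substitutes them only when an observer actually renders the event:

```python
class MiniLogger(object):
    """
    Hypothetical stand-in for twisted.logger.Logger: stores events
    unformatted and substitutes fields only on demand.
    """
    def __init__(self):
        self.events = []

    def error(self, format, **fields):
        # No string work happens here; a filtered-out event costs nothing.
        self.events.append((format, fields))

    debug = error  # same signature at every level, for this sketch

    def render(self, index):
        format, fields = self.events[index]
        return format.format(**fields)


log = MiniLogger()

# Old style -- eager %-interpolation, paid even for discarded events:
#     log.error("Unable to store attachment: %s" % (e,))
# New style -- lazy, structured fields:
log.error("Unable to store attachment: {ex}", ex=ValueError("boom"))

# The "{s!r}" conversion used throughout the diff applies repr() to the
# field, replacing the old "%s" % (self,) pattern:
class Resource(object):
    def __repr__(self):
        return "<Resource /calendars/>"

log.debug("Resource not found: {s!r}", s=Resource())

print(log.render(0))  # Unable to store attachment: boom
print(log.render(1))  # Resource not found: <Resource /calendars/>
```

A side benefit of keyword fields is that observers receive the structured values, not just a flattened string.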

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/test/test_config.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/test/test_config.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/test/test_config.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -17,17 +17,19 @@
 import socket
 
 from plistlib import writePlist #@UnresolvedImport
-from twext.python.log import LogLevel
-from twext.python.test.test_log import defaultLogLevel, logLevelForNamespace
 
+from twext.python.log import Logger
+
 from twistedcaldav.config import config, ConfigDict, mergeData
 from twistedcaldav.resource import CalDAVResource
 from twistedcaldav.stdconfig import DEFAULT_CONFIG, PListConfigProvider, \
     RELATIVE_PATHS
 from twistedcaldav.test.util import TestCase
 
+from twisted.logger import LogLevel
 
 
+
 testConfig = """<?xml version="1.0" encoding="UTF-8"?>
 <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
 <plist version="1.0">
@@ -404,6 +406,7 @@
         """
         Logging module configures properly.
         """
+        defaultLogLevel = Logger.filterPredicate.logLevelForNamespace(None)
         self.assertNotEqual(
             defaultLogLevel, LogLevel.error,
             "This test assumes the default log level is not error."
@@ -412,19 +415,19 @@
         config.setDefaults(DEFAULT_CONFIG)
         config.reload()
 
-        self.assertEquals(logLevelForNamespace(None), defaultLogLevel)
-        self.assertEquals(logLevelForNamespace("some.namespace"), defaultLogLevel)
+        self.assertEquals(Logger.filterPredicate.logLevelForNamespace(None), defaultLogLevel)
+        self.assertEquals(Logger.filterPredicate.logLevelForNamespace("some.namespace"), defaultLogLevel)
 
         config.load(self.testConfig)
 
-        self.assertEquals(logLevelForNamespace(None), LogLevel.error)
-        self.assertEquals(logLevelForNamespace("some.namespace"), LogLevel.debug)
+        self.assertEquals(Logger.filterPredicate.logLevelForNamespace(None), LogLevel.error)
+        self.assertEquals(Logger.filterPredicate.logLevelForNamespace("some.namespace"), LogLevel.debug)
 
         writePlist({}, self.testConfig)
         config.reload()
 
-        self.assertEquals(logLevelForNamespace(None), defaultLogLevel)
-        self.assertEquals(logLevelForNamespace("some.namespace"), defaultLogLevel)
+        self.assertEquals(Logger.filterPredicate.logLevelForNamespace(None), defaultLogLevel)
+        self.assertEquals(Logger.filterPredicate.logLevelForNamespace("some.namespace"), defaultLogLevel)
 
 
     def test_ConfigDict(self):

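The test changes above replace the old module-level `logLevelForNamespace` helper with a query on the logger's filter predicate, which resolves the effective level for a dotted namespace by walking from the most specific segment upward. A Twisted-free emulation of that lookup (`MiniPredicate` is hypothetical; the real class is `twisted.logger.LogLevelFilterPredicate`):

```python
class MiniPredicate(object):
    """Hypothetical sketch of per-namespace log-level resolution."""
    def __init__(self, default="info"):
        self.default = default
        self.byNamespace = {}

    def setLogLevelForNamespace(self, namespace, level):
        self.byNamespace[namespace] = level

    def logLevelForNamespace(self, namespace):
        # Try the full dotted name, then each parent, most specific first.
        while namespace:
            if namespace in self.byNamespace:
                return self.byNamespace[namespace]
            namespace = namespace.rpartition(".")[0]
        return self.default


p = MiniPredicate()
p.setLogLevelForNamespace("some.namespace", "debug")

print(p.logLevelForNamespace("some.namespace.child"))  # debug
print(p.logLevelForNamespace("other"))                 # info
print(p.logLevelForNamespace(None))                    # info (the default)
```

This is why the test can pass `None` to ask for the default level while `"some.namespace"` picks up the per-namespace override from the config.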
Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/test/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/test/util.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/test/util.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -572,7 +572,7 @@
         # Attempt to exit promptly if a traceback is displayed, so we don't
         # deal with timeouts.
         if "Traceback" in data and not self.terminated:
-            log.error("Terminating process due to output: %s" % (data,))
+            log.error("Terminating process due to output: {d}", d=data)
             self.terminated = True
             self.transport.signalProcess("TERM")
 

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/timezones.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/timezones.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/timezones.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -139,7 +139,7 @@
                 dbpath.remove()
                 TimezoneCache.copyPackage("Updating")
             else:
-                log.info("Valid timezones at %s" % (dbpath.path,))
+                log.info("Valid timezones at {p}", p=dbpath.path)
 
 
     @staticmethod

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/timezonestdservice.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/timezonestdservice.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/timezonestdservice.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -632,7 +632,7 @@
                 try:
                     vtz = readVTZ(self.aliases[tzid])
                 except TimezoneException:
-                    log.error("Failed to find timezone data for alias: %s" % (tzid,))
+                    log.error("Failed to find timezone data for alias: {tzid}", tzid=tzid)
                     return None
                 else:
                     vtz = vtz.duplicate()
@@ -640,7 +640,7 @@
                     addVTZ(tzid, vtz)
                     calendar.addComponent(vtz.getComponents()[0].duplicate())
             else:
-                log.error("Failed to find timezone data for: %s" % (tzid,))
+                log.error("Failed to find timezone data for: {tzid}", tzid=tzid)
                 return None
 
         return calendar
@@ -700,7 +700,7 @@
                         tzdata = f.read()
                     md5 = hashlib.md5(tzdata).hexdigest()
                 except IOError:
-                    log.error("Unable to read timezone file: %s" % (fullPath,))
+                    log.error("Unable to read timezone file: {p}", p=fullPath)
                     continue
 
                 if checkIfChanged:
@@ -716,7 +716,7 @@
             with open(os.path.join(self.basepath, "links.txt")) as f:
                 aliases = f.read()
         except IOError, e:
-            log.error("Unable to read links.txt file: %s" % (str(e),))
+            log.error("Unable to read links.txt file: {ex}", ex=str(e))
             aliases = ""
 
         try:
@@ -729,9 +729,9 @@
                             tzinfo.aliases += (alias_from,)
                         self.aliases[alias_from] = alias_to
                 else:
-                    log.error("Missing alias from '%s' to '%s'" % (alias_from, alias_to,))
+                    log.error("Missing alias from '{fr}' to '{to}'", fr=alias_from, to=alias_to)
         except ValueError:
-            log.error("Unable to parse links.txt file: %s" % (str(e),))
+            log.error("Unable to parse links.txt file: {ex}", ex=str(e))
 
 
     def updateDatabase(self):
@@ -758,7 +758,7 @@
         self.discovered = False
         self._url = None
 
-        log.debug("Configuring secondary server with basepath: %s" % (self.basepath,))
+        log.debug("Configuring secondary server with basepath: {p}", p=self.basepath)
 
         if not os.path.exists(self.basepath):
             os.makedirs(self.basepath)
@@ -799,7 +799,7 @@
             if self.timezones[tzid].dtstamp < newtimezones[tzid].dtstamp:
                 changedtzids.add(tzid)
 
-        log.debug("Fetching %d new, %d changed timezones on secondary server" % (len(newtzids), len(changedtzids),))
+        log.debug("Fetching {n} new, {c} changed timezones on secondary server", n=len(newtzids), c=len(changedtzids))
 
         # Now apply changes - do requests in parallel for speedier fetching
         BATCH = 5
@@ -833,17 +833,17 @@
             self.uri = "https://%s/.well-known/timezone" % (self.uri,)
 
         testURI = "%s?action=capabilities" % (self.uri,)
-        log.debug("Discovering secondary server: %s" % (testURI,))
+        log.debug("Discovering secondary server: {uri}", uri=testURI)
         response = (yield getURL(testURI))
         if response is None or response.code / 100 != 2:
-            log.error("Unable to discover secondary server: %s" % (testURI,))
+            log.error("Unable to discover secondary server: {uri}", uri=testURI)
             self.discovered = False
             returnValue(False)
 
         # Cache the redirect target
         if hasattr(response, "location"):
             self.uri = response.location
-            log.debug("Redirected secondary server to: %s" % (self.uri,))
+            log.debug("Redirected secondary server to: {uri}", uri=self.uri)
 
         # TODO: Ignoring the data from capabilities for now
 
@@ -867,7 +867,7 @@
         url = "%s?action=list" % (self.uri,)
         if self.dtstamp:
             url = "%s&changedsince=%s" % (url, self.dtstamp,)
-        log.debug("Getting timezone list from secondary server: %s" % (url,))
+        log.debug("Getting timezone list from secondary server: {url}", url=url)
         response = (yield getURL(url))
         if response is None or response.code / 100 != 2:
             returnValue(None)
@@ -888,9 +888,9 @@
                 aliases = timezone.get("aliases", ())
                 timezones[tzid] = TimezoneInfo(tzid, aliases, lastmod, None)
         except (ValueError, KeyError):
-            log.debug("Failed to parse JSON timezone list response: %s" % (response.data,))
+            log.debug("Failed to parse JSON timezone list response: {resp}", resp=response.data)
             returnValue(None)
-        log.debug("Got %s timezones from secondary server" % (len(timezones),))
+        log.debug("Got {len} timezones from secondary server", len=len(timezones))
 
         returnValue((dtstamp, timezones,))
 
@@ -899,7 +899,7 @@
     def _getTimezoneFromServer(self, tzinfo):
         # List all from the server
         url = "%s?action=get&tzid=%s" % (self.uri, tzinfo.tzid,)
-        log.debug("Getting timezone from secondary server: %s" % (url,))
+        log.debug("Getting timezone from secondary server: {url}", url=url)
         response = (yield getURL(url))
         if response is None or response.code / 100 != 2:
             returnValue(None)
@@ -908,14 +908,14 @@
         ct = ct.split(";", 1)
         ct = ct[0]
         if ct not in ("text/calendar",):
-            log.error("Invalid content-type '%s' for tzid : %s" % (ct, tzinfo.tzid,))
+            log.error("Invalid content-type '{ct}' for tzid : {tzid}", ct=ct, tzid=tzinfo.tzid)
             returnValue(None)
 
         ical = response.data
         try:
             calendar = Calendar.parseText(ical)
         except InvalidData:
-            log.error("Invalid calendar data for tzid: %s" % (tzinfo.tzid,))
+            log.error("Invalid calendar data for tzid: {tzid}", tzid=tzinfo.tzid)
             returnValue(None)
         ical = calendar.getText()
 
@@ -928,7 +928,7 @@
             with open(tzpath, "w") as f:
                 f.write(ical)
         except IOError, e:
-            log.error("Unable to write calendar file for %s: %s" % (tzinfo.tzid, str(e),))
+            log.error("Unable to write calendar file for {tzid}: {ex}", tzid=tzinfo.tzid, ex=str(e))
         else:
             self.timezones[tzinfo.tzid] = tzinfo
 
@@ -939,4 +939,4 @@
             os.remove(tzpath)
             del self.timezones[tzid]
         except IOError, e:
-            log.error("Unable to write calendar file for %s: %s" % (tzid, str(e),))
+            log.error("Unable to write calendar file for {tzid}: {ex}", tzid=tzid, ex=str(e))

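Some hunks in this file pass `ex=str(e)` while others pass the exception object itself as `ex=e`. With format-style substitution both render identically, because `str.format()` stringifies a plain `{ex}` field; a quick check:

```python
err = IOError("disk full")

# Passing the exception object and passing str(e) produce the same text:
asString = "Unable to read links.txt file: {ex}".format(ex=str(err))
asObject = "Unable to read links.txt file: {ex}".format(ex=err)

print(asObject)  # Unable to read links.txt file: disk full
```

The `str()` call is therefore redundant in the new style, though harmless; passing the object keeps the original exception available to structured observers.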
Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/upgrade.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/upgrade.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/upgrade.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -93,14 +93,14 @@
         try:
             uid = pwd.getpwnam(config.UserName).pw_uid
         except KeyError:
-            log.error("User not found: %s" % (config.UserName,))
+            log.error("User not found: {user}", user=config.UserName)
 
     gid = -1
     if config.GroupName:
         try:
             gid = grp.getgrnam(config.GroupName).gr_gid
         except KeyError:
-            log.error("Group not found: %s" % (config.GroupName,))
+            log.error("Group not found: {grp}", grp=config.GroupName)
 
     return uid, gid
 
@@ -143,7 +143,7 @@
             # Skip directories
             continue
 
-        log.debug("Processing: %s" % (resPath,))
+        log.debug("Processing: {path}", path=resPath)
         needsRewrite = False
         with open(resPath) as res:
             data = res.read()
@@ -151,12 +151,12 @@
             try:
                 data, fixed = fixBadQuotes(data)
                 if fixed:
-                    log.warn("Fixing bad quotes in %s" % (resPath,))
+                    log.warn("Fixing bad quotes in {path}", path=resPath)
                     needsRewrite = True
             except Exception, e:
                 log.error(
-                    "Error while fixing bad quotes in %s: %s" %
-                    (resPath, e)
+                    "Error while fixing bad quotes in {path}: {ex}",
+                    path=resPath, ex=e,
                 )
                 errorOccurred = True
                 continue
@@ -164,12 +164,12 @@
             try:
                 data, fixed = removeIllegalCharacters(data)
                 if fixed:
-                    log.warn("Removing illegal characters in %s" % (resPath,))
+                    log.warn("Removing illegal characters in {path}", path=resPath)
                     needsRewrite = True
             except Exception, e:
                 log.error(
-                    "Error while removing illegal characters in %s: %s" %
-                    (resPath, e)
+                    "Error while removing illegal characters in {path}: {ex}",
+                    path=resPath, ex=e,
                 )
                 errorOccurred = True
                 continue
@@ -177,12 +177,12 @@
             try:
                 data, fixed = (yield normalizeCUAddrs(data, directory, cuaCache))
                 if fixed:
-                    log.debug("Normalized CUAddrs in %s" % (resPath,))
+                    log.debug("Normalized CUAddrs in {path}", path=resPath)
                     needsRewrite = True
             except Exception, e:
                 log.error(
-                    "Error while normalizing %s: %s" %
-                    (resPath, e)
+                    "Error while normalizing {path}: {ex}",
+                    path=resPath, ex=e,
                 )
                 errorOccurred = True
                 continue
@@ -229,7 +229,7 @@
 
     errorOccurred = False
 
-    log.debug("Upgrading calendar home: %s" % (homePath,))
+    log.debug("Upgrading calendar home: {path}", path=homePath)
 
     try:
         for cal in os.listdir(homePath):
@@ -242,7 +242,7 @@
                 # Delete the old, now obsolete, notifications directory.
                 rmdir(calPath)
                 continue
-            log.debug("Upgrading calendar: %s" % (calPath,))
+            log.debug("Upgrading calendar: {path}", path=calPath)
             if not (yield upgradeCalendarCollection(calPath, directory, cuaCache)):
                 errorOccurred = True
 
@@ -265,7 +265,7 @@
                 except:
                     raise
     except Exception, e:
-        log.error("Failed to upgrade calendar home %s: %s" % (homePath, e))
+        log.error("Failed to upgrade calendar home {path}: {ex}", path=homePath, ex=e)
         raise
 
     returnValue(errorOccurred)
@@ -319,8 +319,8 @@
             )
 
         log.debug(
-            "Moved the calendar user proxy database from '%s' to '%s'."
-            % (oldDbPath, newDbPath,)
+            "Moved the calendar user proxy database from '{old}' to '{new}'.",
+            old=oldDbPath, new=newDbPath
         )
 
 
@@ -375,8 +375,8 @@
             # Now delete the on disk representation of principals
             rmdir(oldPrincipals)
             log.debug(
-                "Removed the old principal directory at '%s'."
-                % (oldPrincipals,)
+                "Removed the old principal directory at '{path}'.",
+                path=oldPrincipals,
             )
 
         calRoot = os.path.join(docRoot, "calendars")
@@ -386,7 +386,7 @@
 
             # Move calendar homes to new location:
 
-            log.warn("Moving calendar homes to %s" % (uidHomes,))
+            log.warn("Moving calendar homes to {path}", path=uidHomes)
 
             if os.path.exists(uidHomes):
                 for home in os.listdir(uidHomes):
@@ -470,7 +470,7 @@
                 os.chown(inboxItemsFile, uid, gid)
 
             if total:
-                log.warn("Processing %d calendar homes in %s" % (total, uidHomes))
+                log.warn("Processing {total} calendar homes in {path}", total=total, path=uidHomes)
 
                 # Upgrade calendar homes in the new location:
                 count = 0
@@ -495,8 +495,8 @@
                                     count += 1
                                     if count % 10 == 0:
                                         log.warn(
-                                            "Processed calendar home %d of %d"
-                                            % (count, total)
+                                            "Processed calendar home {count} of {total}",
+                                            count=count, total=total,
                                         )
                 log.warn("Done processing calendar homes")
 
@@ -569,7 +569,7 @@
 
     def flattenHome(calHome):
 
-        log.debug("Flattening calendar home: %s" % (calHome,))
+        log.debug("Flattening calendar home: {path}", path=calHome)
 
         try:
             for cal in os.listdir(calHome):
@@ -598,19 +598,19 @@
 #                                newPath = os.path.join(calHome, child)
 #                                if os.path.exists(newPath):
 #                                    newPath = os.path.join(calHome, str(uuid.uuid4()))
-#                                log.debug("Moving up calendar: %s" % (childCollection,))
+#                                log.debug("Moving up calendar: {path}", path=childCollection)
 #                                os.rename(childCollection, newPath)
 #                            else:
 #                                scanCollection(childCollection)
 
                 if os.path.isdir(calPath):
-                    # log.debug("Regular collection scan: %s" % (calPath,))
+                    # log.debug("Regular collection scan: {path}", path=calPath)
                     # scanCollection(calPath)
-                    log.warn("Regular collection hidden: %s" % (calPath,))
+                    log.warn("Regular collection hidden: {path}", path=calPath)
                     os.rename(calPath, os.path.join(calHome, ".collection." + os.path.basename(calPath)))
 
         except Exception, e:
-            log.error("Failed to upgrade calendar home %s: %s" % (calHome, e))
+            log.error("Failed to upgrade calendar home {path}: {ex}", path=calHome, ex=e)
             return succeed(False)
 
         return succeed(True)
@@ -780,7 +780,7 @@
     docRoot = config.DocumentRoot
 
     if not os.path.exists(docRoot):
-        log.info("DocumentRoot (%s) doesn't exist; skipping migration" % (docRoot,))
+        log.info("DocumentRoot ({path}) doesn't exist; skipping migration", path=docRoot)
         return
 
     versionFilePath = os.path.join(docRoot, ".calendarserver_version")
@@ -792,22 +792,22 @@
                 onDiskVersion = int(versionFile.read().strip())
         except IOError:
             log.error(
-                "Cannot open %s; skipping migration" %
-                (versionFilePath,)
+                "Cannot open {path}; skipping migration",
+                path=versionFilePath,
             )
         except ValueError:
             log.error(
-                "Invalid version number in %s; skipping migration" %
-                (versionFilePath,)
+                "Invalid version number in {path}; skipping migration",
+                path=versionFilePath,
             )
 
     uid, gid = getCalendarServerIDs(config)
 
     for version, method in upgradeMethods:
         if onDiskVersion < version:
-            log.warn("Upgrading to version %d" % (version,))
+            log.warn("Upgrading to version {vers}", vers=version)
             (yield method(config, directory))
-            log.warn("Upgraded to version %d" % (version,))
+            log.warn("Upgraded to version {vers}", vers=version)
             with open(versionFilePath, "w") as verFile:
                 verFile.write(str(version))
             os.chown(versionFilePath, uid, gid)
@@ -841,7 +841,7 @@
     record = yield directory.recordWithShortName(directory.oldNameToRecordType(recordType), shortName)
     if record is None:
         # We will simply ignore this and not write out an fb-set entry
-        log.error("Can't update free-busy href; %s is not in the directory" % shortName)
+        log.error("Can't update free-busy href; {user} is not in the directory", user=shortName)
         returnValue("")
 
     uid = record.uid
@@ -1023,7 +1023,7 @@
 
             if augmentRecords:
                 yield augmentService.addAugmentRecords(augmentRecords)
-            log.warn("Migrated %d auto-schedule settings" % (len(augmentRecords),))
+            log.warn("Migrated {len} auto-schedule settings", len=len(augmentRecords))
 
 
 
@@ -1227,24 +1227,24 @@
                 totalItems = len(itemsToProcess)
                 ignoreUUIDs = set()
                 for ctr, inboxItem in enumerate(itemsToProcess):
-                    log.info("Processing %d/%d inbox item: %s" % (ctr + 1, totalItems, inboxItem,))
+                    log.info("Processing {ctr}/{total} inbox item: {item}", ctr=ctr + 1, total=totalItems, item=inboxItem)
                     ignore, uuid, ignore, fileName = inboxItem.rsplit("/", 3)
 
                     if uuid in ignoreUUIDs:
-                        log.debug("Ignored inbox item - uuid ignored: %s" % (inboxItem,))
+                        log.debug("Ignored inbox item - uuid ignored: {item}", item=inboxItem)
                         inboxItems.remove(inboxItem)
                         continue
 
                     record = yield directory.recordWithUID(uuid)
                     if record is None:
-                        log.debug("Ignored inbox item - no record: %s" % (inboxItem,))
+                        log.debug("Ignored inbox item - no record: {item}", item=inboxItem)
                         inboxItems.remove(inboxItem)
                         ignoreUUIDs.add(uuid)
                         continue
 
                     principal = yield principalCollection.principalForRecord(record)
                     if principal is None or not isinstance(principal, DirectoryCalendarPrincipalResource):
-                        log.debug("Ignored inbox item - no principal: %s" % (inboxItem,))
+                        log.debug("Ignored inbox item - no principal: {item}", item=inboxItem)
                         inboxItems.remove(inboxItem)
                         ignoreUUIDs.add(uuid)
                         continue
@@ -1260,7 +1260,7 @@
                     try:
                         calendarHome = yield principal.calendarHome(request)
                         if calendarHome is None:
-                            log.debug("Ignored inbox item - no calendar home: %s" % (inboxItem,))
+                            log.debug("Ignored inbox item - no calendar home: {item}", item=inboxItem)
                             inboxItems.remove(inboxItem)
                             ignoreUUIDs.add(uuid)
                             continue
@@ -1288,13 +1288,13 @@
                                     )
                                 except Exception, e:
                                     log.error(
-                                        "Error processing inbox item: %s (%s)"
-                                        % (inboxItem, e)
+                                        "Error processing inbox item: {item} ({ex})",
+                                        item=inboxItem, ex=e,
                                     )
                             else:
-                                log.debug("Ignored inbox item - no resource: %s" % (inboxItem,))
+                                log.debug("Ignored inbox item - no resource: {item}", item=inboxItem)
                         else:
-                            log.debug("Ignored inbox item - no inbox: %s" % (inboxItem,))
+                            log.debug("Ignored inbox item - no inbox: {item}", item=inboxItem)
 
                         inboxItems.remove(inboxItem)
 
@@ -1305,7 +1305,7 @@
             # FIXME: Some generic exception handlers to deal with unexpected errors that for some reason
             # we are not logging properly.
             except Exception, e:
-                log.error("Exception during inbox item processing: %s" % (e,))
+                log.error("Exception during inbox item processing: {ex}", ex=e)
 
             except:
                 log.error("Unknown exception during inbox item processing.")
@@ -1336,7 +1336,7 @@
         the inbox item.
         """
 
-        log.debug("Processing inbox item %s" % (inboxItem,))
+        log.debug("Processing inbox item {item}", item=inboxItem)
 
         txn = request._newStoreTransaction
 
@@ -1351,7 +1351,7 @@
         if calendar.mainType() is not None:
             try:
                 method = calendar.propertyValue("METHOD")
-                log.info("Inbox item method is %s" % (method,))
+                log.info("Inbox item method is {method}", method=method)
             except ValueError:
                 returnValue(None)
 
@@ -1374,7 +1374,7 @@
                 internal_request=False, noAttendeeRefresh=True
             )
         else:
-            log.warn("Removing invalid inbox item: %s" % (uri,))
+            log.warn("Removing invalid inbox item: {uri}", uri=uri)
 
         #
         # Remove item

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/util.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/util.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -128,14 +128,14 @@
         try:
             cpuCount = getNCPU()
         except NotImplementedError, e:
-            log.error("Unable to detect number of CPUs: %s" % (str(e),))
+            log.error("Unable to detect number of CPUs: {ex}", ex=str(e))
             return minimum
 
     if memSize is None:
         try:
             memSize = getMemorySize()
         except NotImplementedError, e:
-            log.error("Unable to detect amount of installed RAM: %s" % (str(e),))
+            log.error("Unable to detect amount of installed RAM: {ex}", ex=str(e))
             return minimum
 
     countByCore = perCPU * cpuCount
@@ -384,7 +384,7 @@
 
         self.factory.retried = True
 
-        # self.log.debug("Got a 401 trying to inject [%s]" % (self.headers,))
+        # self.log.debug("Got a 401 trying to inject [{hdrs}]", hdrs=self.headers)
         details = {}
         basicAvailable = digestAvailable = False
         wwwauth = self.headers.get("www-authenticate")

Modified: CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/vcard.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/vcard.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/twistedcaldav/vcard.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -564,17 +564,17 @@
         @raise InvalidVCardDataError: if the given vcard data is not valid.
         """
         if self.name() != "VCARD":
-            log.debug("Not a vcard: %s" % (self,))
+            log.debug("Not a vcard: {s!r}", s=self)
             raise InvalidVCardDataError("Not a vcard")
 
         # Do underlying vCard library validation with data fix
         fixed, unfixed = self._pycard.validate(doFix=doFix)
         if unfixed:
-            log.debug("vCard data had unfixable problems:\n  %s" % ("\n  ".join(unfixed),))
+            log.debug("vCard data had unfixable problems:\n  {problems}", problems="\n  ".join(unfixed))
             if doRaise:
                 raise InvalidVCardDataError("Address data had unfixable problems:\n  %s" % ("\n  ".join(unfixed),))
         if fixed:
-            log.debug("vCard data had fixable problems:\n  %s" % ("\n  ".join(fixed),))
+            log.debug("vCard data had fixable problems:\n  {problems}", problems="\n  ".join(fixed))
 
         return fixed, unfixed
 

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/base/datastore/subpostgres.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/base/datastore/subpostgres.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/base/datastore/subpostgres.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -82,14 +82,14 @@
     def outReceived(self, out):
         for line in out.split("\n"):
             if line:
-                self.log.info("{message}", message=line)
+                self.log.info("{msg}", msg=line)
         # self.lineReceiver.dataReceived(out)
 
 
     def errReceived(self, err):
         for line in err.split("\n"):
             if line:
-                self.log.error("{message}", message=line)
+                self.log.error("{msg}", msg=line)
         self.lineReceiver.dataReceived(err)
 
 

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/base/datastore/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/base/datastore/util.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/base/datastore/util.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -162,8 +162,7 @@
         uu = UUID(normstr)
     except ValueError:
         if isURI:
-            log.info(format="normalizing urn:uuid: without UUID: %(uid)r",
-                     uid=somestr)
+            log.info("normalizing urn:uuid: without UUID: {uid!r}", uid=somestr)
         # not a UUID, whatever
         return somestr
     else:

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/base/propertystore/test/base.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/base/propertystore/test/base.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/base/propertystore/test/base.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -602,6 +602,7 @@
 
         self.assertEquals(self.propertyStore.get(name, None), None)
         self.assertEquals(len(self.propertyStore), 0)
+        self.flushLoggedErrors()
 
 
     @inlineCallbacks

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/file.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/file.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/file.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -484,10 +484,10 @@
         fixed, unfixed = component.validCalendarData(doFix=True, doRaise=False)
 
         if unfixed:
-            self.log.error("Calendar data at %s had unfixable problems:\n  %s" % (self._path.path, "\n  ".join(unfixed),))
+            self.log.error("Calendar data at {path} had unfixable problems:\n  {problems}", path=self._path.path, problems="\n  ".join(unfixed))
 
         if fixed:
-            self.log.error("Calendar data at %s had fixable problems:\n  %s" % (self._path.path, "\n  ".join(fixed),))
+            self.log.error("Calendar data at {path} had fixable problems:\n  {problems}", path=self._path.path, problems="\n  ".join(fixed))
 
         return component
 

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/index_file.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/index_file.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/index_file.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -165,7 +165,7 @@
             name_utf8 = name.encode("utf-8")
             if name is not None and self.resource.getChild(name_utf8) is None:
                 # Clean up
-                log.error("Stale resource record found for child %s with UID %s in %s" % (name, uid, self.resource))
+                log.error("Stale resource record found for child {name} with UID {uid} in {rsrc!r}", name=name, uid=uid, rsrc=self.resource)
                 self._delete_from_db(name, uid, False)
                 self._db_commit()
             else:
@@ -270,7 +270,7 @@
         names = self.notExpandedBeyond(minDate)
         # Actually expand recurrence max
         for name in names:
-            self.log.info("Search falls outside range of index for %s %s" % (name, minDate))
+            self.log.info("Search falls outside range of index for {name} {date}", name=name, date=minDate)
             self.reExpandResource(name, minDate)
 
 
@@ -383,8 +383,10 @@
                     del row[9]
                 results.append(row)
             else:
-                log.error("Calendar resource %s is missing from %s. Removing from index."
-                          % (name, self.resource))
+                log.error(
+                    "Calendar resource {name} is missing from {rsrc!r}. Removing from index.",
+                    name=name, rsrc=self.resource,
+                )
                 self.deleteResource(name)
 
         return results
@@ -406,9 +408,11 @@
             if self.resource.getChild(name.encode("utf-8")):
                 results.append(row)
             else:
-                log.error("Calendar resource %s is missing from %s. Removing from index."
-                          % (name, self.resource))
-                self.deleteResource(name)
+                log.error(
+                    "Calendar resource {name} is missing from {rsrc!r}. Removing from index.",
+                    name=name, rsrc=self.resource,
+                )
+                self.deleteResource(name)
 
         return results
 
@@ -735,7 +738,7 @@
             instances = calendar.expandTimeRanges(expand, ignoreInvalidInstances=reCreate)
             recurrenceLimit = instances.limit
         except InvalidOverriddenInstanceError, e:
-            log.error("Invalid instance %s when indexing %s in %s" % (e.rid, name, self.resource,))
+            log.error("Invalid instance {rid} when indexing {name} in {rsrc!r}", rid=e.rid, name=name, rsrc=self.resource)
             raise
 
         # Now coerce indexing to off if needed
@@ -876,9 +879,9 @@
 
     def reserveUID(self, uid):
         uid = uid.encode('utf-8')
-        self.log.debug("Reserving UID %r @ %r" % (
-            uid,
-            self.index.resource.fp.path)
+        self.log.debug(
+            "Reserving UID {uid} @ {path}",
+            uid=uid, path=self.index.resource.fp.path,
         )
 
         def _handleFalse(result):
@@ -897,9 +900,9 @@
 
     def unreserveUID(self, uid):
         uid = uid.encode('utf-8')
-        self.log.debug("Unreserving UID %r @ %r" % (
-            uid,
-            self.index.resource.fp.path)
+        self.log.debug(
+            "Unreserving UID {uid} @ {path}",
+            uid=uid, path=self.index.resource.fp.path,
         )
 
         def _handleFalse(result):
@@ -916,9 +919,9 @@
 
     def isReservedUID(self, uid):
         uid = uid.encode('utf-8')
-        self.log.debug("Is reserved UID %r @ %r" % (
-            uid,
-            self.index.resource.fp.path)
+        self.log.debug(
+            "Is reserved UID {uid} @ {path}",
+            uid=uid, path=self.index.resource.fp.path,
         )
 
         def _checkValue((flags, value)):
@@ -956,7 +959,7 @@
                 % (uid, self.index.resource)
             )
         except sqlite.Error, e:
-            log.error("Unable to reserve UID: %s", (e,))
+            log.error("Unable to reserve UID: {ex}", ex=e)
             self.index._db_rollback()
             raise
 
@@ -980,7 +983,7 @@
                         "delete from RESERVED where UID = :1", uid)
                     self.index._db_commit()
                 except sqlite.Error, e:
-                    log.error("Unable to unreserve UID: %s", (e,))
+                    log.error("Unable to unreserve UID: {ex}", ex=e)
                     self.index._db_rollback()
                     raise
 
@@ -1008,7 +1011,7 @@
                     self.index._db_execute("delete from RESERVED where UID = :1", uid)
                     self.index._db_commit()
                 except sqlite.Error, e:
-                    log.error("Unable to unreserve UID: %s", (e,))
+                    log.error("Unable to unreserve UID: {ex}", ex=e)
                     self.index._db_rollback()
                     raise
                 return False
@@ -1112,7 +1115,7 @@
             try:
                 stream = child.open()
             except (IOError, OSError), e:
-                log.error("Unable to open resource %s: %s" % (name, e))
+                log.error("Unable to open resource {name}: {ex}", name=name, ex=e)
                 continue
 
             # FIXME: This is blocking I/O
@@ -1121,9 +1124,9 @@
                 calendar.validCalendarData()
                 calendar.validCalendarForCalDAV(methodAllowed=False)
             except ValueError:
-                log.error("Non-calendar resource: %s" % (name,))
+                log.error("Non-calendar resource: {name}", name=name)
             else:
-                # og.info("Indexing resource: %s" % (name,))
+                # log.info("Indexing resource: {name}", name=name)
                 self.addResource(name, calendar, True, reCreate=True)
             finally:
                 stream.close()
@@ -1227,7 +1230,7 @@
             try:
                 stream = child.open()
             except (IOError, OSError), e:
-                log.error("Unable to open resource %s: %s" % (name, e))
+                log.error("Unable to open resource {name}: {ex}", name=name, ex=e)
                 continue
 
             # FIXME: This is blocking I/O
@@ -1236,9 +1239,9 @@
                 calendar.validCalendarData()
                 calendar.validCalendarForCalDAV(methodAllowed=True)
             except ValueError:
-                log.error("Non-calendar resource: %s" % (name,))
+                log.error("Non-calendar resource: {name}", name=name)
             else:
-                # log.info("Indexing resource: %s" % (name,))
+                # log.info("Indexing resource: {name}", name=name)
                 self.addResource(name, calendar, True, reCreate=True)
             finally:
                 stream.close()

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/query/filter.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/query/filter.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/query/filter.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -396,32 +396,32 @@
         if level == 0:
             # Must have VCALENDAR at the top
             if (self.filter_name != "VCALENDAR") or timerange:
-                log.info("Top-level comp-filter must be VCALENDAR, instead: %s" % (self.filter_name,))
+                log.info("Top-level comp-filter must be VCALENDAR, instead: {name}", name=self.filter_name)
                 return False
         elif level == 1:
             # Disallow VCALENDAR, VALARM, STANDARD, DAYLIGHT, AVAILABLE at the top, everything else is OK
             if self.filter_name in ("VCALENDAR", "VALARM", "STANDARD", "DAYLIGHT", "AVAILABLE"):
-                log.info("comp-filter wrong component type: %s" % (self.filter_name,))
+                log.info("comp-filter wrong component type: {name}", name=self.filter_name)
                 return False
 
             # time-range only on VEVENT, VTODO, VJOURNAL, VFREEBUSY, VAVAILABILITY
             if timerange and self.filter_name not in ("VEVENT", "VTODO", "VJOURNAL", "VFREEBUSY", "VAVAILABILITY"):
-                log.info("time-range cannot be used with component %s" % (self.filter_name,))
+                log.info("time-range cannot be used with component {name}", name=self.filter_name)
                 return False
         elif level == 2:
             # Disallow VCALENDAR, VTIMEZONE, VEVENT, VTODO, VJOURNAL, VFREEBUSY, VAVAILABILITY at the top, everything else is OK
             if (self.filter_name in ("VCALENDAR", "VTIMEZONE", "VEVENT", "VTODO", "VJOURNAL", "VFREEBUSY", "VAVAILABILITY")):
-                log.info("comp-filter wrong sub-component type: %s" % (self.filter_name,))
+                log.info("comp-filter wrong sub-component type: {name}", name=self.filter_name)
                 return False
 
             # time-range only on VALARM, AVAILABLE
             if timerange and self.filter_name not in ("VALARM", "AVAILABLE",):
-                log.info("time-range cannot be used with sub-component %s" % (self.filter_name,))
+                log.info("time-range cannot be used with sub-component {name}", name=self.filter_name)
                 return False
         else:
             # Disallow all standard iCal components anywhere else
             if (self.filter_name in ("VCALENDAR", "VTIMEZONE", "VEVENT", "VTODO", "VJOURNAL", "VFREEBUSY", "VALARM", "STANDARD", "DAYLIGHT", "AVAILABLE")) or timerange:
-                log.info("comp-filter wrong standard component type: %s" % (self.filter_name,))
+                log.info("comp-filter wrong standard component type: {name}", name=self.filter_name)
                 return False
 
         # Test each property
@@ -548,7 +548,7 @@
 
         # time-range only on COMPLETED, CREATED, DTSTAMP, LAST-MODIFIED
         if timerange and self.filter_name.upper() not in ("COMPLETED", "CREATED", "DTSTAMP", "LAST-MODIFIED"):
-            log.info("time-range cannot be used with property %s" % (self.filter_name,))
+            log.info("time-range cannot be used with property {name}", name=self.filter_name)
             return False
 
         # Test the time-range
@@ -832,16 +832,16 @@
         """
 
         if self.start is not None and self.start.isDateOnly():
-            log.info("start attribute in <time-range> is not a date-time: %s" % (self.start,))
+            log.info("start attribute in <time-range> is not a date-time: {start}", start=self.start)
             return False
         if self.end is not None and self.end.isDateOnly():
-            log.info("end attribute in <time-range> is not a date-time: %s" % (self.end,))
+            log.info("end attribute in <time-range> is not a date-time: {end}", end=self.end)
             return False
         if self.start is not None and not self.start.utc():
-            log.info("start attribute in <time-range> is not UTC: %s" % (self.start,))
+            log.info("start attribute in <time-range> is not UTC: {start}", start=self.start)
             return False
         if self.end is not None and not self.end.utc():
-            log.info("end attribute in <time-range> is not UTC: %s" % (self.end,))
+            log.info("end attribute in <time-range> is not UTC: {end}", end=self.end)
             return False
 
         # No other tests

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/caldav/delivery.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/caldav/delivery.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/caldav/delivery.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -14,10 +14,11 @@
 # limitations under the License.
 ##
 
-from twext.python.log import Logger, LogLevel
+from twext.python.log import Logger
 from txweb2.dav.http import ErrorResponse
 
 from twisted.internet.defer import inlineCallbacks, returnValue, succeed
+from twisted.logger import LogLevel
 from twisted.python.failure import Failure
 from txweb2 import responsecode
 from txweb2.http import HTTPError

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/freebusy.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/freebusy.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/freebusy.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -462,7 +462,7 @@
                     # between our initial index query and getting here. For now we will ignore this error, but in
                     # the longer term we need to implement some form of locking, perhaps.
                     if calendar is None:
-                        log.error("Calendar %s is missing from calendar collection %r" % (name, calresource))
+                        log.error("Calendar {name} is missing from calendar collection {coll!r}", name=name, coll=calresource)
                         continue
 
                     if self.accountingItems is not None:

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/icaldiff.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/icaldiff.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/icaldiff.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -475,7 +475,7 @@
 
         result = len(propdiff) == 0
         if not result:
-            log.debug("VCALENDAR properties differ: %s" % (propdiff,))
+            log.debug("VCALENDAR properties differ: {diff}", diff=propdiff)
         return result
 
 
@@ -497,7 +497,7 @@
 
         # Possible case where one ATTENDEE prop is missing - this happens with a "fake" master sometimes
         if serverAttendee is None or clientAttendee is None:
-            log.error("ATTENDEE for user making an attendee change is missing: %s" % (self.attendee,))
+            log.error("ATTENDEE for user making an attendee change is missing: {attendee}", attendee=self.attendee)
             return False, False
 
         if serverAttendee.parameterValue("PARTSTAT", "NEEDS-ACTION") != clientAttendee.parameterValue("PARTSTAT", "NEEDS-ACTION"):
@@ -642,13 +642,13 @@
             invalidChanges = []
             propNames = ("DTSTART", "DTEND", "DUE", "RRULE", "RDATE", "EXDATE")
             invalidChanges = [propName for ctr, propName in enumerate(propNames) if serverProps[ctr] != clientProps[ctr]]
-            log.debug("Critical properties do not match: %s" % (", ".join(invalidChanges),))
+            log.debug("Critical properties do not match: {props}", props=", ".join(invalidChanges))
             return False
         elif serverProps[-1] != clientProps[-1]:
             # Bad if EXDATEs have been removed
             missing = serverProps[-1] - clientProps[-1]
             if missing:
-                log.debug("EXDATEs missing: %s" % (", ".join([exdate.getText() for exdate in missing]),))
+                log.debug("EXDATEs missing: {exdates}", exdates=", ".join([exdate.getText() for exdate in missing]))
                 return False
             declines.extend(clientProps[-1] - serverProps[-1])
             return True
@@ -744,7 +744,7 @@
 
         # Possible case where ATTENDEE prop is missing - this happens with a "fake" master sometimes
         if attendee is None:
-            log.error("ATTENDEE for user making an attendee change is missing: %s" % (self.attendee,))
+            log.error("ATTENDEE for user making an attendee change is missing: {attendee}", attendee=self.attendee)
             return False
 
         partstatChanged = attendee.parameterValue("PARTSTAT", "NEEDS-ACTION") != "DECLINED"
@@ -939,7 +939,7 @@
         assert isinstance(comp1, Component) and isinstance(comp2, Component)
 
         if comp1.name() != comp2.name():
-            log.debug("Component names are different: '%s' and '%s'" % (comp1.name(), comp2.name()))
+            log.debug("Component names are different: '{comp1}' and '{comp2}'", comp1=comp1.name(), comp2=comp2.name())
             return
 
         # Duplicate then normalize for comparison
@@ -1031,4 +1031,4 @@
             loggedUID = "Unknown"
         loggedName = accounting.emitAccounting("Implicit Errors", loggedUID, logstr)
         if loggedName:
-            log.error("Generating Implicit Error accounting at path: %s" % (loggedName,))
+            log.error("Generating Implicit Error accounting at path: {name}", name=loggedName)

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/imip/inbound.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/imip/inbound.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/imip/inbound.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -21,12 +21,13 @@
 from twext.enterprise.dal.record import fromTable
 from twext.enterprise.jobs.workitem import WorkItem, RegeneratingWorkItem
 from twext.internet.gaiendpoint import GAIEndpoint
-from twext.python.log import Logger, LegacyLogger
+from twext.python.log import Logger
 
 from twisted.application import service
 from twisted.internet import protocol, defer, ssl
 from twisted.internet.defer import inlineCallbacks, returnValue, succeed
 from twisted.mail import pop3client, imap4
+from twisted.python.log import LogPublisher
 
 from twistedcaldav.config import config
 from twistedcaldav.ical import Property, Component
@@ -48,7 +49,7 @@
 # Monkey patch imap4.log so it doesn't emit useless logging,
 # specifically, "Unhandled unsolicited response" nonsense.
 #
-class IMAPLogger(LegacyLogger):
+class IMAPLogger(LogPublisher):
     def msg(self, *message, **kwargs):
         if message and message[0].startswith("Unhandled unsolicited response:"):
             return
@@ -182,7 +183,7 @@
 def scheduleNextMailPoll(store, seconds):
     txn = store.newTransaction(label="scheduleNextMailPoll")
     notBefore = datetime.datetime.utcnow() + datetime.timedelta(seconds=seconds)
-    log.debug("Scheduling next mail poll: %s" % (notBefore,))
+    log.debug("Scheduling next mail poll: {time}", time=notBefore)
     yield txn.enqueue(IMIPPollingWork, notBefore=notBefore)
     yield txn.commit()
 
@@ -287,7 +288,7 @@
         organizer = calendar.getOrganizer()
         token = self._extractToken(organizer)
         if not token:
-            log.error("Mail gateway can't find token in DSN %s" % (msgId,))
+            log.error("Mail gateway can't find token in DSN {msgid}", msgid=msgId)
             return
 
         txn = self.store.newTransaction(label="MailReceiver.processDSN")
@@ -299,8 +300,9 @@
         except:
             # This isn't a token we recognize
             log.error(
-                "Mail gateway found a token (%s) but didn't recognize it in message %s"
-                % (token, msgId))
+                "Mail gateway found a token ({token}) but didn't recognize it in message {msgid}",
+                token=token, msgid=msgId,
+            )
             returnValue(self.UNKNOWN_TOKEN)
 
         calendar.removeAllButOneAttendee(record.attendee)
@@ -316,7 +318,7 @@
             # TODO: what to do in this case?
             pass
 
-        log.warn("Mail gateway processing DSN %s" % (msgId,))
+        log.warn("Mail gateway processing DSN {msgid}", msgid=msgId)
         txn = self.store.newTransaction(label="MailReceiver.processDSN")
         yield txn.enqueue(
             IMIPReplyWork,
@@ -338,12 +340,14 @@
             if not token:
                 log.error(
                     "Mail gateway didn't find a token in message "
-                    "%s (%s)" % (msg['Message-ID'], msg['To']))
+                    "{msgid} ({to})", msgid=msg['Message-ID'], to=msg['To']
+                )
                 returnValue(self.NO_TOKEN)
         else:
             log.error(
-                "Mail gateway couldn't parse To: address (%s) in "
-                "message %s" % (msg['To'], msg['Message-ID']))
+                "Mail gateway couldn't parse To: address ({to}) in "
+                "message {msgid}", to=msg['To'], msgid=msg['Message-ID']
+            )
             returnValue(self.MALFORMED_TO_ADDRESS)
 
         txn = self.store.newTransaction(label="MailReceiver.processReply")
@@ -355,9 +359,10 @@
         except:
             # This isn't a token we recognize
             log.info(
-                "Mail gateway found a token (%s) but didn't "
-                "recognize it in message %s"
-                % (token, msg['Message-ID']))
+                "Mail gateway found a token ({token}) but didn't "
+                "recognize it in message {msgid}",
+                token=token, msgid=msg['Message-ID']
+            )
             # Any email with an unknown token which was sent over 72 hours ago
             # is deleted.  If we can't parse the date we leave it in the inbox.
             dateString = msg.get("Date")
@@ -366,9 +371,8 @@
                     dateSent = dateutil.parser.parse(dateString)
                 except Exception, e:
                     log.info(
-                        "Could not parse date in IMIP email '{}' ({})".format(
-                            dateString, e
-                        )
+                        "Could not parse date in IMIP email '{date}' ({ex})",
+                        date=dateString, ex=e,
                     )
                     returnValue(self.UNKNOWN_TOKEN)
                 now = datetime.datetime.now(dateutil.tz.tzutc())
@@ -384,7 +388,8 @@
             # No icalendar attachment
             log.warn(
                 "Mail gateway didn't find an icalendar attachment "
-                "in message %s" % (msg['Message-ID'],))
+                "in message {msgid}", msgid=msg['Message-ID']
+            )
 
             toAddr = None
             fromAddr = record.attendee[7:]
@@ -401,8 +406,7 @@
 
             if toAddr is None:
                 log.error(
-                    "Don't have an email address for the organizer; "
-                    "ignoring reply.")
+                    "Don't have an email address for the organizer; ignoring reply.")
                 returnValue(self.NO_ORGANIZER_ADDRESS)
 
             settings = config.Scheduling["iMIP"]["Sending"]
@@ -433,8 +437,9 @@
         if organizerProperty is None:
             # ORGANIZER is required per rfc2446 section 3.2.3
             log.warn(
-                "Mail gateway didn't find an ORGANIZER in REPLY %s"
-                % (msg['Message-ID'],))
+                "Mail gateway didn't find an ORGANIZER in REPLY {msgid}",
+                msgid=msg['Message-ID']
+            )
             event.addProperty(Property("ORGANIZER", record.organizer))
         else:
             organizerProperty.setValue(record.organizer)
@@ -498,8 +503,9 @@
                 else:
                     # It's a DSN without enough to go on
                     log.error(
-                        "Mail gateway can't process DSN %s"
-                        % (msg['Message-ID'],))
+                        "Mail gateway can't process DSN {msgid}",
+                        msgid=msg['Message-ID']
+                    )
                     return succeed(self.INCOMPLETE_DSN)
 
             log.info(
@@ -551,8 +557,9 @@
 
     def cbLoginFailed(self, reason):
         self.log.error(
-            "POP3 login failed for %s" %
-            (self.factory.settings["Username"],))
+            "POP3 login failed for {user}",
+            user=self.factory.settings["Username"]
+        )
         return self.quit()
 
 
@@ -571,17 +578,17 @@
 
     @inlineCallbacks
     def cbDownloaded(self, lines, id):
-        self.log.debug("POP downloaded message %d" % (id,))
+        self.log.debug("POP downloaded message {msgid}", msgid=id)
         actionTaken = (yield self.factory.handleMessage("\r\n".join(lines)))
 
         if self.factory.deleteAllMail:
             # Delete all mail we see
-            self.log.debug("POP deleting message %d" % (id,))
+            self.log.debug("POP deleting message {msgid}", msgid=id)
             self.delete(id)
         else:
             # Delete only mail we've processed
             if actionTaken == MailReceiver.INJECTION_SUBMITTED:
-                self.log.debug("POP deleting message %d" % (id,))
+                self.log.debug("POP deleting message {msgid}", msgid=id)
                 self.delete(id)
 
 
@@ -695,8 +702,9 @@
             nextUID = self.messageUIDs.pop(0)
             messageListToFetch = imap4.MessageSet(nextUID)
             self.log.debug(
-                "Downloading message %d of %d (%s)" %
-                (self.messageCount - len(self.messageUIDs), self.messageCount, nextUID))
+                "Downloading message {count} of {total} ({next})",
+                count=self.messageCount - len(self.messageUIDs), total=self.messageCount, next=nextUID
+            )
             self.fetchMessage(messageListToFetch, True).addCallback(
                 self.cbGotMessage, messageListToFetch).addErrback(
                     self.ebLogError)

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/imip/outbound.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/imip/outbound.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/imip/outbound.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -380,15 +380,19 @@
 
             if record is None:
                 record = (yield txn.imipCreateToken(originator, toAddr.lower(), icaluid))
-                self.log.debug("Mail gateway created token %s for %s "
-                               "(originator), %s (recipient) and %s (icaluid)"
-                               % (record.token, originator, toAddr, icaluid))
+                self.log.debug(
+                    "Mail gateway created token {token} for {orig} "
+                    "(originator), {recip} (recipient) and {uid} (icaluid)",
+                    token=record.token, orig=originator, recip=toAddr, uid=icaluid,
+                )
                 inviteState = "new"
 
             else:
-                self.log.debug("Mail gateway reusing token %s for %s "
-                               "(originator), %s (recipient) and %s (icaluid)"
-                               % (record.token, originator, toAddr, icaluid))
+                self.log.debug(
+                    "Mail gateway reusing token {token} for {orig} "
+                    "(originator), {recip} (recipient) and {uid} (icaluid)",
+                    token=record.token, orig=originator, recip=toAddr, uid=icaluid,
+                )
                 inviteState = "update"
             token = record.token
 
@@ -489,7 +493,7 @@
                 fromAddr, toAddr, msgId, message))
             returnValue(success)
         except Exception, e:
-            self.log.error("Failed to send IMIP message (%s)" % (str(e),))
+            self.log.error("Failed to send IMIP message ({ex})", ex=str(e))
             returnValue(False)
 
 
@@ -597,8 +601,7 @@
         # always be present (i.e., timezones-by-reference is not allowed in iMIP).
         calendarText = calendar.getTextWithTimezones(includeTimezones=True)
 
-        self.log.debug("Mail gateway sending calendar body: %s"
-                       % (calendarText,))
+        self.log.debug("Mail gateway sending calendar body: {body}", body=calendarText)
         msgIcal = MIMEText(calendarText, "calendar", "UTF-8")
         method = calendar.propertyValue("METHOD").lower()
         msgIcal.set_param("method", method)

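One practical effect of the keyword style adopted above is that formatting can be skipped entirely when the message is not going to be emitted — useful for debug lines like the calendar-body dump, which can be large. A hedged sketch (class and attribute names here are illustrative, not Twisted APIs):

```python
class LazyLogger:
    """Illustrative only: formats a message only if debug output is enabled."""

    def __init__(self, debug_enabled):
        self.debug_enabled = debug_enabled
        self.rendered = []

    def debug(self, fmt, **kwargs):
        # When disabled, fmt.format() never runs, so large values
        # (e.g. a full iCalendar body) are never interpolated.
        if self.debug_enabled:
            self.rendered.append(fmt.format(**kwargs))


quiet = LazyLogger(debug_enabled=False)
quiet.debug("Mail gateway sending calendar body: {body}", body="BEGIN:VCALENDAR...")
print(len(quiet.rendered))  # 0
```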
Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/implicit.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/implicit.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/implicit.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -1631,7 +1631,7 @@
                             break
 
                     if found_old:
-                        log.error("valid-attendee-change: Cannot remove ATTENDEE: UID:%s" % (self.uid,))
+                        log.error("valid-attendee-change: Cannot remove ATTENDEE: UID: {uid}", uid=self.uid)
                         raise HTTPError(ErrorResponse(
                             responsecode.FORBIDDEN,
                             (caldav_namespace, "valid-attendee-change"),

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/ischedule/dkim.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/ischedule/dkim.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/ischedule/dkim.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -423,8 +423,8 @@
         dkim_header = "; ".join(["%s=%s" % item for item in dkim_tags])
         self.headers.addRawHeader(DKIM_SIGNATURE, dkim_header)
 
-        log.debug("DKIM: Generated header: DKIM-Signature:%s" % (dkim_header,))
-        log.debug("DKIM: Signed headers:\n%s" % (headers,))
+        log.debug("DKIM: Generated header: DKIM-Signature:{hdr}", hdr=dkim_header)
+        log.debug("DKIM: Signed headers:\n{hdrs}", hdrs=headers)
 
         returnValue(signature)
 
@@ -556,7 +556,7 @@
 
         # Extract the set of canonicalized headers being signed
         headers = self.extractSignedHeaders()
-        log.debug("DKIM: Signed headers:\n%s" % (headers,))
+        log.debug("DKIM: Signed headers:\n{hdrs}", hdrs=headers)
 
         # Locate the public key
         pubkey = (yield self.locatePublicKey())
@@ -577,7 +577,7 @@
 Public key used:
 %s
 """ % (self.headers.getRawHeaders(DKIM_SIGNATURE)[0], headers, pubkey._original_data,)
-            log.debug("DKIM: %s:%s" % (msg, _debug_msg,))
+            log.debug("DKIM: {msg}:{debug}", msg=msg, debug=_debug_msg)
             if self._debug:
                 msg = "%s:%s" % (msg, _debug_msg,)
             raise DKIMVerificationError(msg)
@@ -595,7 +595,7 @@
 Base64 encoded body:
 %s
 """ % (self.headers.getRawHeaders(DKIM_SIGNATURE), self.hash_method.__name__, base64.b64encode(body),)
-            log.debug("DKIM: %s:%s" % (msg, _debug_msg,))
+            log.debug("DKIM: {msg}:{debug}", msg=msg, debug=_debug_msg)
             if self._debug:
                 msg = "%s:%s" % (msg, _debug_msg,)
             raise DKIMVerificationError(msg)
@@ -612,16 +612,16 @@
         dkim = self.headers.getRawHeaders(DKIM_SIGNATURE)
         if dkim is None:
             msg = "No DKIM-Signature header present in the request"
-            log.debug("DKIM: " + msg)
+            log.debug("DKIM: {msg}", msg=msg)
             raise DKIMMissingError(msg)
         if len(dkim) != 1:
             # TODO: This might need to be changed if we ever support forwarding of iSchedule messages - the forwarder
             # might also sign the message and add its own header
             msg = "Only one DKIM-Signature allowed in the request"
-            log.debug("DKIM: " + msg)
+            log.debug("DKIM: {msg}", msg=msg)
             raise DKIMVerificationError(msg)
         dkim = dkim[0]
-        log.debug("DKIM: Found header: DKIM-Signature:%s" % (dkim,))
+        log.debug("DKIM: Found header: DKIM-Signature:{hdr}", hdr=dkim)
 
         # Extract tags from the header
         self.dkim_tags = DKIMUtils.extractTags(dkim)
@@ -631,7 +631,7 @@
         for tag in required_tags:
             if tag not in self.dkim_tags:
                 msg = "Missing DKIM-Signature tag: %s" % (tag,)
-                log.debug("DKIM: " + msg)
+                log.debug("DKIM: {msg}", msg=msg)
                 raise DKIMVerificationError(msg)
 
         check_values = {
@@ -652,7 +652,7 @@
             for item in test:
                 if item not in values:
                     msg = "Tag: %s has incorrect value: %s" % (tag, self.dkim_tags[tag],)
-                    log.debug("DKIM: " + msg)
+                    log.debug("DKIM: {msg}", msg=msg)
                     raise DKIMVerificationError(msg)
 
         # Check time stamp
@@ -660,7 +660,7 @@
             diff_time = self.time - int(self.dkim_tags["t"])
             if diff_time < -360:
                 msg = "Signature time too far in the future: %d seconds" % (diff_time,)
-                log.debug("DKIM: " + msg)
+                log.debug("DKIM: {msg}", msg=msg)
                 raise DKIMVerificationError(msg)
 
         # Check expiration
@@ -668,7 +668,7 @@
             diff_time = self.time - int(self.dkim_tags["x"])
             if diff_time > 0:
                 msg = "Signature expired: %d seconds" % (diff_time,)
-                log.debug("DKIM: " + msg)
+                log.debug("DKIM: {msg}", msg=msg)
                 raise DKIMVerificationError(msg)
 
         # Base64 encoded tags might include WSP which we need to ignore
@@ -797,7 +797,7 @@
 
             return self._makeKey(pkey)
 
-        log.debug("DKIM: No valid public key: %s %s" % (self._getSelectorKey(), pubkeys,))
+        log.debug("DKIM: No valid public key: {sel} {keys}", sel=self._getSelectorKey(), keys=pubkeys)
         return None
 
 
@@ -818,7 +818,7 @@
             key._original_data = key_data
             return key
         except:
-            log.debug("DKIM: Unable to make public key:\n%s" % (key_data,))
+            log.debug("DKIM: Unable to make public key:\n{key}", key=key_data)
             return None
 
 
@@ -845,9 +845,9 @@
         """
         Do the key lookup using the actual lookup method.
         """
-        log.debug("DKIM: TXT lookup: %s" % (self._getSelectorKey(),))
+        log.debug("DKIM: TXT lookup: {key}", key=self._getSelectorKey())
         data = (yield lookupDataViaTXT(self._getSelectorKey()))
-        log.debug("DKIM: TXT lookup results: %s\n%s" % (self._getSelectorKey(), "\n".join(data),))
+        log.debug("DKIM: TXT lookup results: {key}\n{data}", key=self._getSelectorKey(), data="\n".join(data))
         returnValue(tuple([DKIMUtils.extractTags(line) for line in data]))
 
 
@@ -875,7 +875,7 @@
         # First we do an SRV lookup for _domainkey to get the public key server host/port
         result = (yield lookupServerViaSRV(self.dkim_tags["d"], service="_domainkey_lookup"))
         if result is None:
-            log.debug("DKIM: SRV _domainkey failed on: %s trying domain directly" % (self.dkim_tags["d"],))
+            log.debug("DKIM: SRV _domainkey failed on: {tag} trying domain directly", tag=self.dkim_tags["d"])
             host = self.dkim_tags["d"]
             port = ""
             scheme = "https"
@@ -899,20 +899,20 @@
         # First we do an SRV lookup for _domainkey to get the public key server URI
         uri = (yield self._getURI())
 
-        log.debug("DKIM: HTTP/.well-known lookup: %s" % (uri,))
+        log.debug("DKIM: HTTP/.well-known lookup: {uri}", uri=uri)
         response = (yield getURL(uri))
         if response is None or response.code / 100 != 2:
-            log.debug("DKIM: Failed http/well-known lookup: %s %s" % (uri, response,))
+            log.debug("DKIM: Failed http/well-known lookup: {uri} {resp}", uri=uri, resp=response)
             returnValue(())
 
         ct = response.headers.getRawHeaders("content-type", ("bogus/type",))[0]
         ct = ct.split(";", 1)
         ct = ct[0].strip()
         if ct not in ("text/plain",):
-            log.debug("DKIM: Failed http/well-known lookup: wrong content-type returned %s %s" % (uri, ct,))
+            log.debug("DKIM: Failed http/well-known lookup: wrong content-type returned {uri} {ct}", uri=uri, ct=ct)
             returnValue(())
 
-        log.debug("DKIM: HTTP/.well-known lookup results: %s\n%s" % (uri, response.data,))
+        log.debug("DKIM: HTTP/.well-known lookup results: {uri}\n{resp}", uri=uri, resp=response.data)
         returnValue(tuple([DKIMUtils.extractTags(line) for line in response.data.splitlines()]))
 
 
@@ -941,19 +941,19 @@
             return succeed(())
         keyfile = os.path.join(PublicKeyLookup_PrivateExchange.directory, self._getSelectorKey())
         if not os.path.exists(keyfile):
-            log.debug("DKIM: Failed private-exchange lookup: no path %s" % (keyfile,))
+            log.debug("DKIM: Failed private-exchange lookup: no path {path}", path=keyfile)
             return succeed(())
 
         # Now read the data
-        log.debug("DKIM: Private exchange lookup: %s" % (keyfile,))
+        log.debug("DKIM: Private exchange lookup: {path}", path=keyfile)
         try:
             with open(keyfile) as f:
                 keys = f.read()
         except IOError, e:
-            log.debug("DKIM: Failed private-exchange lookup: could not read %s %s" % (keyfile, e,))
+            log.debug("DKIM: Failed private-exchange lookup: could not read {path} {ex}", path=keyfile, ex=e)
             return succeed(())
 
-        log.debug("DKIM: Private exchange lookup results: %s\n%s" % (keyfile, keys))
+        log.debug("DKIM: Private exchange lookup results: {path}\n{keys}", path=keyfile, keys=keys)
         return succeed(tuple([DKIMUtils.extractTags(line) for line in keys.splitlines()]))
 
 
@@ -985,14 +985,14 @@
             with open(pubkeyfile) as f:
                 key_data = f.read()
         except IOError, e:
-            log.error("DKIM: Unable to open the public key file: %s because of %s" % (pubkeyfile, e,))
+            log.error("DKIM: Unable to open the public key file: {path} because of {ex}", path=pubkeyfile, ex=e)
             raise
 
         # Make sure we can parse a valid public key
         try:
             RSA.importKey(key_data)
         except:
-            log.error("DKIM: Invalid public key file: %s" % (pubkeyfile,))
+            log.error("DKIM: Invalid public key file: {path}", path=pubkeyfile)
             raise
 
         # Make the TXT record

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/itip.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/itip.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/itip.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -443,7 +443,7 @@
                 if match_component is not None:
                     itip_message.addComponent(match_component)
                 else:
-                    log.error("Ignoring instance: %s missing in iTIP REPLY for: %s" % (rid, itip_message.resourceUID()))
+                    log.error("Ignoring instance: {rid} missing in iTIP REPLY for: {uid}", rid=rid, uid=itip_message.resourceUID())
                     continue
 
         # Now do all overridden ones (sort by RECURRENCE-ID)
@@ -470,7 +470,7 @@
                 if match_component is not None:
                     calendar.addComponent(match_component)
                 else:
-                    log.error("Ignoring instance: %s in iTIP REPLY for: %s" % (rid, itip_message.resourceUID()))
+                    log.error("Ignoring instance: {rid} in iTIP REPLY for: {uid}", rid=rid, uid=itip_message.resourceUID())
                     continue
 
             attendee, partstat, private_comment = iTipProcessing.updateAttendeeDataFromReply(itip_component, match_component)
@@ -486,7 +486,7 @@
         elif len_attendees == 1:
             return True, (attendees.pop(), rids)
         else:
-            log.error("ATTENDEE property in a REPLY must be the same in all components\n%s" % (str(itip_message),))
+            log.error("ATTENDEE property in a REPLY must be the same in all components\n{msg}", msg=str(itip_message))
             return False, None
 
 
@@ -521,7 +521,7 @@
         # Get attendee in reply_component - there MUST be only one
         attendees = tuple(reply_component.getRecipientProperties())
         if len(attendees) != 1:
-            log.error("There must be one and only one ATTENDEE property in a REPLY\n%s" % (str(reply_component),))
+            log.error("There must be one and only one ATTENDEE property in a REPLY\n{msg}", msg=str(reply_component))
             return None, False, False
 
         attendee = attendees[0]

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/processing.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/processing.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/processing.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -121,21 +121,21 @@
                 # to match that of the organizer assuming we have the organizer's full data available, then
                 # we try the processing operation again.
                 log.failure("{processor}.doImplicitAttendee()", processor=self)
-                log.error("ImplicitProcessing - originator '%s' to recipient '%s' with UID: '%s' - exception raised will try to fix: %s" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid, e))
+                log.error("ImplicitProcessing - originator '{orig}' to recipient '{recip}' with UID: '{uid}' - exception raised will try to fix: {ex}", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid, ex=e)
                 result = (yield self.doImplicitAttendeeEventFix(e))
                 if result:
-                    log.error("ImplicitProcessing - originator '%s' to recipient '%s' with UID: '%s' - restored organizer's copy" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+                    log.error("ImplicitProcessing - originator '{orig}' to recipient '{recip}' with UID: '{uid}' - restored organizer's copy", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
                     try:
                         result = (yield self.doImplicitAttendee())
                     except Exception, e:
                         log.failure("{processor}.doImplicitAttendee()", processor=self)
-                        log.error("ImplicitProcessing - originator '%s' to recipient '%s' with UID: '%s' - exception raised after fix: %s" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid, e))
+                        log.error("ImplicitProcessing - originator '{orig}' to recipient '{recip}' with UID: '{uid}' - exception raised after fix: {ex}", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid, ex=e)
                         raise ImplicitProcessorException("5.1;Service unavailable")
                 else:
-                    log.error("ImplicitProcessing - originator '%s' to recipient '%s' with UID: '%s' - could not fix" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+                    log.error("ImplicitProcessing - originator '{orig}' to recipient '{recip}' with UID: '{uid}' - could not fix", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
                     raise ImplicitProcessorException("5.1;Service unavailable")
         else:
-            log.error("METHOD:%s not supported for implicit scheduling." % (self.method,))
+            log.error("METHOD:{method} not supported for implicit scheduling.", method=self.method)
             raise ImplicitProcessorException("3.14;Unsupported capability")
 
         returnValue(result)
@@ -180,7 +180,7 @@
         # Locate the organizer's copy of the event.
         yield self.getRecipientsCopy()
         if self.recipient_calendar is None or self.recipient_in_trash:
-            log.debug("ImplicitProcessing - originator '%s' to recipient '%s' ignoring UID: '%s' - organizer has no copy" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+            log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' ignoring UID: '{uid}' - organizer has no copy", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
             returnValue((True, True, False, None,))
 
         # Handle new items differently than existing ones.
@@ -209,7 +209,7 @@
             self.recipient_calendar.noInstanceIndexing = True
 
             # Update the organizer's copy of the event
-            log.debug("ImplicitProcessing - originator '%s' to recipient '%s' processing METHOD:REPLY, UID: '%s' - updating event" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+            log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' processing METHOD:REPLY, UID: '{uid}' - updating event", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
             self.organizer_calendar_resource = (yield self.writeCalendarResource(None, self.recipient_calendar_resource, self.recipient_calendar))
             self.organizer_uid = self.organizer_calendar_resource.parentCollection().ownerHome().uid()
             self.organizer_calendar_resource_id = self.organizer_calendar_resource.id()
@@ -317,7 +317,7 @@
         @param only_attendees: list of attendees to refresh (C{None} - refresh all)
         @type only_attendees: C{tuple}
         """
-        log.debug("ImplicitProcessing - refreshing UID: '%s', Attendees: %s" % (self.uid, ", ".join(only_attendees) if only_attendees else "all"))
+        log.debug("ImplicitProcessing - refreshing UID: '{uid}', Attendees: {att}", uid=self.uid, att=", ".join(only_attendees) if only_attendees else "all")
         from txdav.caldav.datastore.scheduling.implicit import ImplicitScheduler
         scheduler = ImplicitScheduler()
         yield scheduler.refreshAllAttendeesExceptSome(
@@ -341,7 +341,7 @@
         # If we get a CANCEL and we don't have a matching resource already stored, simply
         # ignore the CANCEL.
         if self.new_resource and self.method == "CANCEL":
-            log.debug("ImplicitProcessing - originator '%s' to recipient '%s' ignoring METHOD:CANCEL, UID: '%s' - attendee has no copy" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+            log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' ignoring METHOD:CANCEL, UID: '{uid}' - attendee has no copy", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
             result = (True, True, False, None)
         else:
             result = (yield self.doImplicitAttendeeUpdate())
@@ -366,7 +366,7 @@
                 # Additional check - if the existing organizer is missing and the originator
                 # is local to the server - then allow the change
                 if not (existing_organizer == "" and self.originator.hosted()):
-                    log.debug("ImplicitProcessing - originator '%s' to recipient '%s' ignoring UID: '%s' - organizer has no copy" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+                    log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' ignoring UID: '{uid}' - organizer has no copy", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
                     # If the recipient copy is in the trash, just remove it
                     if self.recipient_in_trash:
                         yield self.deleteCalendarResource(self.recipient_calendar_resource)
@@ -400,7 +400,7 @@
         if self.message.hasProperty("X-CALENDARSERVER-SPLIT-OLDER-UID"):
             if config.Scheduling.Options.Splitting.Enabled:
                 # Tell the existing resource to split
-                log.debug("ImplicitProcessing - originator '%s' to recipient '%s' splitting UID: '%s'" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+                log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' splitting UID: '{uid}'", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
                 split = (yield self.doImplicitAttendeeSplit())
                 if split:
                     returnValue((True, False, False, None,))
@@ -410,7 +410,7 @@
 
         elif self.message.hasProperty("X-CALENDARSERVER-SPLIT-NEWER-UID"):
             if config.Scheduling.Options.Splitting.Enabled:
-                log.debug("ImplicitProcessing - originator '%s' to recipient '%s' ignoring UID: '%s' - split already done" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+                log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' ignoring UID: '{uid}' - split already done", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
                 returnValue((True, False, False, None,))
             else:
                 self.message.removeProperty("X-CALENDARSERVER-SPLIT-NEWER-UID")
@@ -479,16 +479,16 @@
             # Check if the incoming data has the recipient declined in all instances. In that case we will not create
             # a new resource as chances are the recipient previously deleted the resource and we want to keep it deleted.
             if all_declined:
-                log.debug("ImplicitProcessing - originator '%s' to recipient '%s' processing METHOD:REQUEST, UID: '%s' - ignoring all declined" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+                log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' processing METHOD:REQUEST, UID: '{uid}' - ignoring all declined", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
                 returnValue((True, False, False, None,))
 
             # Check for default calendar
             default = (yield self.recipient.inbox.viewerHome().defaultCalendar(self.message.mainType()))
             if default is None:
-                log.error("No default calendar for recipient: '%s'." % (self.recipient.cuaddr,))
+                log.error("No default calendar for recipient: '{recip}'.", recip=self.recipient.cuaddr)
                 raise ImplicitProcessorException(iTIPRequestStatus.NO_USER_SUPPORT)
 
-            log.debug("ImplicitProcessing - originator '%s' to recipient '%s' processing METHOD:REQUEST, UID: '%s' - new processed" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+            log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' processing METHOD:REQUEST, UID: '{uid}' - new processed", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
             new_calendar = iTipProcessing.processNewRequest(self.message, self.recipient.cuaddr, creating=True)
 
             # Handle auto-reply behavior
@@ -516,7 +516,7 @@
 
             if send_reply:
                 # Track outstanding auto-reply processing
-                log.debug("ImplicitProcessing - recipient '%s' processing UID: '%s' - auto-reply queued" % (self.recipient.cuaddr, self.uid,))
+                log.debug("ImplicitProcessing - recipient '{recip}' processing UID: '{uid}' - auto-reply queued", recip=self.recipient.cuaddr, uid=self.uid)
                 yield ScheduleAutoReplyWork.autoReply(self.txn, new_resource, partstat)
 
             # Build the schedule-changes XML element
@@ -560,13 +560,13 @@
                     new_calendar.noInstanceIndexing = not send_reply
 
                 # Update the attendee's copy of the event
-                log.debug("ImplicitProcessing - originator '%s' to recipient '%s' processing METHOD:REQUEST, UID: '%s' - updating event" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+                log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' processing METHOD:REQUEST, UID: '{uid}' - updating event", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
 
                 new_resource = (yield self.writeCalendarResource(None, self.recipient_calendar_resource, new_calendar))
 
                 if send_reply:
                     # Track outstanding auto-reply processing
-                    log.debug("ImplicitProcessing - recipient '%s' processing UID: '%s' - auto-reply queued" % (self.recipient.cuaddr, self.uid,))
+                    log.debug("ImplicitProcessing - recipient '{recip}' processing UID: '{uid}' - auto-reply queued", recip=self.recipient.cuaddr, uid=self.uid)
                     yield ScheduleAutoReplyWork.autoReply(self.txn, new_resource, partstat)
 
                 # Build the schedule-changes XML element
@@ -604,7 +604,7 @@
 
             else:
                 # Request needs to be ignored
-                log.debug("ImplicitProcessing - originator '%s' to recipient '%s' processing METHOD:REQUEST, UID: '%s' - ignoring" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+                log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' processing METHOD:REQUEST, UID: '{uid}' - ignoring", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
                 result = (True, False, False, None,)
 
         returnValue(result)
@@ -627,7 +627,7 @@
         # the cancelled meeting.
         if self.recipient_in_trash:
             # Note we should never get here as this test is
-            log.debug("ImplicitProcessing - originator '%s' to recipient '%s' ignoring METHOD:CANCEL, UID: '%s' - attendee copy in trash" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+            log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' ignoring METHOD:CANCEL, UID: '{uid}' - attendee copy in trash", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
             yield self.deleteCalendarResource(self.recipient_calendar_resource)
             result = (True, True, False, None)
         else:
@@ -658,7 +658,7 @@
                 if delete_original:
 
                     # Delete the attendee's copy of the event
-                    log.debug("ImplicitProcessing - originator '%s' to recipient '%s' processing METHOD:CANCEL, UID: '%s' - deleting entire event" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+                    log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' processing METHOD:CANCEL, UID: '{uid}' - deleting entire event", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
                     yield self.deleteCalendarResource(self.recipient_calendar_resource)
 
                     # Build the schedule-changes XML element
@@ -673,7 +673,7 @@
                 else:
 
                     # Update the attendee's copy of the event
-                    log.debug("ImplicitProcessing - originator '%s' to recipient '%s' processing METHOD:CANCEL, UID: '%s' - updating event" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+                    log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' processing METHOD:CANCEL, UID: '{uid}' - updating event", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
                     yield self.writeCalendarResource(None, self.recipient_calendar_resource, self.recipient_calendar)
 
                     # Build the schedule-changes XML element
@@ -689,7 +689,7 @@
                     )
                     result = (True, autoprocessed, store_inbox, changes)
             else:
-                log.debug("ImplicitProcessing - originator '%s' to recipient '%s' processing METHOD:CANCEL, UID: '%s' - ignoring" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+                log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' processing METHOD:CANCEL, UID: '{uid}' - ignoring", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
                 result = (True, False, False, None)
 
         returnValue(result)
@@ -703,7 +703,7 @@
         """
         # If there is no existing copy, then we must fail
         if self.new_resource:
-            log.debug("ImplicitProcessing - originator '%s' to recipient '%s' processing METHOD:POLLSTATUS, UID: '%s' - attendee has no copy" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+            log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' processing METHOD:POLLSTATUS, UID: '{uid}' - attendee has no copy", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
             returnValue((True, False, False, None,))
 
         processed_message = iTipProcessing.processPollStatus(self.message, self.recipient_calendar)
@@ -713,7 +713,7 @@
         processed_message.noInstanceIndexing = True
 
         # Update the attendee's copy of the event
-        log.debug("ImplicitProcessing - originator '%s' to recipient '%s' processing METHOD:POLLSTATUS, UID: '%s' - updating poll" % (self.originator.cuaddr, self.recipient.cuaddr, self.uid))
+        log.debug("ImplicitProcessing - originator '{orig}' to recipient '{recip}' processing METHOD:POLLSTATUS, UID: '{uid}' - updating poll", orig=self.originator.cuaddr, recip=self.recipient.cuaddr, uid=self.uid)
         yield self.writeCalendarResource(None, self.recipient_calendar_resource, processed_message)
 
         returnValue((True, False, False, None,))
@@ -762,8 +762,8 @@
                 AutoScheduleMode.acceptIfFreeDeclineIfBusy
             )
 
-        log.debug("ImplicitProcessing - recipient '%s' processing UID: '%s' - checking for auto-reply with mode: %s" % (self.recipient.cuaddr, self.uid, automode.name,))
 
+        log.debug("ImplicitProcessing - recipient '{recip}' processing UID: '{uid}' - checking for auto-reply with mode: {mode}", recip=self.recipient.cuaddr, uid=self.uid, mode=automode.name)
         cuas = self.recipient.record.calendarUserAddresses
 
         # First expand current one to get instances (only go 1 year into the future)
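[Editorial note on the pattern in the hunks above: the new calls pass a PEP-3101-style template plus keyword fields, and formatting is deferred until an observer actually renders the event. The following is a minimal sketch of that deferred-formatting idea, not Twisted's actual implementation; the class and method names are hypothetical.]

```python
# Sketch of deferred, format()-style logging as used in the hunks above.
# Illustration only -- not twisted.logger's real implementation.

class SketchLogger:
    """Collects events as (template, fields); renders only on demand."""

    def __init__(self):
        self.events = []

    def debug(self, fmt, **kwargs):
        # Formatting is deferred: store the template and fields as-is.
        self.events.append((fmt, kwargs))

    def render(self, event):
        fmt, kwargs = event
        # str.format() handles conversions such as {obj!r} natively.
        return fmt.format(**kwargs)


log = SketchLogger()
log.debug(
    "Lookup of {cuaddr} failed: {ex}",
    cuaddr="mailto:user@example.com", ex="timeout",
)

# Nothing is rendered until something asks for text:
print(log.render(log.events[0]))
```

Deferring the render is the main win over the old `%`-interpolation: events that no observer consumes never pay the formatting cost, and the keyword fields stay available for structured filtering.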

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/utils.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/utils.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/utils.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -41,7 +41,7 @@
 
         if len(objectResources) > 1:
             # Delete all but the first one
-            log.debug("Should only have zero or one scheduling object resource with UID '%s' in calendar home: %s" % (uid, calendar_home,))
+            log.debug("Should only have zero or one scheduling object resource with UID '{uid}' in calendar home: {home!r}", uid=uid, home=calendar_home)
             for resource in objectResources[1:]:
                 yield resource._removeInternal(internal_state=ComponentRemoveState.INTERNAL, useTrash=False)
             objectResources = objectResources[:1]

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/work.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/work.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/scheduling/work.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -698,7 +698,7 @@
         scheduler = CalDAVScheduler(self.transaction, home.uid())
 
         # Do the PUT processing
-        log.info("Implicit REPLY - attendee: '%s' to organizer: '%s', UID: '%s'" % (originator, recipient, itipmsg.resourceUID(),))
+        log.info("Implicit REPLY - attendee: '{att}' to organizer: '{org}', UID: '{uid}'", att=originator, org=recipient, uid=itipmsg.resourceUID())
         response = (yield scheduler.doSchedulingViaPUT(originator, (recipient,), itipmsg, internal_request=True))
         returnValue(response)
 
@@ -1095,15 +1095,15 @@
                 yield NamedLock.acquire(self.transaction, "ImplicitUIDLock:%s" % (hashlib.md5(resource.uid()).hexdigest(),))
 
                 # Send out a reply
-                log.debug("ImplicitProcessing - recipient '%s' processing UID: '%s' - auto-reply: %s" % (home.uid(), resource.uid(), self.partstat))
+                log.debug("ImplicitProcessing - recipient '{recip}' processing UID: '{uid}' - auto-reply: {partstat}", recip=home.uid(), uid=resource.uid(), partstat=self.partstat)
                 from txdav.caldav.datastore.scheduling.implicit import ImplicitScheduler
                 scheduler = ImplicitScheduler()
                 yield scheduler.sendAttendeeReply(self.transaction, resource)
             except Exception, e:
-                log.debug("ImplicitProcessing - auto-reply exception UID: '%s', %s" % (resource.uid(), str(e)))
+                log.debug("ImplicitProcessing - auto-reply exception UID: '{uid}', {ex}", uid=resource.uid(), ex=str(e))
                 raise
             except:
-                log.debug("ImplicitProcessing - auto-reply bare exception UID: '%s'" % (resource.uid(),))
+                log.debug("ImplicitProcessing - auto-reply bare exception UID: '{uid}'", uid=resource.uid())
                 raise
         else:
             log.debug("ImplicitProcessing - skipping auto-reply of missing ID: '{rid}'", rid=self.resourceID)

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/sql.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/sql.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -196,9 +196,9 @@
             ).on(txn))
             total = len(rows)
             count = 0
-            log.warn("{0} dropbox ids to migrate".format(total,))
+            log.warn("{total} dropbox ids to migrate", total=total)
         except RuntimeError, e:
-            log.error("Dropbox migration failed when cleaning out dropbox ids: {0}".format(e,))
+            log.error("Dropbox migration failed when cleaning out dropbox ids: {ex}", ex=e)
             yield txn.abort()
             raise
         else:
@@ -221,9 +221,9 @@
                     for dropbox_id in rows:
                         (yield self._upgradeDropbox(txn, dropbox_id))
                     count += len(rows)
-                    log.warn("{0} of {1} dropbox ids migrated".format(count, total,))
+                    log.warn("{count} of {total} dropbox ids migrated", count=count, total=total)
             except RuntimeError, e:
-                log.error("Dropbox migration failed for '{0}': {1}".format(dropbox_id, e,))
+                log.error("Dropbox migration failed for '{id}': {ex}", id=dropbox_id, ex=e)
                 yield txn.abort()
                 raise
             else:
@@ -243,11 +243,11 @@
         @type dropbox_id: C{str}
         """
 
-        log.debug("Processing dropbox id: {0}".format(dropbox_id,))
+        log.debug("Processing dropbox id: {id}", id=dropbox_id)
 
         # Get all affected calendar objects
         cobjs = (yield self._loadCalendarObjectsForDropboxID(txn, dropbox_id))
-        log.debug("  {0} affected calendar objects".format(len(cobjs),))
+        log.debug("  {len} affected calendar objects", len=len(cobjs))
 
         # Get names of each matching attachment
         at = Attachment._attachmentSchema
@@ -256,18 +256,18 @@
             From=at,
             Where=at.DROPBOX_ID == dropbox_id,
         ).on(txn))
-        log.debug("  {0} associated attachment objects".format(len(names),))
+        log.debug("  {len} associated attachment objects", len=len(names))
 
         # For each attachment, update each calendar object
         for name in names:
             name = name[0]
-            log.debug("  processing attachment object: {0}".format(name,))
+            log.debug("  processing attachment object: {name}", name=name)
             attachment = (yield DropBoxAttachment.load(txn, dropbox_id, name))
 
             # Check for orphans
             if len(cobjs) == 0:
                 # Just remove the attachment
-                log.warn("Orphaned dropbox id removed: {0}".format(attachment._path,))
+                log.warn("Orphaned dropbox id removed: {path}", path=attachment._path)
                 yield attachment.remove()
                 continue
 
@@ -278,35 +278,35 @@
                 if cobj._parentCollection.ownerHome()._resourceID == attachment._ownerHomeID:
                     owners.append(cobj)
                 cobj_by_UID[cobj.uid()].append(cobj)
-            log.debug("    {0} owner calendar objects".format(len(owners),))
-            log.debug("    {0} UIDs".format(len(cobj_by_UID),))
-            log.debug("    {0} total calendar objects".format(sum([len(items) for items in cobj_by_UID.values()]),))
+            log.debug("    {len} owner calendar objects", len=len(owners))
+            log.debug("    {len} UIDs", len=len(cobj_by_UID))
+            log.debug("    {len} total calendar objects", len=sum([len(items) for items in cobj_by_UID.values()]))
 
             if owners:
                 # Create the managed attachment without references to calendar objects.
                 managed = (yield attachment.convertToManaged())
-                log.debug("    converted attachment: {0!r}".format(attachment,))
+                log.debug("    converted attachment: {att!r}", att=attachment)
 
                 # Do conversion for each owner object
                 for owner_obj in owners:
 
                     # Add a reference to the managed attachment
                     mattachment = (yield managed.newReference(owner_obj._resourceID))
-                    log.debug("    added reference for: {0!r}".format(owner_obj,))
+                    log.debug("    added reference for: {owner!r}", owner=owner_obj)
 
                     # Rewrite calendar data
                     for cobj in cobj_by_UID[owner_obj.uid()]:
                         (yield cobj.convertAttachments(attachment, mattachment))
-                        log.debug("    re-wrote calendar object: {0!r}".format(cobj,))
+                        log.debug("    re-wrote calendar object: {owner!r}", owner=cobj)
             else:
                 # TODO: look for cobjs that were not changed and remove their ATTACH properties.
                 # These could happen if the owner object no longer exists.
                 # For now just remove the attachment
-                log.warn("Unowned dropbox id removed: {0}".format(attachment._path,))
+                log.warn("Unowned dropbox id removed: {path}", path=attachment._path)
                 yield attachment.remove()
                 continue
 
-        log.debug("  finished dropbox id: {0}".format(dropbox_id,))
+        log.debug("  finished dropbox id: {id}", id=dropbox_id)
 
 
     @inlineCallbacks
@@ -805,7 +805,7 @@
         """
 
         # Make sure the loop does not operate on any new calendars created during the loop
-        self.log.warn("Splitting calendars for user {0}".format(self._ownerUID,))
+        self.log.warn("Splitting calendars for user {uid}", uid=self._ownerUID)
         calendars = yield self.calendars()
         for calendar in calendars:
 
@@ -816,7 +816,7 @@
             if calendar.isTrash():
                 continue
             split_count = yield calendar.splitCollectionByComponentTypes()
-            self.log.warn("  Calendar: '{0}', split into {1}".format(calendar.name(), split_count + 1,))
+            self.log.warn("  Calendar: '{name}', split into {count}", name=calendar.name(), count=split_count + 1)
 
         yield self.ensureDefaultCalendarsExist()
 
@@ -1927,7 +1927,7 @@
 
         # Actually expand recurrence max
         for name in names:
-            self.log.info("Search falls outside range of index for {0} {1} to {2}".format(name, minDate, maxDate))
+            self.log.info("Search falls outside range of index for {name} {min} to {max}", name=name, min=minDate, max=maxDate)
             yield self.reExpandResource(name, minDate, maxDate)
 
 
@@ -4016,8 +4016,9 @@
                 recurrenceLimit = instances.limit
                 recurrenceLowerLimit = instances.lowerLimit
             except InvalidOverriddenInstanceError, e:
-                self.log.error("Invalid instance {0} when indexing {1} in {2}".format(
-                    e.rid, self._name, self._calendar,)
+                self.log.error(
+                    "Invalid instance {rid} when indexing {name} in {cal!r}",
+                    rid=e.rid, name=self._name, cal=self._calendar,
                 )
 
                 if txn._migrating:
@@ -4293,16 +4294,14 @@
 
             if unfixed:
                 self.log.error(
-                    "Calendar data id={0} had unfixable problems:\n  {1}".format(
-                        self._resourceID, "\n  ".join(unfixed),
-                    )
+                    "Calendar data id={id} had unfixable problems:\n  {problems}",
+                    id=self._resourceID, problems="\n  ".join(unfixed),
                 )
 
             if fixed:
                 self.log.error(
-                    "Calendar data id={0} had fixable problems:\n  {1}".format(
-                        self._resourceID, "\n  ".join(fixed),
-                    )
+                    "Calendar data id={id} had fixable problems:\n  {problems}",
+                    id=self._resourceID, problems="\n  ".join(fixed),
                 )
 
             # Check for on-demand data upgrade
@@ -4850,7 +4849,7 @@
             t = attachment.store(content_type, filename)
             yield readStream(stream, t.write)
         except Exception, e:
-            self.log.error("Unable to store attachment: {0}".format(e,))
+            self.log.error("Unable to store attachment: {ex}", ex=e)
             raise AttachmentStoreFailed
         yield t.loseConnection()
 
@@ -4908,7 +4907,7 @@
             # Check that this is a proper update
             oldattachment = (yield self.attachmentWithManagedID(managed_id))
             if oldattachment is None:
-                self.log.error("Missing managed attachment even though ATTACHMENT_CALENDAR_OBJECT indicates it is present: {0}".format(managed_id,))
+                self.log.error("Missing managed attachment even though ATTACHMENT_CALENDAR_OBJECT indicates it is present: {mid}", mid=managed_id)
                 raise AttachmentStoreFailed
 
             # We actually create a brand new attachment object for the update, but with the same managed-id. That way, other resources
@@ -4917,7 +4916,7 @@
             t = attachment.store(content_type, filename)
             yield readStream(stream, t.write)
         except Exception, e:
-            self.log.error("Unable to store attachment: {0}".format(e,))
+            self.log.error("Unable to store attachment: {ex}", ex=e)
             raise AttachmentStoreFailed
         yield t.loseConnection()
 
@@ -5266,7 +5265,7 @@
         except (ObjectResourceNameAlreadyExistsError, UIDExistsError, UIDExistsElsewhereError):
             raise InvalidSplit("Chosen UID exists elsewhere.")
         except Exception as e:
-            log.error("Unknown split exception: {}".format(str(e)))
+            log.error("Unknown split exception: {ex}", ex=str(e))
             raise InvalidSplit("Unknown error.")
         returnValue(olderObject)
 

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/test/test_implicit.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/test/test_implicit.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/test/test_implicit.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -1274,3 +1274,5 @@
         cobjs = (yield calendar2.calendarObjects())
         self.assertTrue(len(cobjs) == 0)
         yield self.commit()
+
+        self.flushLoggedErrors(ValueError)

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/test/test_sql.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/test/test_sql.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -1022,7 +1022,9 @@
         yield d1
         yield d2
 
+        self.flushLoggedErrors()
 
+
     @inlineCallbacks
     def test_transferSharingDetails(self):
         """

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/util.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/caldav/datastore/util.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -111,7 +111,7 @@
     try:
         record = yield recordFunction(cuaddr)
     except Exception, e:
-        log.debug("Lookup of %s failed: %s" % (cuaddr, e))
+        log.debug("Lookup of {cuaddr} failed: {ex}", cuaddr=cuaddr, ex=e)
         record = None
 
     if record is None:
@@ -204,18 +204,21 @@
         try:
             ctype = yield calendarObject.componentType()
         except Exception, e: # Don't stop for any error
-            log.error("  Failed to migrate calendar object: %s/%s/%s (%s)" % (
-                inCalendar.ownerHome().name(),
-                inCalendar.name(),
-                calendarObject.name(),
-                str(e)
-            ))
+            log.error(
+                "  Failed to migrate calendar object: {home}/{cal}/{rsrc} ({ex})",
+                home=inCalendar.ownerHome().name(),
+                cal=inCalendar.name(),
+                rsrc=calendarObject.name(),
+                ex=str(e),
+            )
             bad_count += 1
             continue
 
         if ctype not in ("VEVENT", "VTODO"):
-            log.error("Migration skipping unsupported (%s) calendar object %r"
-                      % (ctype, calendarObject))
+            log.error(
+                "Migration skipping unsupported ({ctype}) calendar object {obj!r}",
+                ctype=ctype, obj=calendarObject,
+            )
             continue
         if merge:
             mightConflict = yield outHome.hasCalendarResourceUIDSomewhereElse(
@@ -223,8 +226,8 @@
             )
             if mightConflict is not None:
                 log.warn(
-                    "Not migrating object %s/%s/%s due to potential conflict" %
-                    (outHome.uid(), outCalendar.name(), calendarObject.name())
+                    "Not migrating object {home}/{cal}/{rsrc} due to potential conflict",
+                    home=outHome.uid(), cal=outCalendar.name(), rsrc=calendarObject.name(),
                 )
                 continue
         try:
@@ -264,20 +267,22 @@
             ok_count += 1
 
         except InternalDataStoreError:
-            log.error("  InternalDataStoreError: Failed to migrate calendar object: %s/%s/%s" % (
-                inCalendar.ownerHome().name(),
-                inCalendar.name(),
-                calendarObject.name(),
-            ))
+            log.error(
+                "  InternalDataStoreError: Failed to migrate calendar object: {home}/{cal}/{rsrc}",
+                home=inCalendar.ownerHome().name(),
+                cal=inCalendar.name(),
+                rsrc=calendarObject.name(),
+            )
             bad_count += 1
 
         except Exception, e:
-            log.error("  %s: Failed to migrate calendar object: %s/%s/%s" % (
-                str(e),
-                inCalendar.ownerHome().name(),
-                inCalendar.name(),
-                calendarObject.name(),
-            ))
+            log.error(
+                "  {ex}: Failed to migrate calendar object: {home}/{cal}/{rsrc}",
+                ex=str(e),
+                home=inCalendar.ownerHome().name(),
+                cal=inCalendar.name(),
+                rsrc=calendarObject.name(),
+            )
             bad_count += 1
 
     returnValue((ok_count, bad_count,))
@@ -404,7 +409,7 @@
             yield _migrateCalendar(calendar, outCalendar, getComponent, merge=merge)
         except InternalDataStoreError:
             log.error(
-                "  Failed to migrate calendar: %s/%s" % (inHome.name(), name,)
+                "  Failed to migrate calendar: {home}/{cal}", home=inHome.name(), cal=name,
             )
 
     # No migration for notifications, since they weren't present in earlier

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/file.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/file.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/file.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -239,10 +239,10 @@
         fixed, unfixed = component.validVCardData(doFix=True, doRaise=False)
 
         if unfixed:
-            self.log.error("Address data at %s had unfixable problems:\n  %s" % (self._path.path, "\n  ".join(unfixed),))
+            self.log.error("Address data at {path} had unfixable problems:\n  {problems}", path=self._path.path, problems="\n  ".join(unfixed))
 
         if fixed:
-            self.log.error("Address data at %s had fixable problems:\n  %s" % (self._path.path, "\n  ".join(fixed),))
+            self.log.error("Address data at {path} had fixable problems:\n  {problems}", path=self._path.path, problems="\n  ".join(fixed))
 
         return component
 

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/index_file.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/index_file.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/index_file.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -80,10 +80,10 @@
 
     def reserveUID(self, uid):
         uid = uid.encode('utf-8')
-        self.log.debug("Reserving UID %r @ %r" % (
-            uid,
-            self.index.resource.fp.path
-        ))
+        self.log.debug(
+            "Reserving UID {uid} @ {path}",
+            uid=uid, path=self.index.resource.fp.path,
+        )
 
         def _handleFalse(result):
             if result is False:
@@ -101,10 +101,10 @@
 
     def unreserveUID(self, uid):
         uid = uid.encode('utf-8')
-        self.log.debug("Unreserving UID %r @ %r" % (
-            uid,
-            self.index.resource.fp.path
-        ))
+        self.log.debug(
+            "Unreserving UID {uid} @ {path}",
+            uid=uid, path=self.index.resource.fp.path,
+        )
 
         def _handleFalse(result):
             if result is False:
@@ -120,10 +120,10 @@
 
     def isReservedUID(self, uid):
         uid = uid.encode('utf-8')
-        self.log.debug("Is reserved UID %r @ %r" % (
-            uid,
-            self.index.resource.fp.path
-        ))
+        self.log.debug(
+            "Is reserved UID {uid} @ {path}",
+            uid=uid, path=self.index.resource.fp.path,
+        )
 
         def _checkValue((flags, value)):
             if value is None:
@@ -160,7 +160,7 @@
                 % (uid, self.index.resource)
             )
         except sqlite.Error, e:
-            log.error("Unable to reserve UID: %s", (e,))
+            log.error("Unable to reserve UID: {ex}", ex=e)
             self.index._db_rollback()
             raise
 
@@ -184,7 +184,7 @@
                         "delete from RESERVED where UID = :1", uid)
                     self.index._db_commit()
                 except sqlite.Error, e:
-                    log.error("Unable to unreserve UID: %s", (e,))
+                    log.error("Unable to unreserve UID: {ex}", ex=e)
                     self.index._db_rollback()
                     raise
 
@@ -212,7 +212,7 @@
                     self.index._db_execute("delete from RESERVED where UID = :1", uid)
                     self.index._db_commit()
                 except sqlite.Error, e:
-                    log.error("Unable to unreserve UID: %s", (e,))
+                    log.error("Unable to unreserve UID: {ex}", ex=e)
                     self.index._db_rollback()
                     raise
                 return False
@@ -336,7 +336,7 @@
             name_utf8 = name.encode("utf-8")
             if name is not None and self.resource.getChild(name_utf8) is None:
                 # Clean up
-                log.error("Stale resource record found for child %s with UID %s in %s" % (name, uid, self.resource))
+                log.error("Stale resource record found for child {name} with UID {uid} in {rsrc!r}", name=name, uid=uid, rsrc=self.resource)
                 self._delete_from_db(name, uid, False)
                 self._db_commit()
             else:
@@ -507,8 +507,10 @@
             if self.resource.getChild(name.encode("utf-8")):
                 yield row
             else:
-                log.error("vCard resource %s is missing from %s. Removing from index."
-                          % (name, self.resource))
+                log.error(
+                    "vCard resource {name} is missing from {rsrc}. Removing from index.",
+                    name=name, rsrc=self.resource,
+                )
                 self.deleteResource(name, None)
 
 
@@ -527,8 +529,10 @@
             if self.resource.getChild(name.encode("utf-8")):
                 yield row
             else:
-                log.error("AddressBook resource %s is missing from %s. Removing from index."
-                          % (name, self.resource))
+                log.error(
+                    "AddressBook resource {name} is missing from {rsrc}. Removing from index.",
+                    name=name, rsrc=self.resource,
+                )
                 self.deleteResource(name)
 
 
@@ -644,7 +648,7 @@
             try:
                 stream = child.open()
             except (IOError, OSError), e:
-                log.error("Unable to open resource %s: %s" % (name, e))
+                log.error("Unable to open resource {name}: {ex}", name=name, ex=e)
                 continue
 
             try:
@@ -654,9 +658,9 @@
                     vcard.validVCardData()
                     vcard.validForCardDAV()
                 except ValueError:
-                    log.error("Non-addressbook resource: %s" % (name,))
+                    log.error("Non-addressbook resource: {name}", name=name)
                 else:
-                    # log.info("Indexing resource: %s" % (name,))
+                    # log.info("Indexing resource: {name}", name=name)
                     self.addResource(name, vcard, True)
             finally:
                 stream.close()

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/sql.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/sql.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -2759,10 +2759,10 @@
                 fixed, unfixed = component.validVCardData(doFix=True, doRaise=False)
 
                 if unfixed:
-                    self.log.error("Address data id=%s had unfixable problems:\n  %s" % (self._resourceID, "\n  ".join(unfixed),))
+                    self.log.error("Address data id={id} had unfixable problems:\n  {problems}", id=self._resourceID, problems="\n  ".join(unfixed))
 
                 if fixed:
-                    self.log.error("Address data id=%s had fixable problems:\n  %s" % (self._resourceID, "\n  ".join(fixed),))
+                    self.log.error("Address data id={id} had fixable problems:\n  {problems}", id=self._resourceID, problems="\n  ".join(fixed))
 
                 if self._kind == _ABO_KIND_GROUP:
                     assert not component.hasProperty("X-ADDRESSBOOKSERVER-MEMBER"), "database group vCard text contains members %s" % (component,)

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/test/test_sql.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/test/test_sql.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -779,7 +779,9 @@
         yield d1
         yield d2
 
+        self.flushLoggedErrors()
 
+
     @inlineCallbacks
     def test_resourceLock(self):
         """

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/util.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/carddav/datastore/util.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -101,20 +101,22 @@
             ok_count += 1
 
         except InternalDataStoreError:
-            log.error("  InternalDataStoreError: Failed to migrate address book object: %s/%s/%s" % (
-                inAddressbook.ownerHome().name(),
-                inAddressbook.name(),
-                addressbookObject.name(),
-            ))
+            log.error(
+                "  InternalDataStoreError: Failed to migrate address book object: {home}/{adbk}/{rsrc}",
+                home=inAddressbook.ownerHome().name(),
+                adbk=inAddressbook.name(),
+                rsrc=addressbookObject.name(),
+            )
             bad_count += 1
 
         except Exception, e:
-            log.error("  %s: Failed to migrate address book object: %s/%s/%s" % (
-                str(e),
-                inAddressbook.ownerHome().name(),
-                inAddressbook.name(),
-                addressbookObject.name(),
-            ))
+            log.error(
+                "  {ex}: Failed to migrate address book object: {home}/{adbk}/{rsrc}",
+                ex=str(e),
+                home=inAddressbook.ownerHome().name(),
+                adbk=inAddressbook.name(),
+                rsrc=addressbookObject.name(),
+            )
             bad_count += 1
 
     returnValue((ok_count, bad_count,))
@@ -134,4 +136,4 @@
         try:
             yield _migrateAddressbook(addressbook, outAddressbook, getComponent)
         except InternalDataStoreError:
-            log.error("  Failed to migrate address book: %s/%s" % (inHome.name(), name,))
+            log.error("  Failed to migrate address book: {home}/{name}", home=inHome.name(), name=name)

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/file.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/file.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/file.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -944,7 +944,7 @@
         old_used = self.quotaUsedBytes()
         new_used = old_used + delta
         if new_used < 0:
-            self.log.error("Fixing quota adjusted below zero to %s by change amount %s" % (new_used, delta,))
+            self.log.error("Fixing quota adjusted below zero to {used} by change amount {delta}", used=new_used, delta=delta)
             new_used = 0
         self.properties()[PropertyName.fromElement(TwistedQuotaUsedProperty)] = TwistedQuotaUsedProperty(str(new_used))
 
@@ -1132,7 +1132,7 @@
                     trash.remove()
                     self.properties()._removeResource()
                 except Exception, e:
-                    self.log.error("Unable to delete trashed child at %s: %s" % (trash.fp, e))
+                    self.log.error("Unable to delete trashed child at {path}: {ex}", path=trash.fp, ex=e)
 
             self._transaction.addOperation(cleanup, "remove child backup %r" % (self._name,))
             def undo():

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/podding/migration/home_sync.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/podding/migration/home_sync.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/podding/migration/home_sync.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -77,7 +77,7 @@
             except Exception as ex:
                 f = Failure()
                 yield txn.abort()
-                log.error("{label} failed: {e}".format(label=label, e=str(ex)))
+                log.error("{label} failed: {e}", label=label, e=str(ex))
                 returnValue(f)
             else:
                 yield txn.commit()
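[Editorial note on the one-character change above: the old call rendered the message eagerly with `.format()` and handed the result to the logger, which a format-style logger would then try to format a second time. If the rendered text happens to contain braces, that second pass fails. A hedged, self-contained illustration of the hazard (this mimics the double-format behavior; it is not Twisted's code):]

```python
# Why eager .format() before a format-style logger is a hazard: the
# logging layer applies .format() to whatever message it receives, so
# braces inside an already-rendered string get parsed a second time.

template = "{label} failed: {e}"
pre_rendered = template.format(label="sync", e="bad value {'k': 1}")

try:
    pre_rendered.format()  # the second pass a format-style logger would do
    outcome = "ok"
except (KeyError, IndexError, ValueError):
    outcome = "second format pass failed"

print(outcome)
```

Passing the template and fields separately, as the fixed line does, avoids the double pass entirely.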

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/sql.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/sql.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -3021,7 +3021,6 @@
         if trash is None:
             if verbose:
                 msg = "No trash collection for principal"
-                print(msg)
                 log.info(msg)
             returnValue(None)
 
@@ -3038,14 +3037,12 @@
 
             if verbose:
                 msg = "Collection \"{}\":".format(displayName.encode("utf-8"))
-                print(msg)
                 log.info(msg)
             for child in children:
                 if verbose:
                     component = yield child.component()
                     summary = component.mainComponent().propertyValue("SUMMARY", "<no title>")
                     msg = "   Removing \"{}\"...".format(summary)
-                    print(msg)
                     log.info(msg)
                 yield child.purge(implicitly=False)
             if verbose:
@@ -3059,14 +3056,12 @@
             )
             if verbose:
                 msg = "Collection \"{}\":".format(displayName.encode("utf-8"))
-                print(msg)
                 log.info(msg)
             for child in children:
                 if verbose:
                     component = yield child.component()
                     summary = component.mainComponent().propertyValue("SUMMARY", "<no title>")
                     msg = "   Removing \"{}\"...".format(summary)
-                    print(msg)
                     log.info(msg)
                 yield child.purge(implicitly=False)
             if verbose:
@@ -3075,7 +3070,6 @@
             if collection.whenTrashed() < endTime:
                 if verbose:
                     msg = "Removing collection \"{}\"...".format(displayName.encode("utf-8"))
-                    print(msg)
                     log.info(msg)
                 yield collection.purge()
 

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/test/test_sql.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/test/test_sql.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -235,7 +235,7 @@
         try:
             (yield txn.subtransaction(_test, retries=0))[0][0]
         except AllRetriesFailed:
-            pass
+            self.flushLoggedErrors()
         else:
             self.fail("AllRetriesFailed not raised")
         self.assertEqual(ctr[0], 1)
@@ -264,7 +264,7 @@
         try:
             (yield txn.subtransaction(_test, retries=2))[0][0]
         except AllRetriesFailed:
-            pass
+            self.flushLoggedErrors()
         else:
             self.fail("AllRetriesFailed not raised")
         self.assertEqual(ctr[0], 3)
@@ -294,7 +294,7 @@
         try:
             result = yield started
         except AllRetriesFailed:
-            pass
+            self.flushLoggedErrors()
         else:
             self.fail("AllRetriesFailed not raised, %r returned instead" %
                       (result,))

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/test/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/test/util.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/test/util.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -348,7 +348,7 @@
                     yield cleanupTxn.execSQL("select {}.nextval from dual".format(sequence.name), [])
                     yield cleanupTxn.execSQL("alter sequence {} increment by {}".format(sequence.name, 1), [])
             except:
-                log.failure("setval sequence '{}' failed", sequence=sequence.name)
+                log.failure("setval sequence '{sequence}' failed", sequence=sequence.name)
         yield cleanupTxn.execSQL("update CALENDARSERVER set VALUE = '1' where NAME = 'MIN-VALID-REVISION'", [])
 
         yield cleanupTxn.commit()

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/upgrade/migrate.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/upgrade/migrate.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/upgrade/migrate.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -316,15 +316,12 @@
         """
         migrateFunc, destFunc = homeTypeLookup.get(homeType)
         uid = normalizeUUIDOrNot(fileHome.uid())
-        self.log.warn("Starting migration transaction %s UID %r" %
-                      (homeType, uid))
+        self.log.warn("Starting migration transaction {type} UID {uid}", type=homeType, uid=uid)
         sqlTxn = self.sqlStore.newTransaction(label="UpgradeToDatabaseStep.migrateOneHome")
         homeGetter = destFunc(sqlTxn)
         sqlHome = yield homeGetter(uid, create=False)
         if sqlHome is not None and not self.merge:
-            self.log.warn(
-                "%s home %r already existed not migrating" % (
-                    homeType, uid))
+            self.log.warn("{type} home {uid} already existed not migrating", type=homeType, uid=uid)
             yield sqlTxn.abort()
             returnValue(None)
         try:
@@ -395,7 +392,7 @@
     @inlineCallbacks
     def _upgradeAction(self, fileTxn, fileHome, homeType):
         uid = fileHome.uid()
-        self.log.warn("Migrating %s UID %r" % (homeType, uid))
+        self.log.warn("Migrating {type} UID {uid}", type=homeType, uid=uid)
         yield self.migrateOneHome(fileTxn, homeType, fileHome)
 
 
@@ -432,6 +429,6 @@
             (doCalendarUpgrade_4_to_5, "Calendar data upgrade from v4 to v5"),
             (doAddressbookUpgrade_1_to_2, "Addressbook data upgrade from v1 to v2"),
         ):
-            self.log.warn("Migration extra step: {}".format(description))
+            self.log.warn("Migration extra step: {desc}", desc=description)
             yield upgrade(self.sqlStore)
         self.sqlStore.setUpgrading(False)

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/upgrade/sql/upgrade.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/upgrade/sql/upgrade.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/upgrade/sql/upgrade.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -126,13 +126,13 @@
         """
         Do a database schema upgrade.
         """
-        self.log.warn("Beginning database %s check." % (self.versionDescriptor,))
+        self.log.warn("Beginning database {vers} check.", vers=self.versionDescriptor)
 
         # Retrieve information from schema and database
         dialect, required_version, actual_version = yield self.getVersions()
 
         if required_version == actual_version:
-            self.log.warn("%s version check complete: no upgrade needed." % (self.versionDescriptor.capitalize(),))
+            self.log.warn("{vers} version check complete: no upgrade needed.", vers=self.versionDescriptor.capitalize())
         elif required_version < actual_version:
             msg = "Actual %s version %s is more recent than the expected version %s. The service cannot be started" % (
                 self.versionDescriptor, actual_version, required_version,
@@ -146,7 +146,7 @@
             yield self.upgradeVersion(actual_version, required_version, dialect)
             self.sqlStore.setUpgrading(False)
 
-        self.log.warn("Database %s check complete." % (self.versionDescriptor,))
+        self.log.warn("Database {vers} check complete.", vers=self.versionDescriptor)
 
         returnValue(None)
 
@@ -167,7 +167,7 @@
             raise RuntimeError(msg)
         else:
             required_version = int(found.group(1))
-            self.log.warn("Required database key %s: %s." % (self.versionKey, required_version,))
+            self.log.warn("Required database key {key}: {vers}.", key=self.versionKey, vers=required_version)
 
         # Get the schema version in the current database
         sqlTxn = self.sqlStore.newTransaction(label="UpgradeDatabaseCoreStep.getVersions")
@@ -178,14 +178,14 @@
             yield sqlTxn.commit()
         except (RuntimeError, ValueError):
             f = Failure()
-            self.log.error("Database key %s cannot be determined." % (self.versionKey,))
+            self.log.error("Database key {key} cannot be determined.", key=self.versionKey)
             yield sqlTxn.abort()
             if self.defaultKeyValue is None:
                 f.raiseException()
             else:
                 actual_version = self.defaultKeyValue
 
-        self.log.warn("Actual database key %s: %s." % (self.versionKey, actual_version,))
+        self.log.warn("Actual database key {key}: {vers}.", key=self.versionKey, vers=actual_version)
 
         returnValue((dialect, required_version, actual_version,))
 
@@ -197,7 +197,7 @@
         looking for upgrade_from_X_to_Y.sql files that cover the full range of upgrades.
         """
 
-        self.log.warn("Starting %s upgrade from version %d to %d." % (self.versionDescriptor, fromVersion, toVersion,))
+        self.log.warn("Starting {vers} upgrade from version {fr} to {to}.", vers=self.versionDescriptor, fr=fromVersion, to=toVersion)
 
         # Scan for all possible upgrade files - returned sorted
         files = self.scanForUpgradeFiles(dialect)
@@ -210,10 +210,10 @@
             for fp in upgrades:
                 yield self.applyUpgrade(fp)
         except RuntimeError:
-            self.log.error("Database %s upgrade failed using: %s" % (self.versionDescriptor, fp.basename(),))
+            self.log.error("Database {vers} upgrade failed using: {path}", vers=self.versionDescriptor, path=fp.basename())
             raise
 
-        self.log.warn("%s upgraded from version %d to %d." % (self.versionDescriptor.capitalize(), fromVersion, toVersion,))
+        self.log.warn("{vers} upgraded from version {fr} to {to}.", vers=self.versionDescriptor.capitalize(), fr=fromVersion, to=toVersion)
 
 
     def getPathToUpgrades(self, dialect):
@@ -312,7 +312,7 @@
         """
         Apply the schema upgrade .sql file to the database.
         """
-        self.log.warn("Applying schema upgrade: %s" % (fp.basename(),))
+        self.log.warn("Applying schema upgrade: {path}", path=fp.basename())
         sqlTxn = self.sqlStore.newTransaction(label="UpgradeDatabaseSchemaStep.applyUpgrade")
         try:
             sql = fp.getContent()
@@ -356,7 +356,7 @@
             self.log.error(msg)
             raise RuntimeError(msg)
 
-        self.log.warn("Applying data upgrade: %s" % (module,))
+        self.log.warn("Applying data upgrade: {module}", module=module)
         yield doUpgrade(self.sqlStore)
 
 
@@ -467,7 +467,7 @@
         """
         Do upgrades.
         """
-        self.log.warn("Beginning database %s check." % (self.versionDescriptor,))
+        self.log.warn("Beginning database {vers} check.", vers=self.versionDescriptor)
 
         # Do each upgrade in our own predefined order
         self.sqlStore.setUpgrading(True)
@@ -477,6 +477,6 @@
 
         self.sqlStore.setUpgrading(False)
 
-        self.log.warn("Database %s check complete." % (self.versionDescriptor,))
+        self.log.warn("Database {vers} check complete.", vers=self.versionDescriptor)
 
         returnValue(None)

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/work/inbox_cleanup.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/work/inbox_cleanup.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/work/inbox_cleanup.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -64,8 +64,10 @@
         ).on(self.transaction))[0][0]
 
         if queuedCleanupOneInboxWorkItems:
-            log.error("Inbox cleanup work: Can't schedule per home cleanup because {} work items still queued.".format(
-                queuedCleanupOneInboxWorkItems))
+            log.error(
+                "Inbox cleanup work: Can't schedule per home cleanup because {count} work items still queued.",
+                count=queuedCleanupOneInboxWorkItems,
+            )
         else:
             # enumerate provisioned normal calendar homes
             ch = schema.CALENDAR_HOME
@@ -95,8 +97,10 @@
         ))
         if orphanNames:
             home = yield self.transaction.calendarHomeWithResourceID(self.homeID)
-            log.info("Inbox cleanup work in home: {homeUID}, deleting orphaned items: {orphanNames}".format(
-                homeUID=home.uid(), orphanNames=orphanNames))
+            log.info(
+                "Inbox cleanup work in home: {homeUID}, deleting orphaned items: {orphanNames}",
+                homeUID=home.uid(), orphanNames=orphanNames,
+            )
 
         # get old item names
         if float(config.InboxCleanup.ItemLifetimeDays) >= 0: # use -1 to disable; 0 is test case
@@ -107,8 +111,10 @@
             newDeleters = oldItemNames - orphanNames
             if newDeleters:
                 home = yield self.transaction.calendarHomeWithResourceID(self.homeID)
-                log.info("Inbox cleanup work in home: {homeUID}, deleting old items: {newDeleters}".format(
-                    homeUID=home.uid(), newDeleters=newDeleters))
+                log.info(
+                    "Inbox cleanup work in home: {homeUID}, deleting old items: {newDeleters}",
+                    homeUID=home.uid(), newDeleters=newDeleters,
+                )
         else:
             oldItemNames = set()
 
@@ -121,8 +127,10 @@
             newDeleters = itemNamesForOldEvents - oldItemNames - orphanNames
             if newDeleters:
                 home = yield self.transaction.calendarHomeWithResourceID(self.homeID)
-                log.info("Inbox cleanup work in home: {homeUID}, deleting items for old events: {newDeleters}".format(
-                    homeUID=home.uid(), newDeleters=newDeleters))
+                log.info(
+                    "Inbox cleanup work in home: {homeUID}, deleting items for old events: {newDeleters}",
+                    homeUID=home.uid(), newDeleters=newDeleters,
+                )
         else:
             itemNamesForOldEvents = set()
 

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/work/load_work.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/work/load_work.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/common/datastore/work/load_work.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -66,11 +66,11 @@
         All this work does is wait for the specified amount of time.
         """
 
-        log.debug("TestWork started: {}".format(self.jobID))
+        log.debug("TestWork started: {jobid}", jobid=self.jobID)
         if self.delay != 0:
             wait = Deferred()
             def _timedDeferred():
                 wait.callback(True)
             reactor.callLater(self.delay / 1000.0, _timedDeferred)
             yield wait
-        log.debug("TestWork done: {}".format(self.jobID))
+        log.debug("TestWork done: {jobid}", jobid=self.jobID)

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/dps/server.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/dps/server.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/dps/server.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -18,11 +18,13 @@
 import datetime
 import uuid
 
+from calendarserver.tap.caldav import ErrorLoggingMultiService
 from calendarserver.tap.util import getDBPool, storeFromConfigWithDPSServer
+
 from twext.python.log import Logger
 from twext.who.expression import MatchType, MatchFlags, Operand
+
 from twisted.application import service
-from twisted.application.service import MultiService
 from twisted.application.strports import service as strPortsService
 from twisted.internet.defer import inlineCallbacks, returnValue
 from twisted.internet.protocol import Factory
@@ -33,6 +35,7 @@
 from twistedcaldav import memcachepool
 from twistedcaldav.config import config
 from twistedcaldav.stdconfig import DEFAULT_CONFIG, DEFAULT_CONFIG_FILE
+
 from txdav.dps.commands import (
     RecordWithShortNameCommand, RecordWithUIDCommand, RecordWithGUIDCommand,
     RecordsWithRecordTypeCommand, RecordsWithEmailAddressCommand,
@@ -45,6 +48,7 @@
     UpdateRecordsCommand, FlushCommand,  # RemoveRecordsCommand,
 )
 from txdav.who.wiki import WikiAccessLevel
+
 from zope.interface import implementer
 
 
@@ -808,7 +812,8 @@
         else:
             setproctitle("CalendarServer Directory Proxy Service")
 
-        multiService = MultiService()
+        # Setup default logging behavior
+        multiService = ErrorLoggingMultiService(False, "", 0, 0, False)
 
         try:
             pool, txnFactory = getDBPool(config)

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/who/delegates.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/who/delegates.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/who/delegates.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -500,11 +500,11 @@
         # Try cache first
         delegateUIDs = yield self._memcacher.getMembers(delegator.uid, readWrite, expanded)
         if delegateUIDs is not None:
-            log.debug("_delegatesOfUIDs cached for: {} and read-write = {} and expanded = {}".format(delegator.uid, readWrite, expanded,))
+            log.debug("_delegatesOfUIDs cached for: {uid} and read-write = {rw} and expanded = {expanded}", uid=delegator.uid, rw=readWrite, expanded=expanded)
             returnValue(delegateUIDs)
 
         # Get from the store
-        log.debug("_delegatesOfUIDs for: {} and read-write = {} and expanded = {}".format(delegator.uid, readWrite, expanded,))
+        log.debug("_delegatesOfUIDs for: {uid} and read-write = {rw} and expanded = {expanded}", uid=delegator.uid, rw=readWrite, expanded=expanded)
         if delegator.thisServer():
             delegateUIDs = yield txn.delegates(delegator.uid, readWrite, expanded=expanded)
 
@@ -539,11 +539,11 @@
         # Try cache first
         delegatorUIDs = yield self._memcacher.getMemberships(delegate.uid, readWrite)
         if delegatorUIDs is not None:
-            log.debug("_delegatedToUIDs cached for: {} and read-write = {}".format(delegate.uid, readWrite,))
+            log.debug("_delegatedToUIDs cached for: {uid} and read-write = {rw}", uid=delegate.uid, rw=readWrite)
             returnValue(delegatorUIDs)
 
         # Get from the store
-        log.debug("_delegatedToUIDs for: {} and read-write = {}".format(delegate.uid, readWrite,))
+        log.debug("_delegatedToUIDs for: {uid} and read-write = {rw}", uid=delegate.uid, rw=readWrite)
         delegatorUIDs = (yield txn.delegators(delegate.uid, readWrite))
         if not onlyThisServer and config.Servers.Enabled:
             delegatorUIDs.update((yield self._podDelegators(txn, delegate, readWrite)))
@@ -583,7 +583,7 @@
         @rtype: a Deferred which fires a set of L{str}
         """
 
-        log.debug("_podDelegates for: {} and read-write = {} and expanded = {}".format(delegator.uid, readWrite, expanded,))
+        log.debug("_podDelegates for: {uid} and read-write = {rw} and expanded = {expanded}", uid=delegator.uid, rw=readWrite, expanded=expanded)
         return txn.store().conduit.send_get_delegates(txn, delegator, readWrite, expanded)
 
 
@@ -601,7 +601,7 @@
         @rtype: a Deferred which fires a set of L{str}
         """
 
-        log.debug("_podDelegators for: {} and read-write = {}".format(delegate.uid, readWrite,))
+        log.debug("_podDelegators for: {uid} and read-write = {rw}", uid=delegate.uid, rw=readWrite)
         results = yield DeferredList([
             txn.store().conduit.send_get_delegators(txn, server, delegate, readWrite) for
             server in txn.directoryService().serversDB().allServersExceptThis()

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/who/directory.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/who/directory.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/who/directory.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -448,12 +448,12 @@
         # if config.EnableSACLs and self.CheckSACL:
         #     username = self.shortNames[0]
         #     if self.CheckSACL(username, "calendar") != 0:
-        #         self.log.debug("%s is not enabled for calendaring due to SACL"
-        #                        % (username,))
+        #         self.log.debug("{user} is not enabled for calendaring due to SACL",
+        #                        user=username)
         #         self.enabledForCalendaring = False
         #     if self.CheckSACL(username, "addressbook") != 0:
-        #         self.log.debug("%s is not enabled for addressbooks due to SACL"
-        #                        % (username,))
+        #         self.log.debug("{user} is not enabled for addressbooks due to SACL",
+        #                        user=username)
         #         self.enabledForAddressBooks = False
 
     @property

Modified: CalendarServer/branches/users/cdaboo/newlog/txdav/xml/base.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txdav/xml/base.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txdav/xml/base.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -214,8 +214,8 @@
             else:
                 if not (isinstance(child, PCDATAElement) and child.isWhitespace()):
                     log.debug(
-                        "Child of type %s is unexpected and therefore ignored in %s element"
-                        % (child.sname(), self.sname())
+                        "Child of type {child} is unexpected and therefore ignored in {this} element",
+                        child=child.sname(), this=self.sname(),
                     )
 
         for qname, (min, max) in allowed_children.items():
@@ -233,7 +233,7 @@
         if self.allowed_attributes:
             for name in attributes:
                 if name not in self.allowed_attributes:
-                    log.debug("Attribute %s is unexpected in %s element" % (name, self.sname()))
+                    log.debug("Attribute {attr} is unexpected in {this} element", attr=name, this=self.sname())
                 my_attributes[name] = attributes[name]
 
             for name, required in self.allowed_attributes.items():
@@ -243,8 +243,7 @@
 
         else:
             if not isinstance(self, WebDAVUnknownElement) and attributes:
-                log.debug("Attributes %s are unexpected in %s element"
-                          % (attributes.keys(), self.sname()))
+                log.debug("Attributes {attrs} are unexpected in {this} element", attrs=attributes.keys(), this=self.sname())
             my_attributes.update(attributes)
 
         self.attributes = my_attributes
@@ -390,7 +389,7 @@
                 try:
                     child.addToDOM(document, element)
                 except:
-                    log.error("Unable to add child %r of element %s to DOM" % (child, self))
+                    log.error("Unable to add child {child!r} of element {this!r} to DOM", child=child, this=self)
                     raise
 
 
@@ -511,7 +510,7 @@
         try:
             parent.appendChild(self.element(document))
         except TypeError:
-            log.error("Invalid PCDATA: %r" % (self.data,))
+            log.error("Invalid PCDATA: {data!r}", data=self.data)
             raise
 
 

Modified: CalendarServer/branches/users/cdaboo/newlog/txweb2/dav/http.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txweb2/dav/http.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txweb2/dav/http.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -191,8 +191,8 @@
 
         if code > 400:  # Error codes only
             log.error(
-                "Error during {method} for {path}: {message}",
-                method=self.method, path=path, message=message
+                "Error during {method} for {path}: {msg}",
+                method=self.method, path=path, msg=message
             )
 
         uri = path[self.path_basename_len:]
@@ -264,8 +264,8 @@
 
         if code > 400:  # Error codes only
             log.error(
-                "Error during {method} for {property}: {message}",
-                method=self.method, property=property, message=message
+                "Error during {method} for {property}: {msg}",
+                method=self.method, property=property, msg=message
             )
 
         children = []

Modified: CalendarServer/branches/users/cdaboo/newlog/txweb2/http.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txweb2/http.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txweb2/http.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -492,7 +492,7 @@
 
     def _error(self, reason):
         if reason.check(error.ConnectionLost):
-            log.info("Request error: {message}", message=reason.getErrorMessage())
+            log.info("Request error: {msg}", msg=reason.getErrorMessage())
         else:
             log.failure("Request error", reason)
             # Only bother with cleanup on errors other than lost connection.

Modified: CalendarServer/branches/users/cdaboo/newlog/txweb2/log.py
===================================================================
--- CalendarServer/branches/users/cdaboo/newlog/txweb2/log.py	2015-11-10 00:00:49 UTC (rev 15300)
+++ CalendarServer/branches/users/cdaboo/newlog/txweb2/log.py	2015-11-10 00:06:26 UTC (rev 15301)
@@ -212,12 +212,12 @@
     def start(self):
         """Start observing log events."""
         # Use the root publisher to bypass log level filtering
-        log.publisher.addObserver(self.emit, filtered=False)
+        log.observer.addObserver(self.emit)
 
 
     def stop(self):
         """Stop observing log events."""
-        log.publisher.removeObserver(self.emit)
+        log.observer.removeObserver(self.emit)
 
 
 
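The whole commit applies one mechanical transformation: eager `%`-interpolated log messages are replaced with `twisted.logger`-style format strings plus keyword fields, which are rendered lazily when an observer emits the event. A minimal stdlib sketch of that rendering step (the `render_log_event` helper is hypothetical, standing in for what `twisted.logger` does internally):

```python
from string import Formatter

# twisted.logger-style events carry a PEP 3101 format string plus named
# fields; rendering is deferred until an observer needs the text.
# Hypothetical stand-in for that rendering step, using only the stdlib:
def render_log_event(fmt, **fields):
    # Named fields only: bare "{}" placeholders are not matched against
    # positional arguments, which is why every call site in this diff
    # names its parameters (used=..., delta=..., uid=...).
    return Formatter().vformat(fmt, (), fields)

# Old style (removed by this commit) formatted eagerly:
#   log.error("Fixing quota adjusted below zero to %s by change amount %s"
#             % (new_used, delta))
# New style passes the message template and fields through separately:
msg = render_log_event(
    "Fixing quota adjusted below zero to {used} by change amount {delta}",
    used=0, delta=-5,
)
```

Besides skipping string work for suppressed log levels, the keyword fields survive into the structured log event, so observers can filter on `used` or `delta` instead of parsing rendered text.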

