[CalendarServer-changes] [14512] CalendarServer/branches/users/cdaboo/pod2pod-migration

source_changes at macosforge.org
Thu Mar 5 12:20:11 PST 2015


Revision: 14512
          http://trac.calendarserver.org//changeset/14512
Author:   cdaboo at apple.com
Date:     2015-03-05 12:20:10 -0800 (Thu, 05 Mar 2015)
Log Message:
-----------
Merge from trunk.

Modified Paths:
--------------
    CalendarServer/branches/users/cdaboo/pod2pod-migration/calendarserver/tap/util.py
    CalendarServer/branches/users/cdaboo/pod2pod-migration/conf/caldavd-apple.plist
    CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/database.py
    CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/directory/calendaruserproxy.py
    CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/directory/test/test_proxyprincipaldb.py
    CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/stdconfig.py
    CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/base/datastore/subpostgres.py
    CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/base/datastore/test/test_subpostgres.py
    CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/caldav/datastore/test/test_sql.py
    CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/carddav/datastore/sql.py
    CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/carddav/datastore/test/test_sql.py
    CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/common/datastore/sql_util.py
    CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/common/datastore/test/test_sql.py

Property Changed:
----------------
    CalendarServer/branches/users/cdaboo/pod2pod-migration/


Property changes on: CalendarServer/branches/users/cdaboo/pod2pod-migration
___________________________________________________________________
Modified: svn:mergeinfo
   - /CalDAVTester/trunk:11193-11198
/CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/release/CalendarServer-4.3-dev:10180-10190,10192
/CalendarServer/branches/release/CalendarServer-5.1-dev:11846
/CalendarServer/branches/release/CalendarServer-5.2-dev:11972,12357-12358,12794,12814
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/cross-pod-sharing:12038-12191
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/fix-no-ischedule:11607-11871
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/json:11622-11912
/CalendarServer/branches/users/cdaboo/managed-attachments:9985-10145
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/performance-tweaks:11824-11836
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/reverse-proxy-pods:11875-11900
/CalendarServer/branches/users/cdaboo/scheduling-queue-refresh:11783-12557
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/sharing-in-the-store:11935-12016
/CalendarServer/branches/users/cdaboo/store-scheduling:10876-11129
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/gaya/cleanrevisions:12152-12334
/CalendarServer/branches/users/gaya/groupsharee2:13669-13773
/CalendarServer/branches/users/gaya/sharedgroupfixes:12120-12142
/CalendarServer/branches/users/gaya/sharedgroups-3:11088-11204
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/digest-auth-redux:10624-10635
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/enforce-max-requests:11640-11643
/CalendarServer/branches/users/glyph/hang-fix:11465-11491
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/launchd-wrapper-bis:11413-11436
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/log-cleanups:11691-11731
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api:10048-10073
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/queue-locking-and-timing:10204-10289
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/start-service-start-loop:11060-11065
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/unshare-when-access-revoked:10562-10595
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/warning-cleanups:11347-11357
/CalendarServer/branches/users/glyph/whenNotProposed:11881-11897
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/move2who:12819-12860
/CalendarServer/branches/users/sagen/move2who-2:12861-12898
/CalendarServer/branches/users/sagen/move2who-3:12899-12913
/CalendarServer/branches/users/sagen/move2who-4:12914-13157
/CalendarServer/branches/users/sagen/move2who-5:13158-13163
/CalendarServer/branches/users/sagen/newcua:13309-13327
/CalendarServer/branches/users/sagen/newcua-1:13328-13330
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/recordtypes:13648-13656
/CalendarServer/branches/users/sagen/recordtypes-2:13657
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/sagen/testing:10827-10851,10853-10855
/CalendarServer/branches/users/wsanchez/psycopg2cffi:14427-14439
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:14338-14482
   + /CalDAVTester/trunk:11193-11198
/CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/release/CalendarServer-4.3-dev:10180-10190,10192
/CalendarServer/branches/release/CalendarServer-5.1-dev:11846
/CalendarServer/branches/release/CalendarServer-5.2-dev:11972,12357-12358,12794,12814
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/cross-pod-sharing:12038-12191
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/fix-no-ischedule:11607-11871
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/json:11622-11912
/CalendarServer/branches/users/cdaboo/managed-attachments:9985-10145
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/performance-tweaks:11824-11836
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/reverse-proxy-pods:11875-11900
/CalendarServer/branches/users/cdaboo/scheduling-queue-refresh:11783-12557
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/sharing-in-the-store:11935-12016
/CalendarServer/branches/users/cdaboo/store-scheduling:10876-11129
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/gaya/cleanrevisions:12152-12334
/CalendarServer/branches/users/gaya/groupsharee2:13669-13773
/CalendarServer/branches/users/gaya/sharedgroupfixes:12120-12142
/CalendarServer/branches/users/gaya/sharedgroups-3:11088-11204
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/digest-auth-redux:10624-10635
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/enforce-max-requests:11640-11643
/CalendarServer/branches/users/glyph/hang-fix:11465-11491
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/launchd-wrapper-bis:11413-11436
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/log-cleanups:11691-11731
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api:10048-10073
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/queue-locking-and-timing:10204-10289
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/start-service-start-loop:11060-11065
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/unshare-when-access-revoked:10562-10595
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/warning-cleanups:11347-11357
/CalendarServer/branches/users/glyph/whenNotProposed:11881-11897
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/move2who:12819-12860
/CalendarServer/branches/users/sagen/move2who-2:12861-12898
/CalendarServer/branches/users/sagen/move2who-3:12899-12913
/CalendarServer/branches/users/sagen/move2who-4:12914-13157
/CalendarServer/branches/users/sagen/move2who-5:13158-13163
/CalendarServer/branches/users/sagen/newcua:13309-13327
/CalendarServer/branches/users/sagen/newcua-1:13328-13330
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/recordtypes:13648-13656
/CalendarServer/branches/users/sagen/recordtypes-2:13657
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/sagen/testing:10827-10851,10853-10855
/CalendarServer/branches/users/wsanchez/psycopg2cffi:14427-14439
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:14338-14507

Modified: CalendarServer/branches/users/cdaboo/pod2pod-migration/calendarserver/tap/util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/pod2pod-migration/calendarserver/tap/util.py	2015-03-05 19:47:06 UTC (rev 14511)
+++ CalendarServer/branches/users/cdaboo/pod2pod-migration/calendarserver/tap/util.py	2015-03-05 20:20:10 UTC (rev 14512)
@@ -150,7 +150,6 @@
         options=config.Postgres.Options,
         uid=uid, gid=gid,
         spawnedDBUser=config.SpawnedDBUser,
-        importFileName=config.DBImportFile,
         pgCtl=config.Postgres.Ctl,
         initDB=config.Postgres.Init,
     )
@@ -161,8 +160,8 @@
     """
     Create a postgres DB-API connector from the given configuration.
     """
-    import pgdb
-    return DBAPIConnector(pgdb, postgresPreflight, config.DSN).connect
+    from txdav.base.datastore.subpostgres import postgres
+    return DBAPIConnector(postgres, postgresPreflight, config.DSN).connect
 
 
 

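The tap/util.py hunk above swaps the hard-coded `import pgdb` for a `postgres` module alias re-exported by `txdav.base.datastore.subpostgres`, so the concrete DB-API driver is chosen in exactly one place and `DBAPIConnector` just receives a module object. A minimal sketch of that indirection, using stdlib `sqlite3` as a stand-in driver (`makeConnector` and `noopPreflight` are hypothetical names, not CalendarServer's API):

```python
import sqlite3  # stand-in for the real driver module (pgdb or pg8000)

def makeConnector(dbmodule, preflight, *connectArgs, **connectKwargs):
    """
    Return a zero-argument connect callable bound to a specific DB-API
    module, mirroring how DBAPIConnector(...).connect is used above.
    """
    def connect():
        connection = dbmodule.connect(*connectArgs, **connectKwargs)
        preflight(connection)  # per-connection setup (e.g. session settings)
        return connection
    return connect

def noopPreflight(connection):
    # The real postgresPreflight issues session SETs; nothing needed here.
    pass

# Swapping drivers becomes a one-line change at the call site:
connect = makeConnector(sqlite3, noopPreflight, ":memory:")
conn = connect()
cursor = conn.cursor()
cursor.execute("select 1")
```

Because the module object (not a string) travels to the connector, any DB-API 2.0 compliant driver with a matching `connect()` signature can be substituted without touching the call sites.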
Modified: CalendarServer/branches/users/cdaboo/pod2pod-migration/conf/caldavd-apple.plist
===================================================================
--- CalendarServer/branches/users/cdaboo/pod2pod-migration/conf/caldavd-apple.plist	2015-03-05 19:47:06 UTC (rev 14511)
+++ CalendarServer/branches/users/cdaboo/pod2pod-migration/conf/caldavd-apple.plist	2015-03-05 20:20:10 UTC (rev 14512)
@@ -99,8 +99,6 @@
     <string></string>
     <key>DSN</key>
     <string></string>
-    <key>DBImportFile</key>
-    <string>/Library/Server/Calendar and Contacts/DataDump.sql</string>
     <key>Postgres</key>
     <dict>
         <key>Ctl</key>
@@ -331,7 +329,7 @@
 
     <!-- Log levels -->
     <key>DefaultLogLevel</key>
-    <string>warn</string> <!-- debug, info, warn, error -->
+    <string>info</string> <!-- debug, info, warn, error -->
 
     <!-- Server process ID file -->
     <key>PIDFile</key>

Modified: CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/database.py
===================================================================
--- CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/database.py	2015-03-05 19:47:06 UTC (rev 14511)
+++ CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/database.py	2015-03-05 20:20:10 UTC (rev 14512)
@@ -14,11 +14,19 @@
 # limitations under the License.
 ##
 
+"""
+Generic ADAPI database access object.
+"""
+
+__all__ = [
+    "AbstractADBAPIDatabase",
+]
+
 import thread
 
 try:
-    import pgdb as postgres
-except:
+    from txdav.base.datastore.subpostgres import postgres
+except ImportError:
     postgres = None
 
 from twisted.enterprise.adbapi import ConnectionPool
@@ -29,15 +37,9 @@
 
 from twistedcaldav.config import ConfigurationError
 
-"""
-Generic ADAPI database access object.
-"""
+log = Logger()
 
-__all__ = [
-    "AbstractADBAPIDatabase",
-]
 
-log = Logger()
 
 class ConnectionClosingThreadPool(ThreadPool):
     """

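Besides moving the docstring and `__all__` to the top of the module, the database.py hunk narrows the bare `except:` to `except ImportError`, so a genuine failure inside the driver still propagates instead of being silently swallowed. The optional-import pattern in isolation (`not_a_real_driver` and `requireDriver` are deliberately hypothetical):

```python
try:
    import not_a_real_driver as postgres  # hypothetical missing module
except ImportError:
    # The driver is optional: record its absence instead of crashing at
    # import time. Code paths that need it must check for None first.
    postgres = None

def requireDriver():
    """Fail loudly, and only at the point of use, when no driver exists."""
    if postgres is None:
        raise RuntimeError("no Postgres DB-API driver available")
    return postgres
```

A bare `except:` here would also have masked errors like a `SyntaxError` or misconfiguration inside the driver package, which is why the narrower clause is the safer idiom.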
Modified: CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/directory/calendaruserproxy.py
===================================================================
--- CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/directory/calendaruserproxy.py	2015-03-05 19:47:06 UTC (rev 14511)
+++ CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/directory/calendaruserproxy.py	2015-03-05 20:20:10 UTC (rev 14512)
@@ -873,9 +873,13 @@
     """
 
     def __init__(self, host, database, user=None, password=None, dbtype=None):
+        from txdav.base.datastore.subpostgres import postgres
 
         ADBAPIPostgreSQLMixin.__init__(self,)
-        ProxyDB.__init__(self, "Proxies", "pgdb", (), host=host, database=database, user=user, password=password,)
+        ProxyDB.__init__(
+            self, "Proxies", postgres.__name__, (),
+            host=host, database=database, user=user, password=password,
+        )
         if dbtype:
             ProxyDB.schema_type = dbtype
 

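Note that `ProxyDB` now receives `postgres.__name__` (e.g. "pgdb" or "pg8000") rather than a hard-coded `"pgdb"` string, so whichever module subpostgres aliased as `postgres` is the one named downstream. The mechanics in isolation, with `sqlite3` standing in for the real driver:

```python
import importlib
import sqlite3 as postgres  # stand-in alias, as subpostgres does for pgdb/pg8000

# __name__ recovers the real importable name behind the alias:
dbmoduleName = postgres.__name__

# A consumer handed only the name can re-import the same module lazily:
driver = importlib.import_module(dbmoduleName)
```

This keeps string-typed plumbing (which ProxyDB's constructor expects) consistent with the single point of driver selection.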
Modified: CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/directory/test/test_proxyprincipaldb.py
===================================================================
--- CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/directory/test/test_proxyprincipaldb.py	2015-03-05 19:47:06 UTC (rev 14511)
+++ CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/directory/test/test_proxyprincipaldb.py	2015-03-05 20:20:10 UTC (rev 14512)
@@ -598,7 +598,7 @@
 
 
 try:
-    import pgdb as postgres
+    from txdav.base.datastore.subpostgres import postgres
 except ImportError:
     ProxyPrincipalDBPostgreSQL.skip = True
 else:

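The same optional import drives test discovery in test_proxyprincipaldb.py: when the driver is absent, the Postgres test class is skipped (via trial's class-level `skip` attribute) rather than erroring. An equivalent sketch with stdlib `unittest` (class and module names are hypothetical):

```python
import unittest

try:
    from some_missing_driver import connect  # hypothetical driver import
    _driverAvailable = True
except ImportError:
    _driverAvailable = False

@unittest.skipUnless(_driverAvailable, "Postgres DB-API driver not installed")
class ProxyDBTests(unittest.TestCase):
    def test_roundtrip(self):
        # Only reached when the driver imported successfully.
        self.fail("never runs when the driver is missing")
```

Skipping, rather than passing vacuously or failing, keeps the missing dependency visible in the test report.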
Modified: CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/stdconfig.py
===================================================================
--- CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/stdconfig.py	2015-03-05 19:47:06 UTC (rev 14511)
+++ CalendarServer/branches/users/cdaboo/pod2pod-migration/twistedcaldav/stdconfig.py	2015-03-05 20:20:10 UTC (rev 14512)
@@ -202,8 +202,6 @@
 
     "SpawnedDBUser": "caldav", # The username to use when DBType is empty
 
-    "DBImportFile": "", # File path to SQL file to import at startup (includes schema)
-
     "DSN": "", # Data Source Name.  Used to connect to an external
                # database if DBType is non-empty.  Format varies
                # depending on database type.

Modified: CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/base/datastore/subpostgres.py
===================================================================
--- CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/base/datastore/subpostgres.py	2015-03-05 19:47:06 UTC (rev 14511)
+++ CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/base/datastore/subpostgres.py	2015-03-05 20:20:10 UTC (rev 14512)
@@ -38,6 +38,7 @@
 from twisted.internet.defer import Deferred
 from txdav.base.datastore.dbapiclient import DBAPIConnector
 from txdav.base.datastore.dbapiclient import postgresPreflight
+from txdav.common.icommondatastore import InternalDataStoreError
 
 from twisted.application.service import MultiService
 
@@ -48,10 +49,12 @@
 _MAGIC_READY_COOKIE = "database system is ready to accept connections"
 
 
-class _PostgresMonitor(ProcessProtocol):
+
+class PostgresMonitor(ProcessProtocol):
     """
     A monitoring protocol which watches the postgres subprocess.
     """
+    log = Logger()
 
     def __init__(self, svc=None):
         self.lineReceiver = LineReceiver()
@@ -77,18 +80,22 @@
 
 
     def outReceived(self, out):
-        log.warn("received postgres stdout {out!r}", out=out)
+        for line in out.split("\n"):
+            if line:
+                self.log.info("{message}", message=line)
         # self.lineReceiver.dataReceived(out)
 
 
     def errReceived(self, err):
-        log.warn("received postgres stderr {err}", err=err)
+        for line in err.split("\n"):
+            if line:
+                self.log.error("{message}", message=line)
         self.lineReceiver.dataReceived(err)
 
 
     def processEnded(self, reason):
-        log.warn(
-            "postgres process ended with status {status}",
+        self.log.info(
+            "pg_ctl process ended with status={status}",
             status=reason.value.status
         )
         # If pg_ctl exited with zero, we were successful in starting postgres
@@ -98,7 +105,7 @@
         if reason.value.status == 0:
             self.completionDeferred.callback(None)
         else:
-            log.warn("Could not start postgres; see postgres.log")
+            self.log.error("Could not start postgres; see postgres.log")
             self.completionDeferred.errback(reason)
 
 
@@ -161,7 +168,7 @@
         """
         The process is over, fire the Deferred with the output.
         """
-        self.deferred.callback(''.join(self.output))
+        self.deferred.callback("".join(self.output))
 
 
 
@@ -179,7 +186,6 @@
         testMode=False,
         uid=None, gid=None,
         spawnedDBUser="caldav",
-        importFileName=None,
         pgCtl="pg_ctl",
         initDB="initdb",
         reactor=None,
@@ -196,9 +202,6 @@
 
         @param spawnedDBUser: the postgres role
         @type spawnedDBUser: C{str}
-        @param importFileName: path to SQL file containing previous data to
-            import
-        @type importFileName: C{str}
         """
 
         # FIXME: By default there is very little (4MB) shared memory available,
@@ -262,12 +265,20 @@
         self.uid = uid
         self.gid = gid
         self.spawnedDBUser = spawnedDBUser
-        self.importFileName = importFileName
         self.schema = schema
         self.monitor = None
         self.openConnections = []
-        self._pgCtl = pgCtl
-        self._initdb = initDB
+
+        def locateCommand(name, cmd):
+            for found in which(cmd):
+                return found
+
+            raise InternalDataStoreError(
+                "Unable to locate {} command: {}".format(name, cmd)
+            )
+
+        self._pgCtl = locateCommand("pg_ctl", pgCtl)
+        self._initdb = locateCommand("initdb", initDB)
         self._reactor = reactor
         self._postgresPid = None
 
@@ -280,33 +291,22 @@
         return self._reactor
 
 
-    def pgCtl(self):
-        """
-        Locate the path to pg_ctl.
-        """
-        return which(self._pgCtl)[0]
-
-
-    def initdb(self):
-        return which(self._initdb)[0]
-
-
     def activateDelayedShutdown(self):
         """
         Call this when starting database initialization code to
         protect against shutdown.
 
-        Sets the delayedShutdown flag to True so that if reactor
-        shutdown commences, the shutdown will be delayed until
-        deactivateDelayedShutdown is called.
+        Sets the delayedShutdown flag to True so that if reactor shutdown
+        commences, the shutdown will be delayed until deactivateDelayedShutdown
+        is called.
         """
         self.delayedShutdown = True
 
 
     def deactivateDelayedShutdown(self):
         """
-        Call this when database initialization code has completed so
-        that the reactor can shutdown.
+        Call this when database initialization code has completed so that the
+        reactor can shutdown.
         """
         self.delayedShutdown = False
         if self.shutdownDeferred:
@@ -317,24 +317,72 @@
         if databaseName is None:
             databaseName = self.databaseName
 
+        m = getattr(self, "_connectorFor_{}".format(postgres.__name__), None)
+        if m is None:
+            raise InternalDataStoreError(
+                "Unknown Postgres DBM module: {}".format(postgres)
+            )
+
+        return m(databaseName)
+
+
+    def _connectorFor_pgdb(self, databaseName):
+        dsn = "{}:dbname={}".format(self.host, databaseName)
+
         if self.spawnedDBUser:
-            dsn = "{}:dbname={}:{}".format(
-                self.host, databaseName, self.spawnedDBUser
-            )
+            dsn = "{}:{}".format(dsn, self.spawnedDBUser)
         elif self.uid is not None:
-            dsn = "{}:dbname={}:{}".format(
-                self.host, databaseName, pwd.getpwuid(self.uid).pw_name
-            )
-        else:
-            dsn = "{}:dbname={}".format(self.host, databaseName)
+            dsn = "{}:{}".format(dsn, pwd.getpwuid(self.uid).pw_name)
 
         kwargs = {}
         if self.port:
             kwargs["host"] = "{}:{}".format(self.host, self.port)
 
+        log.info(
+            "Connecting to Postgres with dsn={dsn!r} args={args}",
+            dsn=dsn, args=kwargs
+        )
+
         return DBAPIConnector(postgres, postgresPreflight, dsn, **kwargs)
 
 
+    def _connectorFor_pg8000(self, databaseName):
+        kwargs = dict(database=databaseName)
+
+        if self.host.startswith("/"):
+            # We're using a socket file
+            socketFP = CachingFilePath(self.host)
+
+            if socketFP.isdir():
+                # We have been given the directory, not the actual socket file
+                socketFP = socketFP.child(
+                    ".s.PGSQL.{}".format(self.port if self.port else 5432)
+                )
+
+            if not socketFP.isSocket():
+                raise InternalDataStoreError(
+                    "No such socket file: {}".format(socketFP.path)
+                )
+
+            kwargs["host"] = None
+            kwargs["unix_sock"] = socketFP.path
+        else:
+            kwargs["host"] = self.host
+            kwargs["unix_sock"] = None
+
+        if self.port:
+            kwargs["port"] = self.port
+
+        if self.spawnedDBUser:
+            kwargs["user"] = self.spawnedDBUser
+        elif self.uid is not None:
+            kwargs["user"] = pwd.getpwuid(self.uid).pw_name
+
+        log.info("Connecting to Postgres with args={args}", args=kwargs)
+
+        return DBAPIConnector(postgres, postgresPreflight, **kwargs)
+
+
     def produceConnection(self, label="<unlabeled>", databaseName=None):
         """
         Produce a DB-API 2.0 connection pointed at this database.
@@ -348,7 +396,6 @@
         If the database has not been created and there is a dump file,
         then the dump file is imported.
         """
-
         if self.resetSchema:
             try:
                 createDatabaseCursor.execute(
@@ -370,10 +417,6 @@
             # otherwise execute schema
             executeSQL = True
             sqlToExecute = self.schema
-            if self.importFileName:
-                importFilePath = CachingFilePath(self.importFileName)
-                if importFilePath.exists():
-                    sqlToExecute = importFilePath.getContent()
 
         createDatabaseCursor.close()
         createDatabaseConn.close()
@@ -419,7 +462,6 @@
         """
         Start the database and initialize the subservice.
         """
-
         def createConnection():
             try:
                 createDatabaseConn = self.produceConnection(
@@ -427,16 +469,26 @@
                 )
             except postgres.DatabaseError as e:
                 log.error(
-                    "Unable to connect to database for schema creation: {error}",
+                    "Unable to connect to database for schema creation:"
+                    " {error}",
                     error=e
                 )
                 raise
+
             createDatabaseCursor = createDatabaseConn.cursor()
-            createDatabaseCursor.execute("commit")
+
+            if postgres.__name__ == "pg8000":
+                createDatabaseConn.realConnection.autocommit = True
+            elif postgres.__name__ == "pgdb":
+                createDatabaseCursor.execute("commit")
+            else:
+                raise InternalDataStoreError(
+                    "Unknown Postgres DBM module: {}".format(postgres)
+                )
+
             return createDatabaseConn, createDatabaseCursor
 
-        monitor = _PostgresMonitor(self)
-        pgCtl = self.pgCtl()
+        monitor = PostgresMonitor(self)
         # check consistency of initdb and postgres?
 
         options = []
@@ -446,7 +498,7 @@
         )
         if self.socketDir:
             options.append(
-                "-k {}"
+                "-c unix_socket_directories={}"
                 .format(shell_quote(self.socketDir.path))
             )
         if self.port:
@@ -477,23 +529,17 @@
         if self.testMode:
             options.append("-c log_statement=all")
 
-        log.warn(
-            "Requesting postgres start via {cmd} {opts}",
-            cmd=pgCtl, opts=options
-        )
+        args = [
+            self._pgCtl, "start",
+            "--log={}".format(self.logFile),
+            "--timeout=86400",  # Plenty of time for a long cluster upgrade
+            "-w",  # Wait for startup to complete
+            "-o", " ".join(options),  # Options passed to postgres
+        ]
+
+        log.info("Requesting postgres start via: {args}", args=args)
         self.reactor.spawnProcess(
-            monitor, pgCtl,
-            [
-                pgCtl,
-                "start",
-                "-l", self.logFile,
-                "-t 86400",  # Give plenty of time for a long cluster upgrade
-                "-w",
-                # XXX what are the quoting rules for '-o'?  do I need to repr()
-                # the path here?
-                "-o",
-                " ".join(options),
-            ],
+            monitor, self._pgCtl, args,
             env=self.env, path=self.workingDir.path,
             uid=self.uid, gid=self.gid,
         )
@@ -517,12 +563,12 @@
             We started postgres; we're responsible for stopping it later.
             Call pgCtl status to get the pid.
             """
-            log.warn("{cmd} exited", cmd=pgCtl)
+            log.info("{cmd} exited", cmd=self._pgCtl)
             self.shouldStopDatabase = True
             d = Deferred()
             statusMonitor = CapturingProcessProtocol(d, None)
             self.reactor.spawnProcess(
-                statusMonitor, pgCtl, [pgCtl, "status"],
+                statusMonitor, self._pgCtl, [self._pgCtl, "status"],
                 env=self.env, path=self.workingDir.path,
                 uid=self.uid, gid=self.gid,
             )
@@ -537,7 +583,7 @@
             d = Deferred()
             statusMonitor = CapturingProcessProtocol(d, None)
             self.reactor.spawnProcess(
-                statusMonitor, pgCtl, [pgCtl, "status"],
+                statusMonitor, self._pgCtl, [self._pgCtl, "status"],
                 env=self.env, path=self.workingDir.path,
                 uid=self.uid, gid=self.gid,
             )
@@ -548,7 +594,10 @@
             We can't start postgres or connect to a running instance.  Shut
             down.
             """
-            log.failure("Can't start or connect to postgres", f)
+            log.critical(
+                "Can't start or connect to postgres: {failure.value}",
+                failure=f
+            )
             self.deactivateDelayedShutdown()
             self.reactor.stop()
 
@@ -565,11 +614,10 @@
         env.update(PGDATA=clusterDir.path,
                    PGHOST=self.host,
                    PGUSER=self.spawnedDBUser)
-        initdb = self.initdb()
 
         if self.socketDir:
             if not self.socketDir.isdir():
-                log.warn("Creating {dir}", dir=self.socketDir.path)
+                log.info("Creating {dir}", dir=self.socketDir.path)
                 self.socketDir.createDirectory()
 
             if self.uid and self.gid:
@@ -578,11 +626,11 @@
             os.chmod(self.socketDir.path, 0770)
 
         if not self.dataStoreDirectory.isdir():
-            log.warn("Creating {dir}", dir=self.dataStoreDirectory.path)
+            log.info("Creating {dir}", dir=self.dataStoreDirectory.path)
             self.dataStoreDirectory.createDirectory()
 
         if not self.workingDir.isdir():
-            log.warn("Creating {dir}", dir=self.workingDir.path)
+            log.info("Creating {dir}", dir=self.workingDir.path)
             self.workingDir.createDirectory()
 
         if self.uid and self.gid:
@@ -591,11 +639,12 @@
 
         if not clusterDir.isdir():
             # No cluster directory, run initdb
-            log.warn("Running initdb for {dir}", dir=clusterDir.path)
+            log.info("Running initdb for {dir}", dir=clusterDir.path)
             dbInited = Deferred()
             self.reactor.spawnProcess(
                 CapturingProcessProtocol(dbInited, None),
-                initdb, [initdb, "-E", "UTF8", "-U", self.spawnedDBUser],
+                self._initdb,
+                [self._initdb, "-E", "UTF8", "-U", self.spawnedDBUser],
                 env=env, path=self.workingDir.path,
                 uid=self.uid, gid=self.gid,
             )
@@ -603,7 +652,7 @@
             def doCreate(result):
                 if result.find("FATAL:") != -1:
                     log.error(result)
-                    raise RuntimeError(
+                    raise InternalDataStoreError(
                         "Unable to initialize postgres database: {}"
                         .format(result)
                     )
@@ -612,7 +661,7 @@
             dbInited.addCallback(doCreate)
 
         else:
-            log.warn("Cluster already exists at {dir}", dir=clusterDir.path)
+            log.info("Cluster already exists at {dir}", dir=clusterDir.path)
             self.startDatabase()
 
 
@@ -633,12 +682,11 @@
             # If pg_ctl's startup wasn't successful, don't bother to stop the
             # database.  (This also happens in command-line tools.)
             if self.shouldStopDatabase:
-                monitor = _PostgresMonitor()
-                pgCtl = self.pgCtl()
+                monitor = PostgresMonitor()
                 # FIXME: why is this 'logfile' and not self.logfile?
                 self.reactor.spawnProcess(
-                    monitor, pgCtl,
-                    [pgCtl, "-l", "logfile", "stop"],
+                    monitor, self._pgCtl,
+                    [self._pgCtl, "-l", "logfile", "stop"],
                     env=self.env, path=self.workingDir.path,
                     uid=self.uid, gid=self.gid,
                 )

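The pg8000/pgdb dispatch added above exists because CREATE DATABASE cannot run inside a transaction block, and each DB-API driver has its own way of escaping one: pg8000 exposes an autocommit flag on the underlying connection, while pgdb needs an explicit COMMIT to close the implicit transaction it opens. A minimal standalone sketch of that dispatch, using stub objects in place of real driver connections (the `FakeConn`/`FakeCursor` names are illustrative, not CalendarServer code):

```python
class FakeCursor(object):
    """Stand-in for a DB-API cursor; records executed statements."""
    def __init__(self):
        self.executed = []

    def execute(self, sql):
        self.executed.append(sql)


class FakeConn(object):
    """Stand-in for a pg8000-style connection wrapper."""
    def __init__(self):
        self.realConnection = type("Real", (), {"autocommit": False})()

    def cursor(self):
        return FakeCursor()


def prepare_create_database_cursor(postgres_name, conn):
    """
    Put the connection into a state where CREATE DATABASE is legal,
    using the driver-appropriate mechanism.
    """
    cursor = conn.cursor()
    if postgres_name == "pg8000":
        # pg8000: flip autocommit on the real connection
        conn.realConnection.autocommit = True
    elif postgres_name == "pgdb":
        # pgdb: end the implicit transaction first
        cursor.execute("commit")
    else:
        raise RuntimeError(
            "Unknown Postgres DBM module: {}".format(postgres_name)
        )
    return cursor
```

With stubs, `"pg8000"` flips the autocommit flag without executing anything, `"pgdb"` issues the COMMIT, and any other module name raises, mirroring the three branches in the change.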
Modified: CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/base/datastore/test/test_subpostgres.py
===================================================================
--- CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/base/datastore/test/test_subpostgres.py	2015-03-05 19:47:06 UTC (rev 14511)
+++ CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/base/datastore/test/test_subpostgres.py	2015-03-05 20:20:10 UTC (rev 14512)
@@ -139,61 +139,3 @@
         cursor.execute("select * from test_dummy_table")
         values = cursor.fetchall()
         self.assertEquals(map(list, values), [["dummy"]])
-
-
-    @inlineCallbacks
-    def test_startService_withDumpFile(self):
-        """
-        Assuming a properly configured environment ($PATH points at an 'initdb'
-        and 'postgres', $PYTHONPATH includes pgdb), starting a
-        L{PostgresService} will start the service passed to it, after importing
-        an existing dump file.
-        """
-
-        test = self
-
-        class SimpleService1(Service):
-
-            instances = []
-            ready = Deferred()
-
-            def __init__(self, connectionFactory, storageService):
-                self.connection = connectionFactory()
-                test.addCleanup(self.connection.close)
-                self.instances.append(self)
-
-
-            def startService(self):
-                cursor = self.connection.cursor()
-                try:
-                    cursor.execute(
-                        "insert into import_test_table values ('value2')"
-                    )
-                except:
-                    self.ready.errback()
-                else:
-                    self.ready.callback(None)
-                finally:
-                    cursor.close()
-
-        # The SQL in importFile.sql will get executed, including the insertion
-        # of "value1"
-        importFileName = (
-            CachingFilePath(__file__).parent().child("importFile.sql").path
-        )
-        svc = PostgresService(
-            CachingFilePath("postgres_3.pgdb"),
-            SimpleService1,
-            "",
-            databaseName="dummy_db",
-            testMode=True,
-            importFileName=importFileName
-        )
-        svc.startService()
-        self.addCleanup(svc.stopService)
-        yield SimpleService1.ready
-        connection = SimpleService1.instances[0].connection
-        cursor = connection.cursor()
-        cursor.execute("select * from import_test_table")
-        values = cursor.fetchall()
-        self.assertEquals(map(list, values), [["value1"], ["value2"]])

Modified: CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/caldav/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/caldav/datastore/test/test_sql.py	2015-03-05 19:47:06 UTC (rev 14511)
+++ CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/caldav/datastore/test/test_sql.py	2015-03-05 20:20:10 UTC (rev 14512)
@@ -2223,7 +2223,37 @@
         yield self.commit()
 
 
+    @inlineCallbacks
+    def test_removeAfterRevisionCleanup(self):
+        """
+        Make sure L{Calendar}s can be renamed after revision cleanup
+        removes their revision table entry.
+        """
+        yield self.homeUnderTest(name="user01", create=True)
+        cal = yield self.calendarUnderTest(home="user01", name="calendar")
+        self.assertTrue(cal is not None)
+        yield self.commit()
 
+        # Remove the revision
+        cal = yield self.calendarUnderTest(home="user01", name="calendar")
+        yield cal.syncToken()
+        yield self.transactionUnderTest().deleteRevisionsBefore(cal._syncTokenRevision + 1)
+        yield self.commit()
+
+        # Rename the calendar
+        cal = yield self.calendarUnderTest(home="user01", name="calendar")
+        self.assertTrue(cal is not None)
+        yield cal.rename("calendar_renamed")
+        yield self.commit()
+
+        cal = yield self.calendarUnderTest(home="user01", name="calendar")
+        self.assertTrue(cal is None)
+        cal = yield self.calendarUnderTest(home="user01", name="calendar_renamed")
+        self.assertTrue(cal is not None)
+        yield self.commit()
+
+
+
 class SchedulingTests(CommonCommonTests, unittest.TestCase):
     """
     CalendarObject splitting tests

Modified: CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/carddav/datastore/sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/carddav/datastore/sql.py	2015-03-05 19:47:06 UTC (rev 14511)
+++ CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/carddav/datastore/sql.py	2015-03-05 20:20:10 UTC (rev 14512)
@@ -591,6 +591,14 @@
                     self._txn, resourceID=self._resourceID, name=name, id=id))
             if rows:
                 self._syncTokenRevision = rows[0][0]
+            else:
+                # Nothing was matched on the delete so insert a new row
+                self._syncTokenRevision = (
+                    yield self._completelyNewDeletedRevisionQuery.on(
+                        self._txn, homeID=self.ownerHome()._resourceID,
+                        resourceID=self._resourceID, name=name)
+                )[0][0]
+
         elif action == "update":
             rows = (
                 yield self._updateBumpTokenQuery.on(
@@ -598,9 +606,14 @@
             if rows:
                 self._syncTokenRevision = rows[0][0]
             else:
-                action = "insert"
+                # Nothing was matched on the update so insert a new row
+                self._syncTokenRevision = (
+                    yield self._completelyNewRevisionQuery.on(
+                        self._txn, homeID=self.ownerHome()._resourceID,
+                        resourceID=self._resourceID, name=name)
+                )[0][0]
 
-        if action == "insert":
+        elif action == "insert":
             # Note that an "insert" may happen for a resource that previously
             # existed and then was deleted. In that case an entry in the
             # REVISIONS table still exists so we have to detect that and do db

Modified: CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/carddav/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/carddav/datastore/test/test_sql.py	2015-03-05 19:47:06 UTC (rev 14511)
+++ CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/carddav/datastore/test/test_sql.py	2015-03-05 20:20:10 UTC (rev 14512)
@@ -917,3 +917,119 @@
         obj = yield self.addressbookObjectUnderTest(name="data1.ics", addressbook_name="addressbook")
         self.assertEqual(obj._dataversion, obj._currentDataVersion)
         yield self.commit()
+
+
+    @inlineCallbacks
+    def test_updateAfterRevisionCleanup(self):
+        """
+        Make sure L{AddressBookObject}s can be updated after revision cleanup
+        removes their revision table entry.
+        """
+        person = """BEGIN:VCARD
+VERSION:3.0
+N:Thompson;Default1;;;
+FN:Default1 Thompson
+EMAIL;type=INTERNET;type=WORK;type=pref:lthompson1 at example.com
+TEL;type=WORK;type=pref:1-555-555-5555
+TEL;type=CELL:1-444-444-4444
+item1.ADR;type=WORK;type=pref:;;1245 Test;Sesame Street;California;11111;USA
+item1.X-ABADR:us
+UID:uid-person
+X-ADDRESSBOOKSERVER-KIND:person
+END:VCARD
+"""
+        group = """BEGIN:VCARD
+VERSION:3.0
+N:Group;Fancy;;;
+FN:Fancy Group
+UID:uid-group
+X-ADDRESSBOOKSERVER-KIND:group
+X-ADDRESSBOOKSERVER-MEMBER:urn:uuid:uid-person
+END:VCARD
+"""
+        group_update = """BEGIN:VCARD
+VERSION:3.0
+N:Group2;Fancy;;;
+FN:Fancy Group2
+UID:uid-group
+X-ADDRESSBOOKSERVER-KIND:group
+X-ADDRESSBOOKSERVER-MEMBER:urn:uuid:uid-person
+END:VCARD
+"""
+
+        yield self.homeUnderTest()
+        adbk = yield self.addressbookUnderTest(name="addressbook")
+        yield adbk.createAddressBookObjectWithName("person.vcf", VCard.fromString(person))
+        yield adbk.createAddressBookObjectWithName("group.vcf", VCard.fromString(group))
+        yield self.commit()
+
+        # Remove the revision
+        adbk = yield self.addressbookUnderTest(name="addressbook")
+        yield adbk.syncToken()
+        yield self.transactionUnderTest().deleteRevisionsBefore(adbk._syncTokenRevision + 1)
+        yield self.commit()
+
+        # Update the object
+        obj = yield self.addressbookObjectUnderTest(name="group.vcf", addressbook_name="addressbook")
+        yield obj.setComponent(VCard.fromString(group_update))
+        yield self.commit()
+
+        obj = yield self.addressbookObjectUnderTest(name="group.vcf", addressbook_name="addressbook")
+        self.assertTrue(obj is not None)
+        obj = yield self.addressbookObjectUnderTest(name="person.vcf", addressbook_name="addressbook")
+        self.assertTrue(obj is not None)
+        yield self.commit()
+
+
+    @inlineCallbacks
+    def test_removeAfterRevisionCleanup(self):
+        """
+        Make sure L{AddressBookObject}s can be removed after revision cleanup
+        removes their revision table entry.
+        """
+        person = """BEGIN:VCARD
+VERSION:3.0
+N:Thompson;Default1;;;
+FN:Default1 Thompson
+EMAIL;type=INTERNET;type=WORK;type=pref:lthompson1 at example.com
+TEL;type=WORK;type=pref:1-555-555-5555
+TEL;type=CELL:1-444-444-4444
+item1.ADR;type=WORK;type=pref:;;1245 Test;Sesame Street;California;11111;USA
+item1.X-ABADR:us
+UID:uid-person
+X-ADDRESSBOOKSERVER-KIND:person
+END:VCARD
+"""
+        group = """BEGIN:VCARD
+VERSION:3.0
+N:Group;Fancy;;;
+FN:Fancy Group
+UID:uid-group
+X-ADDRESSBOOKSERVER-KIND:group
+X-ADDRESSBOOKSERVER-MEMBER:urn:uuid:uid-person
+END:VCARD
+"""
+
+        yield self.homeUnderTest()
+        adbk = yield self.addressbookUnderTest(name="addressbook")
+        yield adbk.createAddressBookObjectWithName("person.vcf", VCard.fromString(person))
+        yield adbk.createAddressBookObjectWithName("group.vcf", VCard.fromString(group))
+        yield self.commit()
+
+        # Remove the revision
+        adbk = yield self.addressbookUnderTest(name="addressbook")
+        yield adbk.syncToken()
+        yield self.transactionUnderTest().deleteRevisionsBefore(adbk._syncTokenRevision + 1)
+        yield self.commit()
+
+        # Remove the object
+        obj = yield self.addressbookObjectUnderTest(name="group.vcf", addressbook_name="addressbook")
+        self.assertTrue(obj is not None)
+        yield obj.remove()
+        yield self.commit()
+
+        obj = yield self.addressbookObjectUnderTest(name="group.vcf", addressbook_name="addressbook")
+        self.assertTrue(obj is None)
+        obj = yield self.addressbookObjectUnderTest(name="person.vcf", addressbook_name="addressbook")
+        self.assertTrue(obj is not None)
+        yield self.commit()

Modified: CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/common/datastore/sql_util.py
===================================================================
--- CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/common/datastore/sql_util.py	2015-03-05 19:47:06 UTC (rev 14511)
+++ CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/common/datastore/sql_util.py	2015-03-05 20:20:10 UTC (rev 14512)
@@ -216,9 +216,13 @@
 
     @inlineCallbacks
     def _renameSyncToken(self):
-        self._syncTokenRevision = (yield self._renameSyncTokenQuery.on(
-            self._txn, name=self._name, resourceID=self._resourceID))[0][0]
-        self._txn.bumpRevisionForObject(self)
+        rows = yield self._renameSyncTokenQuery.on(
+            self._txn, name=self._name, resourceID=self._resourceID)
+        if rows:
+            self._syncTokenRevision = rows[0][0]
+            self._txn.bumpRevisionForObject(self)
+        else:
+            yield self._initSyncToken()
 
 
     @classproperty
@@ -397,6 +401,21 @@
         )
 
 
+    @classproperty
+    def _completelyNewDeletedRevisionQuery(cls):
+        rev = cls._revisionsSchema
+        return Insert(
+            {
+                rev.HOME_RESOURCE_ID: Parameter("homeID"),
+                rev.RESOURCE_ID: Parameter("resourceID"),
+                rev.RESOURCE_NAME: Parameter("name"),
+                rev.REVISION: schema.REVISION_SEQ,
+                rev.DELETED: True
+            },
+            Return=rev.REVISION
+        )
+
+
     @inlineCallbacks
     def _changeRevision(self, action, name):
 
@@ -409,6 +428,13 @@
                     self._txn, resourceID=self._resourceID, name=name))
             if rows:
                 self._syncTokenRevision = rows[0][0]
+            else:
+                self._syncTokenRevision = (
+                    yield self._completelyNewDeletedRevisionQuery.on(
+                        self._txn, homeID=self.ownerHome()._resourceID,
+                        resourceID=self._resourceID, name=name)
+                )[0][0]
+
         elif action == "update":
             rows = (
                 yield self._updateBumpTokenQuery.on(
@@ -416,9 +442,13 @@
             if rows:
                 self._syncTokenRevision = rows[0][0]
             else:
-                action = "insert"
+                self._syncTokenRevision = (
+                    yield self._completelyNewRevisionQuery.on(
+                        self._txn, homeID=self.ownerHome()._resourceID,
+                        resourceID=self._resourceID, name=name)
+                )[0][0]
 
-        if action == "insert":
+        elif action == "insert":
             # Note that an "insert" may happen for a resource that previously
             # existed and then was deleted. In that case an entry in the
             # REVISIONS table still exists so we have to detect that and do db

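The update branch above no longer mutates `action` to `"insert"` when the UPDATE matches nothing; it inserts a brand-new revision row directly, covering the case where revision cleanup purged the row. A self-contained sqlite3 sketch of that bump-or-insert pattern (the schema and function name are illustrative, not the server's actual REVISION queries):

```python
import sqlite3

def bump_or_insert(conn, resource_id, name):
    """
    Try to bump the revision for (resource_id, name); if no row matched,
    e.g. because the entry was purged by revision cleanup, insert a fresh
    row instead of falling through to a separate "insert" code path.
    """
    cur = conn.execute(
        "UPDATE revisions SET revision = revision + 1 "
        "WHERE resource_id = ? AND name = ?",
        (resource_id, name),
    )
    if cur.rowcount == 0:
        # Nothing matched: create a completely new revision row
        conn.execute(
            "INSERT INTO revisions (resource_id, name, revision, deleted) "
            "VALUES (?, ?, 1, 0)",
            (resource_id, name),
        )
    row = conn.execute(
        "SELECT revision FROM revisions WHERE resource_id = ? AND name = ?",
        (resource_id, name),
    ).fetchone()
    return row[0]


conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE revisions (resource_id INTEGER, name TEXT, "
    "revision INTEGER, deleted INTEGER)"
)
```

The first call for a name inserts a row at revision 1; subsequent calls take the UPDATE path and bump it.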
Modified: CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/common/datastore/test/test_sql.py
===================================================================
--- CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/common/datastore/test/test_sql.py	2015-03-05 19:47:06 UTC (rev 14511)
+++ CalendarServer/branches/users/cdaboo/pod2pod-migration/txdav/common/datastore/test/test_sql.py	2015-03-05 20:20:10 UTC (rev 14512)
@@ -347,7 +347,7 @@
         token = yield homeChild.syncToken()
         yield homeChild._changeRevision("delete", "E")
         changed = yield homeChild.resourceNamesSinceToken(token)
-        self.assertEqual(changed, ([], [], [],))
+        self.assertEqual(changed, ([], ["E"], [],))
 
         yield txn.abort()
 
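The revised expectation (`([], ["E"], [])`) follows from the tombstone insert added in sql_util.py: a "delete" whose UPDATE matches no revision row now inserts a new row with DELETED set, so the name still shows up in the deleted list of a sync report. A small sqlite3 model of that behavior (illustrative schema and helpers, not the actual server code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE revisions (resource_id INTEGER, name TEXT, "
    "revision INTEGER, deleted INTEGER)"
)

def delete_revision(conn, resource_id, name, new_revision):
    """Mark a name deleted; insert a tombstone if no row exists to update."""
    cur = conn.execute(
        "UPDATE revisions SET revision = ?, deleted = 1 "
        "WHERE resource_id = ? AND name = ?",
        (new_revision, resource_id, name),
    )
    if cur.rowcount == 0:
        # Revision cleanup removed the row: record the deletion anyway
        conn.execute(
            "INSERT INTO revisions (resource_id, name, revision, deleted) "
            "VALUES (?, ?, ?, 1)",
            (resource_id, name, new_revision),
        )

def names_since(conn, resource_id, token):
    """Return (changed, deleted) name lists with revision > token."""
    changed, deleted = [], []
    for name, was_deleted in conn.execute(
        "SELECT name, deleted FROM revisions "
        "WHERE resource_id = ? AND revision > ?",
        (resource_id, token),
    ):
        (deleted if was_deleted else changed).append(name)
    return changed, deleted
```

Deleting a name that has no revision row still leaves a tombstone, so a sync against the old token reports it as deleted rather than silently dropping it.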