[CalendarServer-changes] [11052] CalendarServer/branches/users/glyph/sharedgroups-2
source_changes@macosforge.org
Tue Apr 16 15:19:46 PDT 2013
Revision: 11052
http://trac.calendarserver.org//changeset/11052
Author: glyph@apple.com
Date: 2013-04-16 15:19:46 -0700 (Tue, 16 Apr 2013)
Log Message:
-----------
Up to 10990.
Modified Paths:
--------------
CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tap/caldav.py
CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tap/util.py
CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/cmdline.py
CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/gateway.py
CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/principals.py
CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/purge.py
CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/test/principals/caldavd.plist
CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/test/test_principals.py
CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/util.py
CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/webadmin/resource.py
CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/webadmin/test/test_resource.py
CalendarServer/branches/users/glyph/sharedgroups-2/conf/auth/augments-test.xml
CalendarServer/branches/users/glyph/sharedgroups-2/conf/auth/resources-test.xml
CalendarServer/branches/users/glyph/sharedgroups-2/support/build.sh
CalendarServer/branches/users/glyph/sharedgroups-2/twext/enterprise/queue.py
CalendarServer/branches/users/glyph/sharedgroups-2/twext/enterprise/test/test_queue.py
CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/directory/calendaruserproxy.py
CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/directory/directory.py
CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/scheduling/imip/test/test_inbound.py
CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/test/util.py
CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/current-oracle-dialect.sql
CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/current.sql
Added Paths:
-----------
CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/old/oracle-dialect/v17.sql
CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/old/postgres-dialect/v17.sql
CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_17_to_18.sql
CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_17_to_18.sql
Property Changed:
----------------
CalendarServer/branches/users/glyph/sharedgroups-2/
CalendarServer/branches/users/glyph/sharedgroups-2/conf/auth/resources-test.xml
Property changes on: CalendarServer/branches/users/glyph/sharedgroups-2
___________________________________________________________________
Modified: svn:mergeinfo
- /CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/release/CalendarServer-4.3-dev:10180-10190,10192
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/managed-attachments:9985-10145
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/digest-auth-redux:10624-10635
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api:10048-10073
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/queue-locking-and-timing:10204-10289
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/unshare-when-access-revoked:10562-10595
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/sagen/testing:10827-10851,10853-10855
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:9885-10980
+ /CalendarServer/branches/config-separation:4379-4443
/CalendarServer/branches/egg-info-351:4589-4625
/CalendarServer/branches/generic-sqlstore:6167-6191
/CalendarServer/branches/new-store:5594-5934
/CalendarServer/branches/new-store-no-caldavfile:5911-5935
/CalendarServer/branches/new-store-no-caldavfile-2:5936-5981
/CalendarServer/branches/release/CalendarServer-4.3-dev:10180-10190,10192
/CalendarServer/branches/users/cdaboo/batchupload-6699:6700-7198
/CalendarServer/branches/users/cdaboo/cached-subscription-calendars-5692:5693-5702
/CalendarServer/branches/users/cdaboo/component-set-fixes:8130-8346
/CalendarServer/branches/users/cdaboo/directory-cache-on-demand-3627:3628-3644
/CalendarServer/branches/users/cdaboo/implicituidrace:8137-8141
/CalendarServer/branches/users/cdaboo/ischedule-dkim:9747-9979
/CalendarServer/branches/users/cdaboo/managed-attachments:9985-10145
/CalendarServer/branches/users/cdaboo/more-sharing-5591:5592-5601
/CalendarServer/branches/users/cdaboo/partition-4464:4465-4957
/CalendarServer/branches/users/cdaboo/pods:7297-7377
/CalendarServer/branches/users/cdaboo/pycalendar:7085-7206
/CalendarServer/branches/users/cdaboo/pycard:7227-7237
/CalendarServer/branches/users/cdaboo/queued-attendee-refreshes:7740-8287
/CalendarServer/branches/users/cdaboo/relative-config-paths-5070:5071-5105
/CalendarServer/branches/users/cdaboo/shared-calendars-5187:5188-5440
/CalendarServer/branches/users/cdaboo/timezones:7443-7699
/CalendarServer/branches/users/cdaboo/txn-debugging:8730-8743
/CalendarServer/branches/users/glyph/always-abort-txn-on-error:9958-9969
/CalendarServer/branches/users/glyph/case-insensitive-uid:8772-8805
/CalendarServer/branches/users/glyph/conn-limit:6574-6577
/CalendarServer/branches/users/glyph/contacts-server-merge:4971-5080
/CalendarServer/branches/users/glyph/dalify:6932-7023
/CalendarServer/branches/users/glyph/db-reconnect:6824-6876
/CalendarServer/branches/users/glyph/deploybuild:7563-7572
/CalendarServer/branches/users/glyph/digest-auth-redux:10624-10635
/CalendarServer/branches/users/glyph/disable-quota:7718-7727
/CalendarServer/branches/users/glyph/dont-start-postgres:6592-6614
/CalendarServer/branches/users/glyph/imip-and-admin-html:7866-7984
/CalendarServer/branches/users/glyph/ipv6-client:9054-9105
/CalendarServer/branches/users/glyph/linux-tests:6893-6900
/CalendarServer/branches/users/glyph/migrate-merge:8690-8713
/CalendarServer/branches/users/glyph/misc-portability-fixes:7365-7374
/CalendarServer/branches/users/glyph/more-deferreds-6:6322-6368
/CalendarServer/branches/users/glyph/more-deferreds-7:6369-6445
/CalendarServer/branches/users/glyph/multiget-delete:8321-8330
/CalendarServer/branches/users/glyph/new-export:7444-7485
/CalendarServer/branches/users/glyph/one-home-list-api:10048-10073
/CalendarServer/branches/users/glyph/oracle:7106-7155
/CalendarServer/branches/users/glyph/oracle-nulls:7340-7351
/CalendarServer/branches/users/glyph/other-html:8062-8091
/CalendarServer/branches/users/glyph/parallel-sim:8240-8251
/CalendarServer/branches/users/glyph/parallel-upgrade:8376-8400
/CalendarServer/branches/users/glyph/parallel-upgrade_to_1:8571-8583
/CalendarServer/branches/users/glyph/q:9560-9688
/CalendarServer/branches/users/glyph/queue-locking-and-timing:10204-10289
/CalendarServer/branches/users/glyph/quota:7604-7637
/CalendarServer/branches/users/glyph/sendfdport:5388-5424
/CalendarServer/branches/users/glyph/shared-pool-fixes:8436-8443
/CalendarServer/branches/users/glyph/shared-pool-take2:8155-8174
/CalendarServer/branches/users/glyph/sharedpool:6490-6550
/CalendarServer/branches/users/glyph/sharing-api:9192-9205
/CalendarServer/branches/users/glyph/skip-lonely-vtimezones:8524-8535
/CalendarServer/branches/users/glyph/sql-store:5929-6073
/CalendarServer/branches/users/glyph/subtransactions:7248-7258
/CalendarServer/branches/users/glyph/table-alias:8651-8664
/CalendarServer/branches/users/glyph/uidexport:7673-7676
/CalendarServer/branches/users/glyph/unshare-when-access-revoked:10562-10595
/CalendarServer/branches/users/glyph/use-system-twisted:5084-5149
/CalendarServer/branches/users/glyph/uuid-normalize:9268-9296
/CalendarServer/branches/users/glyph/xattrs-from-files:7757-7769
/CalendarServer/branches/users/sagen/applepush:8126-8184
/CalendarServer/branches/users/sagen/inboxitems:7380-7381
/CalendarServer/branches/users/sagen/locations-resources:5032-5051
/CalendarServer/branches/users/sagen/locations-resources-2:5052-5061
/CalendarServer/branches/users/sagen/purge_old_events:6735-6746
/CalendarServer/branches/users/sagen/resource-delegates-4038:4040-4067
/CalendarServer/branches/users/sagen/resource-delegates-4066:4068-4075
/CalendarServer/branches/users/sagen/resources-2:5084-5093
/CalendarServer/branches/users/sagen/testing:10827-10851,10853-10855
/CalendarServer/branches/users/wsanchez/transations:5515-5593
/CalendarServer/trunk:9885-10990
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tap/caldav.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tap/caldav.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tap/caldav.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -78,6 +78,7 @@
from twistedcaldav.upgrade import UpgradeFileSystemFormatService, PostDBImportService
from calendarserver.tap.util import pgServiceFromConfig, getDBPool, MemoryLimitService
+from calendarserver.tap.util import directoryFromConfig
from twext.enterprise.ienterprise import POSTGRES_DIALECT
from twext.enterprise.ienterprise import ORACLE_DIALECT
@@ -1054,9 +1055,10 @@
if observers:
pushDistributor = PushDistributor(observers)
+ directory = result.rootResource.getDirectory()
+
# Optionally set up mail retrieval
if config.Scheduling.iMIP.Enabled:
- directory = result.rootResource.getDirectory()
mailRetriever = MailRetriever(store, directory,
config.Scheduling.iMIP.Receiving)
mailRetriever.setServiceParent(result)
@@ -1445,7 +1447,41 @@
spawner.setServiceParent(multi)
if config.UseMetaFD:
cl.setServiceParent(multi)
+
+ directory = directoryFromConfig(config)
+ rootResource = getRootResource(config, store, [])
+
+ # Optionally set up mail retrieval
+ if config.Scheduling.iMIP.Enabled:
+ mailRetriever = MailRetriever(store, directory,
+ config.Scheduling.iMIP.Receiving)
+ mailRetriever.setServiceParent(multi)
+ else:
+ mailRetriever = None
+
+ # Optionally set up group cacher
+ if config.GroupCaching.Enabled:
+ groupCacher = GroupMembershipCacheUpdater(
+ calendaruserproxy.ProxyDBService,
+ directory,
+ config.GroupCaching.UpdateSeconds,
+ config.GroupCaching.ExpireSeconds,
+ namespace=config.GroupCaching.MemcachedPool,
+ useExternalProxies=config.GroupCaching.UseExternalProxies
+ )
+ else:
+ groupCacher = None
+
+ def decorateTransaction(txn):
+ txn._pushDistributor = None
+ txn._rootResource = rootResource
+ txn._mailRetriever = mailRetriever
+ txn._groupCacher = groupCacher
+
+ store.callWithNewTransactions(decorateTransaction)
+
return multi
+
ssvc = self.storageService(spawnerSvcCreator, uid, gid)
ssvc.setServiceParent(s)
return s
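[Editorial note on the caldav.py hunk above: the spawner service now decorates every new store transaction with shared context. A minimal stand-alone sketch of that `callWithNewTransactions` pattern, using stand-in `FakeStore`/`FakeTxn` classes since the real twext store API is not shown in this diff:]

```python
# Sketch of the decorateTransaction pattern from the caldav.py hunk.
# FakeStore and FakeTxn are invented stand-ins for the twext data store.

class FakeTxn(object):
    pass

class FakeStore(object):
    def __init__(self):
        self._decorator = None

    def callWithNewTransactions(self, decorator):
        # Register a callback invoked for every transaction created later.
        self._decorator = decorator

    def newTransaction(self):
        txn = FakeTxn()
        if self._decorator is not None:
            self._decorator(txn)
        return txn

store = FakeStore()

def decorateTransaction(txn):
    # Same shape as the patch: attach shared services to each transaction.
    txn._pushDistributor = None
    txn._rootResource = "rootResource"
    txn._mailRetriever = None
    txn._groupCacher = None

store.callWithNewTransactions(decorateTransaction)
txn = store.newTransaction()
```

[Every transaction the store hands out now carries the root resource, mail retriever, and group cacher, so worker code can reach them without global state.]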
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tap/util.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tap/util.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tap/util.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -650,6 +650,7 @@
config.WebCalendarRoot,
root,
directory,
+ newStore,
principalCollections=(principalCollection,),
)
root.putChild("admin", webAdmin)
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/cmdline.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/cmdline.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/cmdline.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -24,9 +24,14 @@
from twext.python.log import StandardIOObserver
from twistedcaldav.config import ConfigurationError
+from twisted.internet.defer import inlineCallbacks
import os
import sys
+from calendarserver.tap.util import getRootResource
+from twisted.application.service import Service
+from errno import ENOENT, EACCES
+from twext.enterprise.queue import NonPerformingQueuer
# TODO: direct unit tests for these functions.
@@ -85,6 +90,10 @@
autoDisableMemcached(config)
maker = serviceMaker()
+
+ # Only perform post-import duties if someone has explicitly said to
+ maker.doPostImport = getattr(maker, "doPostImport", False)
+
options = CalDAVOptions
service = maker.makeService(options)
@@ -98,3 +107,49 @@
return
reactor.run()
+
+
+
+class WorkerService(Service):
+
+ def __init__(self, store):
+ self._store = store
+ # Work can be queued but will not be performed by the command line tool
+ store.queuer = NonPerformingQueuer()
+
+
+ def rootResource(self):
+ try:
+ from twistedcaldav.config import config
+ rootResource = getRootResource(config, self._store)
+ except OSError, e:
+ if e.errno == ENOENT:
+ # Trying to re-write resources.xml but its parent directory does
+ # not exist. The server's never been started, so we're missing
+ # state required to do any work.
+ raise ConfigurationError(
+ "It appears that the server has never been started.\n"
+ "Please start it at least once before running this tool.")
+ elif e.errno == EACCES:
+ # Trying to re-write resources.xml but it is not writable by the
+ # current user. This most likely means we're in a system
+ # configuration and the user doesn't have sufficient privileges
+ # to do the other things the tool might need to do either.
+ raise ConfigurationError("You must run this tool as root.")
+ else:
+ raise
+ return rootResource
+
+ @inlineCallbacks
+ def startService(self):
+ from twisted.internet import reactor
+ try:
+ yield self.doWork()
+ except ConfigurationError, ce:
+ sys.stderr.write("Error: %s\n" % (str(ce),))
+ except Exception, e:
+ sys.stderr.write("Error: %s\n" % (e,))
+ raise
+ finally:
+ reactor.stop()
+
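[Editorial note on the cmdline.py hunk above: the new `WorkerService` gives command-line tools a shared lifecycle — install a non-performing queuer so work can be enqueued but never run, do the tool's work in `startService`, then stop the reactor. A rough sketch of the subclassing pattern, with the reactor and store mocked out and `ExampleTool` an invented subclass:]

```python
# Illustrative sketch of the WorkerService subclass pattern from cmdline.py.
# The Twisted reactor, error handling, and real store are omitted.

class NonPerformingQueuer(object):
    """Accepts queued work but never performs it (mirrors twext.enterprise.queue)."""
    def enqueueWork(self, txn, workItem):
        pass  # deliberately a no-op inside command-line tools

class WorkerService(object):
    def __init__(self, store):
        self._store = store
        # Work can be queued but will not be performed by the tool.
        store.queuer = NonPerformingQueuer()

    def startService(self):
        # Real code wraps this in ConfigurationError handling and
        # stops the reactor in a finally block.
        return self.doWork()

class ExampleTool(WorkerService):  # hypothetical subclass
    def doWork(self):
        return "did work with %r" % (self._store,)

class FakeStore(object):
    pass

tool = ExampleTool(FakeStore())
result = tool.startService()
```

[Tools such as principals.py (below) subclass this and implement only `doWork()`.]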
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/gateway.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/gateway.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/gateway.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -28,10 +28,11 @@
from twistedcaldav.directory.directory import DirectoryError
from txdav.xml import element as davxml
-from calendarserver.tools.principals import (
+from calendarserver.tools.util import (
principalForPrincipalID, proxySubprincipal, addProxy, removeProxy,
- getProxies, setProxies, ProxyError, ProxyWarning, updateRecord
+ ProxyError, ProxyWarning
)
+from calendarserver.tools.principals import getProxies, setProxies, updateRecord
from calendarserver.tools.purge import WorkerService, PurgeOldEventsService, DEFAULT_BATCH_SIZE, DEFAULT_RETAIN_DAYS
from calendarserver.tools.cmdline import utilityMain
@@ -212,7 +213,7 @@
readProxies = command.get("ReadProxies", None)
writeProxies = command.get("WriteProxies", None)
principal = principalForPrincipalID(record.guid, directory=self.dir)
- (yield setProxies(principal, readProxies, writeProxies, directory=self.dir))
+ (yield setProxies(self.store, principal, readProxies, writeProxies, directory=self.dir))
respondWithRecordsOfType(self.dir, command, "locations")
@@ -260,7 +261,7 @@
readProxies = command.get("ReadProxies", None)
writeProxies = command.get("WriteProxies", None)
principal = principalForPrincipalID(record.guid, directory=self.dir)
- (yield setProxies(principal, readProxies, writeProxies, directory=self.dir))
+ (yield setProxies(self.store, principal, readProxies, writeProxies, directory=self.dir))
yield self.command_getLocationAttributes(command)
@@ -300,7 +301,7 @@
readProxies = command.get("ReadProxies", None)
writeProxies = command.get("WriteProxies", None)
principal = principalForPrincipalID(record.guid, directory=self.dir)
- (yield setProxies(principal, readProxies, writeProxies, directory=self.dir))
+ (yield setProxies(self.store, principal, readProxies, writeProxies, directory=self.dir))
respondWithRecordsOfType(self.dir, command, "resources")
@@ -328,7 +329,7 @@
readProxies = command.get("ReadProxies", None)
writeProxies = command.get("WriteProxies", None)
principal = principalForPrincipalID(record.guid, directory=self.dir)
- (yield setProxies(principal, readProxies, writeProxies, directory=self.dir))
+ (yield setProxies(self.store, principal, readProxies, writeProxies, directory=self.dir))
yield self.command_getResourceAttributes(command)
@@ -370,7 +371,7 @@
respondWithError("Proxy not found: %s" % (command['Proxy'],))
return
try:
- (yield addProxy(principal, "write", proxy))
+ (yield addProxy(self.root, self.dir, self.store, principal, "write", proxy))
except ProxyError, e:
respondWithError(str(e))
return
@@ -390,7 +391,7 @@
respondWithError("Proxy not found: %s" % (command['Proxy'],))
return
try:
- (yield removeProxy(principal, proxy, proxyTypes=("write",)))
+ (yield removeProxy(self.root, self.dir, self.store, principal, proxy, proxyTypes=("write",)))
except ProxyError, e:
respondWithError(str(e))
return
@@ -419,7 +420,7 @@
respondWithError("Proxy not found: %s" % (command['Proxy'],))
return
try:
- (yield addProxy(principal, "read", proxy))
+ (yield addProxy(self.root, self.dir, self.store, principal, "read", proxy))
except ProxyError, e:
respondWithError(str(e))
return
@@ -439,7 +440,7 @@
respondWithError("Proxy not found: %s" % (command['Proxy'],))
return
try:
- (yield removeProxy(principal, proxy, proxyTypes=("read",)))
+ (yield removeProxy(self.root, self.dir, self.store, principal, proxy, proxyTypes=("read",)))
except ProxyError, e:
respondWithError(str(e))
return
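[Editorial note on the gateway.py hunks above: every change is the same refactor — proxy helpers now receive the root resource, directory, and store as explicit arguments (`self.root, self.dir, self.store, ...`) instead of reading module-level `config.directory`. In miniature, with invented helper names:]

```python
# Miniature before/after of the dependency-injection change in gateway.py.
# addProxy_old / addProxy_new are invented names for illustration.

config_directory = "global directory"  # stand-in for config.directory

# Before: the helper reaches into module-level state.
def addProxy_old(principal, proxyType, proxy):
    return (config_directory, principal, proxyType, proxy)

# After: callers pass every dependency explicitly.
def addProxy_new(root, directory, store, principal, proxyType, proxy):
    return (directory, principal, proxyType, proxy)

old = addProxy_old("p1", "write", "p2")
new = addProxy_new("root", "dir", "store", "p1", "write", "p2")
```

[Passing the store through is what lets these helpers run inside the new WorkerService tools, where no global config-backed directory is set up.]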
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/principals.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/principals.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/principals.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -20,33 +20,29 @@
import sys
import os
import operator
-import signal
from getopt import getopt, GetoptError
from uuid import UUID
-from pwd import getpwnam
-from grp import getgrnam
-from twisted.python.util import switchUID
from twisted.internet import reactor
-from twisted.internet.defer import inlineCallbacks, returnValue
+from twisted.internet.defer import inlineCallbacks, returnValue, succeed
from txdav.xml import element as davxml
-from twext.python.log import clearLogLevels
-from twext.python.log import StandardIOObserver
-
from txdav.xml.base import decodeXMLName, encodeXMLName
-from twistedcaldav.config import config, ConfigurationError
+from twistedcaldav.config import config
from twistedcaldav.directory.directory import UnknownRecordTypeError, DirectoryError
+from twistedcaldav.directory.directory import scheduleNextGroupCachingUpdate
-from calendarserver.tools.util import loadConfig, getDirectory, setupMemcached, booleanArgument, checkDirectory
+from calendarserver.tools.util import (
+ booleanArgument, proxySubprincipal, action_addProxyPrincipal,
+ principalForPrincipalID, prettyPrincipal, ProxyError,
+ action_removeProxyPrincipal
+)
from twistedcaldav.directory.augment import allowedAutoScheduleModes
-__all__ = [
- "principalForPrincipalID", "proxySubprincipal", "addProxy", "removeProxy",
- "ProxyError", "ProxyWarning", "updateRecord"
-]
+from calendarserver.tools.cmdline import utilityMain, WorkerService
+
def usage(e=None):
if e:
if isinstance(e, UnknownRecordTypeError):
@@ -99,6 +95,27 @@
else:
sys.exit(0)
+
+class PrincipalService(WorkerService):
+ """
+ Executes principals-related functions in a context which has access to the store
+ """
+
+ function = None
+ params = []
+
+ @inlineCallbacks
+ def doWork(self):
+ """
+ Calls the function that's been assigned to "function" and passes the root
+ resource, directory, store, and whatever has been assigned to "params".
+ """
+ if self.function is not None:
+ rootResource = self.rootResource()
+ directory = rootResource.getDirectory()
+ yield self.function(rootResource, directory, self._store, *self.params)
+
+
def main():
try:
(optargs, args) = getopt(
@@ -243,54 +260,7 @@
else:
raise NotImplementedError(opt)
- #
- # Get configuration
- #
- try:
- loadConfig(configFileName)
- # Do this first, because modifying the config object will cause
- # some logging activity at whatever log level the plist says
- clearLogLevels()
-
-
- config.DefaultLogLevel = "debug" if verbose else "error"
-
- #
- # Send logging output to stdout
- #
- observer = StandardIOObserver()
- observer.start()
-
- # Create the DataRoot directory before shedding privileges
- if config.DataRoot.startswith(config.ServerRoot + os.sep):
- checkDirectory(
- config.DataRoot,
- "Data root",
- access=os.W_OK,
- create=(0750, config.UserName, config.GroupName),
- )
-
- # Shed privileges
- if config.UserName and config.GroupName and os.getuid() == 0:
- uid = getpwnam(config.UserName).pw_uid
- gid = getgrnam(config.GroupName).gr_gid
- switchUID(uid, uid, gid)
-
- os.umask(config.umask)
-
- # Configure memcached client settings prior to setting up resource
- # hierarchy (in getDirectory)
- setupMemcached(config)
-
- try:
- config.directory = getDirectory()
- except DirectoryError, e:
- abort(e)
-
- except ConfigurationError, e:
- abort(e)
-
#
# List principals
#
@@ -298,10 +268,9 @@
if args:
usage("Too many arguments")
- for recordType in config.directory.recordTypes():
- print(recordType)
+ function = runListPrincipalTypes
+ params = ()
- return
elif addType:
@@ -322,7 +291,8 @@
else:
shortNames = ()
- params = (runAddPrincipal, addType, guid, shortNames, fullName)
+ function = runAddPrincipal
+ params = (addType, guid, shortNames, fullName)
elif listPrincipals:
@@ -336,19 +306,13 @@
if args:
usage("Too many arguments")
- try:
- records = list(config.directory.listRecords(listPrincipals))
- if records:
- printRecordList(records)
- else:
- print("No records of type %s" % (listPrincipals,))
- except UnknownRecordTypeError, e:
- usage(e)
+ function = runListPrincipals
+ params = (listPrincipals,)
- return
elif searchPrincipals:
- params = (runSearch, searchPrincipals)
+ function = runSearch
+ params = (searchPrincipals,)
else:
#
@@ -364,178 +328,115 @@
except ValueError, e:
abort(e)
- params = (runPrincipalActions, args, principalActions)
+ function = runPrincipalActions
+ params = (args, principalActions)
- #
- # Start the reactor
- #
- reactor.callLater(0, *params)
- reactor.run()
+ PrincipalService.function = function
+ PrincipalService.params = params
+ utilityMain(configFileName, PrincipalService, verbose=verbose)
-@inlineCallbacks
-def runPrincipalActions(principalIDs, actions):
- try:
- for principalID in principalIDs:
- # Resolve the given principal IDs to principals
- try:
- principal = principalForPrincipalID(principalID)
- except ValueError:
- principal = None
+def runListPrincipalTypes(service, rootResource, directory, store):
+ for recordType in directory.recordTypes():
+ print(recordType)
+ return succeed(None)
- if principal is None:
- sys.stderr.write("Invalid principal ID: %s\n" % (principalID,))
- continue
- # Performs requested actions
- for action in actions:
- (yield action[0](principal, *action[1:]))
- print("")
-
- finally:
- #
- # Stop the reactor
- #
- reactor.stop()
-
-@inlineCallbacks
-def runSearch(searchTerm):
-
+def runListPrincipals(service, rootResource, directory, store, listPrincipals):
try:
- fields = []
- for fieldName in ("fullName", "firstName", "lastName", "emailAddresses"):
- fields.append((fieldName, searchTerm, True, "contains"))
-
- records = list((yield config.directory.recordsMatchingTokens(searchTerm.strip().split())))
+ records = list(directory.listRecords(listPrincipals))
if records:
- records.sort(key=operator.attrgetter('fullName'))
- print("%d matches found:" % (len(records),))
- for record in records:
- print("\n%s (%s)" % (record.fullName,
- { "users" : "User",
- "groups" : "Group",
- "locations" : "Place",
- "resources" : "Resource",
- }.get(record.recordType),
- ))
- print(" GUID: %s" % (record.guid,))
- print(" Record name(s): %s" % (", ".join(record.shortNames),))
- if record.authIDs:
- print(" Auth ID(s): %s" % (", ".join(record.authIDs),))
- if record.emailAddresses:
- print(" Email(s): %s" % (", ".join(record.emailAddresses),))
+ printRecordList(records)
else:
- print("No matches found")
+ print("No records of type %s" % (listPrincipals,))
+ except UnknownRecordTypeError, e:
+ usage(e)
+ return succeed(None)
- print("")
- finally:
- #
- # Stop the reactor
- #
- reactor.stop()
-
@inlineCallbacks
-def runAddPrincipal(addType, guid, shortNames, fullName):
- try:
+def runPrincipalActions(service, rootResource, directory, store, principalIDs,
+ actions):
+ for principalID in principalIDs:
+ # Resolve the given principal IDs to principals
try:
- yield updateRecord(True, config.directory, addType, guid=guid,
- shortNames=shortNames, fullName=fullName)
- print("Added '%s'" % (fullName,))
- except DirectoryError, e:
- print(e)
+ principal = principalForPrincipalID(principalID, directory=directory)
+ except ValueError:
+ principal = None
- finally:
- #
- # Stop the reactor
- #
- reactor.stop()
+ if principal is None:
+ sys.stderr.write("Invalid principal ID: %s\n" % (principalID,))
+ continue
+ # Performs requested actions
+ for action in actions:
+ (yield action[0](rootResource, directory, store, principal,
+ *action[1:]))
+ print("")
-def principalForPrincipalID(principalID, checkOnly=False, directory=None):
-
- # Allow a directory parameter to be passed in, but default to config.directory
- # But config.directory isn't set right away, so only use it when we're doing more
- # than checking.
- if not checkOnly and not directory:
- directory = config.directory
- if principalID.startswith("/"):
- segments = principalID.strip("/").split("/")
- if (len(segments) == 3 and
- segments[0] == "principals" and segments[1] == "__uids__"):
- uid = segments[2]
- else:
- raise ValueError("Can't resolve all paths yet")
+@inlineCallbacks
+def runSearch(service, rootResource, directory, store, searchTerm):
- if checkOnly:
- return None
+ fields = []
+ for fieldName in ("fullName", "firstName", "lastName", "emailAddresses"):
+ fields.append((fieldName, searchTerm, True, "contains"))
- return directory.principalCollection.principalForUID(uid)
+ records = list((yield directory.recordsMatchingTokens(searchTerm.strip().split())))
+ if records:
+ records.sort(key=operator.attrgetter('fullName'))
+ print("%d matches found:" % (len(records),))
+ for record in records:
+ print("\n%s (%s)" % (record.fullName,
+ { "users" : "User",
+ "groups" : "Group",
+ "locations" : "Place",
+ "resources" : "Resource",
+ }.get(record.recordType),
+ ))
+ print(" GUID: %s" % (record.guid,))
+ print(" Record name(s): %s" % (", ".join(record.shortNames),))
+ if record.authIDs:
+ print(" Auth ID(s): %s" % (", ".join(record.authIDs),))
+ if record.emailAddresses:
+ print(" Email(s): %s" % (", ".join(record.emailAddresses),))
+ else:
+ print("No matches found")
+ print("")
- if principalID.startswith("("):
- try:
- i = principalID.index(")")
- if checkOnly:
- return None
-
- recordType = principalID[1:i]
- shortName = principalID[i+1:]
-
- if not recordType or not shortName or "(" in recordType:
- raise ValueError()
-
- return directory.principalCollection.principalForShortName(recordType, shortName)
-
- except ValueError:
- pass
-
- if ":" in principalID:
- if checkOnly:
- return None
-
- recordType, shortName = principalID.split(":", 1)
-
- return directory.principalCollection.principalForShortName(recordType, shortName)
-
+@inlineCallbacks
+def runAddPrincipal(service, rootResource, directory, store, addType, guid,
+ shortNames, fullName):
try:
- UUID(principalID)
+ yield updateRecord(True, directory, addType, guid=guid,
+ shortNames=shortNames, fullName=fullName)
+ print("Added '%s'" % (fullName,))
+ except DirectoryError, e:
+ print(e)
- if checkOnly:
- return None
- x = directory.principalCollection.principalForUID(principalID)
- return x
- except ValueError:
- pass
-
- raise ValueError("Invalid principal identifier: %s" % (principalID,))
-
-def proxySubprincipal(principal, proxyType):
- return principal.getChild("calendar-proxy-" + proxyType)
-
-def action_removePrincipal(principal):
+def action_removePrincipal(rootResource, directory, store, principal):
record = principal.record
fullName = record.fullName
shortName = record.shortNames[0]
guid = record.guid
- config.directory.destroyRecord(record.recordType, guid=guid)
+ directory.destroyRecord(record.recordType, guid=guid)
print("Removed '%s' %s %s" % (fullName, shortName, guid))
@inlineCallbacks
-def action_readProperty(resource, qname):
+def action_readProperty(rootResource, directory, store, resource, qname):
property = (yield resource.readProperty(qname, None))
print("%r on %s:" % (encodeXMLName(*qname), resource))
print("")
print(property.toxml())
@inlineCallbacks
-def action_listProxies(principal, *proxyTypes):
+def action_listProxies(rootResource, directory, store, principal, *proxyTypes):
for proxyType in proxyTypes:
subPrincipal = proxySubprincipal(principal, proxyType)
if subPrincipal is None:
@@ -553,7 +454,7 @@
records = []
for member in membersProperty.children:
proxyPrincipal = principalForPrincipalID(str(member),
- directory=config.directory)
+ directory=directory)
records.append(proxyPrincipal.record)
printRecordList(records)
@@ -563,59 +464,21 @@
prettyPrincipal(principal)))
@inlineCallbacks
-def action_addProxy(principal, proxyType, *proxyIDs):
+def action_addProxy(rootResource, directory, store, principal, proxyType, *proxyIDs):
for proxyID in proxyIDs:
- proxyPrincipal = principalForPrincipalID(proxyID)
+ proxyPrincipal = principalForPrincipalID(proxyID, directory=directory)
if proxyPrincipal is None:
print("Invalid principal ID: %s" % (proxyID,))
else:
- (yield action_addProxyPrincipal(principal, proxyType, proxyPrincipal))
+ (yield action_addProxyPrincipal(rootResource, directory, store,
+ principal, proxyType, proxyPrincipal))
-@inlineCallbacks
-def action_addProxyPrincipal(principal, proxyType, proxyPrincipal):
- try:
- (yield addProxy(principal, proxyType, proxyPrincipal))
- print("Added %s as a %s proxy for %s" % (
- prettyPrincipal(proxyPrincipal), proxyType,
- prettyPrincipal(principal)))
- except ProxyError, e:
- print("Error:", e)
- except ProxyWarning, e:
- print(e)
-@inlineCallbacks
-def addProxy(principal, proxyType, proxyPrincipal):
- proxyURL = proxyPrincipal.url()
- subPrincipal = proxySubprincipal(principal, proxyType)
- if subPrincipal is None:
- raise ProxyError("Unable to edit %s proxies for %s\n" % (proxyType,
- prettyPrincipal(principal)))
- membersProperty = (yield subPrincipal.readProperty(davxml.GroupMemberSet, None))
- for memberURL in membersProperty.children:
- if str(memberURL) == proxyURL:
- raise ProxyWarning("%s is already a %s proxy for %s" % (
- prettyPrincipal(proxyPrincipal), proxyType,
- prettyPrincipal(principal)))
-
- else:
- memberURLs = list(membersProperty.children)
- memberURLs.append(davxml.HRef(proxyURL))
- membersProperty = davxml.GroupMemberSet(*memberURLs)
- (yield subPrincipal.writeProperty(membersProperty, None))
-
- proxyTypes = ["read", "write"]
- proxyTypes.remove(proxyType)
-
- (yield action_removeProxyPrincipal(principal, proxyPrincipal, proxyTypes=proxyTypes))
-
- triggerGroupCacherUpdate(config)
-
-
@inlineCallbacks
-def setProxies(principal, readProxyPrincipals, writeProxyPrincipals, directory=None):
+def setProxies(store, principal, readProxyPrincipals, writeProxyPrincipals, directory=None):
"""
Set read/write proxies en masse for a principal
@param principal: DirectoryPrincipalResource
@@ -640,8 +503,9 @@
proxyURL = proxyPrincipal.url()
memberURLs.append(davxml.HRef(proxyURL))
membersProperty = davxml.GroupMemberSet(*memberURLs)
- (yield subPrincipal.writeProperty(membersProperty, None))
- triggerGroupCacherUpdate(config)
+ yield subPrincipal.writeProperty(membersProperty, None)
+ if store is not None:
+ yield scheduleNextGroupCachingUpdate(store, 0)
@inlineCallbacks
@@ -668,63 +532,20 @@
@inlineCallbacks
-def action_removeProxy(principal, *proxyIDs, **kwargs):
+def action_removeProxy(rootResource, directory, store, principal, *proxyIDs, **kwargs):
for proxyID in proxyIDs:
- proxyPrincipal = principalForPrincipalID(proxyID)
+ proxyPrincipal = principalForPrincipalID(proxyID, directory=directory)
if proxyPrincipal is None:
print("Invalid principal ID: %s" % (proxyID,))
else:
- (yield action_removeProxyPrincipal(principal, proxyPrincipal, **kwargs))
+ (yield action_removeProxyPrincipal(rootResource, directory, store,
+ principal, proxyPrincipal, **kwargs))
-@inlineCallbacks
-def action_removeProxyPrincipal(principal, proxyPrincipal, **kwargs):
- try:
- removed = (yield removeProxy(principal, proxyPrincipal, **kwargs))
- if removed:
- print("Removed %s as a proxy for %s" % (
- prettyPrincipal(proxyPrincipal),
- prettyPrincipal(principal)))
- except ProxyError, e:
- print("Error:", e)
- except ProxyWarning, e:
- print(e)
-@inlineCallbacks
-def removeProxy(principal, proxyPrincipal, **kwargs):
- removed = False
- proxyTypes = kwargs.get("proxyTypes", ("read", "write"))
- for proxyType in proxyTypes:
- proxyURL = proxyPrincipal.url()
- subPrincipal = proxySubprincipal(principal, proxyType)
- if subPrincipal is None:
- raise ProxyError("Unable to edit %s proxies for %s\n" % (proxyType,
- prettyPrincipal(principal)))
-
- membersProperty = (yield subPrincipal.readProperty(davxml.GroupMemberSet, None))
-
- memberURLs = [
- m for m in membersProperty.children
- if str(m) != proxyURL
- ]
-
- if len(memberURLs) == len(membersProperty.children):
- # No change
- continue
- else:
- removed = True
-
- membersProperty = davxml.GroupMemberSet(*memberURLs)
- (yield subPrincipal.writeProperty(membersProperty, None))
-
- if removed:
- triggerGroupCacherUpdate(config)
- returnValue(removed)
-
-
@inlineCallbacks
-def action_setAutoSchedule(principal, autoSchedule):
+def action_setAutoSchedule(rootResource, directory, store, principal, autoSchedule):
if principal.record.recordType == "groups":
print("Enabling auto-schedule for %s is not allowed." % (principal,))
@@ -737,7 +558,7 @@
prettyPrincipal(principal),
))
- (yield updateRecord(False, config.directory,
+ (yield updateRecord(False, directory,
principal.record.recordType,
guid=principal.record.guid,
shortNames=principal.record.shortNames,
@@ -746,7 +567,7 @@
**principal.record.extras
))
-def action_getAutoSchedule(principal):
+def action_getAutoSchedule(rootResource, directory, store, principal):
autoSchedule = principal.getAutoSchedule()
print("Auto-schedule for %s is %s" % (
prettyPrincipal(principal),
@@ -754,7 +575,7 @@
))
@inlineCallbacks
-def action_setAutoScheduleMode(principal, autoScheduleMode):
+def action_setAutoScheduleMode(rootResource, directory, store, principal, autoScheduleMode):
if principal.record.recordType == "groups":
print("Setting auto-schedule mode for %s is not allowed." % (principal,))
@@ -767,7 +588,7 @@
prettyPrincipal(principal),
))
- (yield updateRecord(False, config.directory,
+ (yield updateRecord(False, directory,
principal.record.recordType,
guid=principal.record.guid,
shortNames=principal.record.shortNames,
@@ -776,7 +597,7 @@
**principal.record.extras
))
-def action_getAutoScheduleMode(principal):
+def action_getAutoScheduleMode(rootResource, directory, store, principal):
autoScheduleMode = principal.getAutoScheduleMode()
if not autoScheduleMode:
autoScheduleMode = "automatic"
@@ -786,7 +607,7 @@
))
@inlineCallbacks
-def action_setAutoAcceptGroup(principal, autoAcceptGroup):
+def action_setAutoAcceptGroup(rootResource, directory, store, principal, autoAcceptGroup):
if principal.record.recordType == "groups":
print("Setting auto-accept-group for %s is not allowed." % (principal,))
@@ -794,7 +615,7 @@
print("Setting auto-accept-group for %s is not allowed." % (principal,))
else:
- groupPrincipal = principalForPrincipalID(autoAcceptGroup)
+ groupPrincipal = principalForPrincipalID(autoAcceptGroup, directory=directory)
if groupPrincipal is None or groupPrincipal.record.recordType != "groups":
print("Invalid principal ID: %s" % (autoAcceptGroup,))
else:
@@ -803,7 +624,7 @@
prettyPrincipal(principal),
))
- (yield updateRecord(False, config.directory,
+ (yield updateRecord(False, directory,
principal.record.recordType,
guid=principal.record.guid,
shortNames=principal.record.shortNames,
@@ -812,12 +633,12 @@
**principal.record.extras
))
-def action_getAutoAcceptGroup(principal):
+def action_getAutoAcceptGroup(rootResource, directory, store, principal):
autoAcceptGroup = principal.getAutoAcceptGroup()
if autoAcceptGroup:
- record = config.directory.recordWithGUID(autoAcceptGroup)
+ record = directory.recordWithGUID(autoAcceptGroup)
if record is not None:
- groupPrincipal = config.directory.principalCollection.principalForUID(record.uid)
+ groupPrincipal = directory.principalCollection.principalForUID(record.uid)
if groupPrincipal is not None:
print("Auto-accept-group for %s is %s" % (
prettyPrincipal(principal),
@@ -837,17 +658,7 @@
pass
sys.exit(status)
-class ProxyError(Exception):
- """
- Raised when proxy assignments cannot be performed
- """
-class ProxyWarning(Exception):
- """
- Raised for harmless proxy assignment failures such as trying to add a
- duplicate or remove a non-existent assignment.
- """
-
def parseCreationArgs(args):
"""
Look at the command line arguments for --add, and figure out which
@@ -900,10 +711,6 @@
for fullName, shortName, guid in results:
print(format % (fullName, shortName, guid))
-def prettyPrincipal(principal):
- record = principal.record
- return "\"%s\" (%s:%s)" % (record.fullName, record.recordType,
- record.shortNames[0])
@inlineCallbacks
@@ -984,28 +791,6 @@
returnValue(record)
-def triggerGroupCacherUpdate(config, killMethod=None):
- """
- Look up the pid of the group cacher sidecar and HUP it to trigger an update
- """
- if killMethod is None:
- killMethod = os.kill
- pidFilename = os.path.join(config.RunRoot, "groupcacher.pid")
- if os.path.exists(pidFilename):
- pidFile = open(pidFilename, "r")
- pid = pidFile.read().strip()
- pidFile.close()
- try:
- pid = int(pid)
- except ValueError:
- return
- try:
- killMethod(pid, signal.SIGHUP)
- except OSError:
- pass
-
-
-
if __name__ == "__main__":
main()
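[Editor's note on the diff above: `principalForPrincipalID` (moved in this revision from `tools/principals.py` into `tools/util.py`) accepts four identifier spellings. The following is a simplified, self-contained sketch of just the parsing logic visible in the diff; `parsePrincipalID` is a hypothetical stand-in that returns a classification tuple instead of looking up a principal resource.]

```python
from uuid import UUID

def parsePrincipalID(principalID):
    """Classify a principal identifier the way principalForPrincipalID
    does. Illustrative only: returns a (kind, value) tuple rather than
    resolving against a directory's principal collection."""
    # 1. Path form: /principals/__uids__/<uid>/
    if principalID.startswith("/"):
        segments = principalID.strip("/").split("/")
        if (len(segments) == 3 and
                segments[0] == "principals" and segments[1] == "__uids__"):
            return ("uid", segments[2])
        raise ValueError("Can't resolve all paths yet")

    # 2. Parenthesized form: (<recordType>)<shortName>
    if principalID.startswith("("):
        try:
            i = principalID.index(")")
            recordType, shortName = principalID[1:i], principalID[i + 1:]
            if recordType and shortName and "(" not in recordType:
                return ("shortName", (recordType, shortName))
        except ValueError:
            pass  # fall through to the remaining forms

    # 3. Colon form: <recordType>:<shortName>
    if ":" in principalID:
        recordType, shortName = principalID.split(":", 1)
        return ("shortName", (recordType, shortName))

    # 4. Bare GUID form
    try:
        UUID(principalID)
        return ("uid", principalID)
    except ValueError:
        raise ValueError("Invalid principal identifier: %s" % (principalID,))
```

Note that a malformed parenthesized form falls through to the colon and GUID checks, matching the `except ValueError: pass` in the real function.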
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/purge.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/purge.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/purge.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -18,12 +18,10 @@
from __future__ import print_function
from calendarserver.tap.util import FakeRequest
-from calendarserver.tap.util import getRootResource
from calendarserver.tools import tables
-from calendarserver.tools.cmdline import utilityMain
-from calendarserver.tools.principals import removeProxy
+from calendarserver.tools.cmdline import utilityMain, WorkerService
+from calendarserver.tools.util import removeProxy
-from errno import ENOENT, EACCES
from getopt import getopt, GetoptError
from pycalendar.datetime import PyCalendarDateTime
@@ -31,13 +29,10 @@
from twext.python.log import Logger
from twext.web2.responsecode import NO_CONTENT
-from twisted.application.service import Service
-from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks, returnValue
from twistedcaldav import caldavxml
from twistedcaldav.caldavxml import TimeRange
-from twistedcaldav.config import config, ConfigurationError
from twistedcaldav.datafilters.peruserdata import PerUserDataFilter
from twistedcaldav.directory.directory import DirectoryRecord
from twistedcaldav.method.put_common import StoreCalendarObjectResource
@@ -45,6 +40,7 @@
from txdav.xml import element as davxml
+
import collections
import os
import sys
@@ -54,49 +50,9 @@
DEFAULT_BATCH_SIZE = 100
DEFAULT_RETAIN_DAYS = 365
-class WorkerService(Service):
- def __init__(self, store):
- self._store = store
- def rootResource(self):
- try:
- rootResource = getRootResource(config, self._store)
- except OSError, e:
- if e.errno == ENOENT:
- # Trying to re-write resources.xml but its parent directory does
- # not exist. The server's never been started, so we're missing
- # state required to do any work. (Plus, what would be the point
- # of purging stuff from a server that's completely empty?)
- raise ConfigurationError(
- "It appears that the server has never been started.\n"
- "Please start it at least once before purging anything.")
- elif e.errno == EACCES:
- # Trying to re-write resources.xml but it is not writable by the
- # current user. This most likely means we're in a system
- # configuration and the user doesn't have sufficient privileges
- # to do the other things the tool might need to do either.
- raise ConfigurationError("You must run this tool as root.")
- else:
- raise
- return rootResource
-
-
- @inlineCallbacks
- def startService(self):
- try:
- yield self.doWork()
- except ConfigurationError, ce:
- sys.stderr.write("Error: %s\n" % (str(ce),))
- except Exception, e:
- sys.stderr.write("Error: %s\n" % (e,))
- raise
- finally:
- reactor.stop()
-
-
-
class PurgeOldEventsService(WorkerService):
cutoff = None
@@ -1163,9 +1119,8 @@
return cls.CANCELEVENT_NOT_MODIFIED
- @classmethod
@inlineCallbacks
- def _purgeProxyAssignments(cls, principal):
+ def _purgeProxyAssignments(self, principal):
assignments = []
@@ -1174,7 +1129,7 @@
proxyFor = (yield principal.proxyFor(proxyType == "write"))
for other in proxyFor:
assignments.append((principal.record.uid, proxyType, other.record.uid))
- (yield removeProxy(other, principal))
+ (yield removeProxy(self.root, self.directory, self._store, other, principal))
subPrincipal = principal.getChild("calendar-proxy-" + proxyType)
proxies = (yield subPrincipal.readProperty(davxml.GroupMemberSet, None))
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/test/principals/caldavd.plist
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/test/principals/caldavd.plist 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/test/principals/caldavd.plist 2013-04-16 22:19:46 UTC (rev 11052)
@@ -79,23 +79,23 @@
<!-- Data root -->
<key>DataRoot</key>
- <string>Data</string>
+ <string>%(DataRoot)s</string>
<!-- Document root -->
<key>DocumentRoot</key>
- <string>Documents</string>
+ <string>%(DocumentRoot)s</string>
<!-- Configuration root -->
<key>ConfigRoot</key>
- <string>/etc/caldavd</string>
+ <string>Config</string>
<!-- Log root -->
<key>LogRoot</key>
- <string>/var/log/caldavd</string>
+ <string>%(LogRoot)s</string>
<!-- Run root -->
<key>RunRoot</key>
- <string>/var/run</string>
+ <string>%(LogRoot)s</string>
<!-- Child aliases -->
<key>Aliases</key>
@@ -279,7 +279,7 @@
-->
<key>ProxyLoadFromFile</key>
- <string>conf/auth/proxies-test.xml</string>
+ <string></string>
<!--
Special principals
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/test/test_principals.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/test/test_principals.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/test/test_principals.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -15,7 +15,6 @@
##
import os
-import signal
import sys
from twext.python.filepath import CachingFilePath as FilePath
@@ -31,8 +30,7 @@
from calendarserver.tap.util import directoryFromConfig
from calendarserver.tools.principals import (parseCreationArgs, matchStrings,
- updateRecord, principalForPrincipalID, getProxies, setProxies,
- triggerGroupCacherUpdate)
+ updateRecord, principalForPrincipalID, getProxies, setProxies)
class ManagePrincipalsTestCase(TestCase):
@@ -53,6 +51,9 @@
newConfig = template % {
"ServerRoot" : os.path.abspath(config.ServerRoot),
+ "DataRoot" : os.path.abspath(config.DataRoot),
+ "DocumentRoot" : os.path.abspath(config.DocumentRoot),
+ "LogRoot" : os.path.abspath(config.LogRoot),
}
configFilePath = FilePath(os.path.join(config.ConfigRoot, "caldavd.plist"))
configFilePath.setContent(newConfig)
@@ -339,36 +340,13 @@
self.assertEquals(readProxies, []) # initially empty
self.assertEquals(writeProxies, []) # initially empty
- (yield setProxies(principal, ["users:user03", "users:user04"], ["users:user05"], directory=directory))
+ (yield setProxies(None, principal, ["users:user03", "users:user04"], ["users:user05"], directory=directory))
readProxies, writeProxies = (yield getProxies(principal, directory=directory))
self.assertEquals(set(readProxies), set(["user03", "user04"]))
self.assertEquals(set(writeProxies), set(["user05"]))
# Using None for a proxy list indicates a no-op
- (yield setProxies(principal, [], None, directory=directory))
+ (yield setProxies(None, principal, [], None, directory=directory))
readProxies, writeProxies = (yield getProxies(principal, directory=directory))
self.assertEquals(readProxies, []) # now empty
self.assertEquals(set(writeProxies), set(["user05"])) # unchanged
-
-
- def test_triggerGroupCacherUpdate(self):
- """
- Verify triggerGroupCacherUpdate can read a pidfile and send a SIGHUP
- """
-
- self.calledArgs = None
- def killMethod(pid, sig):
- self.calledArgs = (pid, sig)
-
- class StubConfig(object):
- def __init__(self, runRootPath):
- self.RunRoot = runRootPath
-
- runRootDir = FilePath(self.mktemp())
- runRootDir.createDirectory()
- pidFile = runRootDir.child("groupcacher.pid")
- pidFile.setContent("1234")
- testConfig = StubConfig(runRootDir.path)
- triggerGroupCacherUpdate(testConfig, killMethod=killMethod)
- self.assertEquals(self.calledArgs, (1234, signal.SIGHUP))
- runRootDir.remove()
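[Editor's note: the `triggerGroupCacherUpdate` helper and its test deleted above implemented a pidfile-plus-SIGHUP pattern, which this revision replaces with `scheduleNextGroupCachingUpdate` through the store. A minimal sketch of the removed pattern, with an injectable kill method as in the deleted test; the function name and return-value convention are illustrative, not CalendarServer API.]

```python
import os
import signal

def hupFromPidFile(pidFilename, killMethod=os.kill):
    """Read a pid from pidFilename and send it SIGHUP, as the removed
    triggerGroupCacherUpdate did. Returns True if a signal was sent."""
    if not os.path.exists(pidFilename):
        return False
    with open(pidFilename, "r") as pidFile:
        contents = pidFile.read().strip()
    try:
        pid = int(contents)
    except ValueError:
        return False  # pidfile contents unusable; original silently returned
    try:
        killMethod(pid, signal.SIGHUP)
    except OSError:
        return False  # process gone; original swallowed this too
    return True
```

The store-based replacement avoids this out-of-band signaling entirely by enqueuing the next group-caching update as work in the data store.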
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/util.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/util.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/tools/util.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -31,21 +31,27 @@
import socket
from pwd import getpwnam
from grp import getgrnam
+from uuid import UUID
+from twistedcaldav.config import config, ConfigurationError
+from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
+
+
from twisted.python.filepath import FilePath
from twisted.python.reflect import namedClass
from twext.python.log import Logger
+from twisted.internet.defer import inlineCallbacks, returnValue
+from txdav.xml import element as davxml
from calendarserver.provision.root import RootResource
from twistedcaldav import memcachepool
-from twistedcaldav.config import config, ConfigurationError
from twistedcaldav.directory import calendaruserproxy
from twistedcaldav.directory.aggregate import AggregateDirectoryService
from twistedcaldav.directory.directory import DirectoryService, DirectoryRecord
+from twistedcaldav.directory.directory import scheduleNextGroupCachingUpdate
from calendarserver.push.notifier import NotifierFactory
-from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from txdav.common.datastore.file import CommonDataStore
@@ -312,3 +318,176 @@
+def principalForPrincipalID(principalID, checkOnly=False, directory=None):
+
+ # Allow a directory parameter to be passed in, but default to config.directory
+ # But config.directory isn't set right away, so only use it when we're doing more
+ # than checking.
+ if not checkOnly and not directory:
+ directory = config.directory
+
+ if principalID.startswith("/"):
+ segments = principalID.strip("/").split("/")
+ if (len(segments) == 3 and
+ segments[0] == "principals" and segments[1] == "__uids__"):
+ uid = segments[2]
+ else:
+ raise ValueError("Can't resolve all paths yet")
+
+ if checkOnly:
+ return None
+
+ return directory.principalCollection.principalForUID(uid)
+
+
+ if principalID.startswith("("):
+ try:
+ i = principalID.index(")")
+
+ if checkOnly:
+ return None
+
+ recordType = principalID[1:i]
+ shortName = principalID[i+1:]
+
+ if not recordType or not shortName or "(" in recordType:
+ raise ValueError()
+
+ return directory.principalCollection.principalForShortName(recordType, shortName)
+
+ except ValueError:
+ pass
+
+ if ":" in principalID:
+ if checkOnly:
+ return None
+
+ recordType, shortName = principalID.split(":", 1)
+
+ return directory.principalCollection.principalForShortName(recordType, shortName)
+
+ try:
+ UUID(principalID)
+
+ if checkOnly:
+ return None
+
+ x = directory.principalCollection.principalForUID(principalID)
+ return x
+ except ValueError:
+ pass
+
+ raise ValueError("Invalid principal identifier: %s" % (principalID,))
+
+def proxySubprincipal(principal, proxyType):
+ return principal.getChild("calendar-proxy-" + proxyType)
+
+@inlineCallbacks
+def action_addProxyPrincipal(rootResource, directory, store, principal, proxyType, proxyPrincipal):
+ try:
+ (yield addProxy(rootResource, directory, store, principal, proxyType, proxyPrincipal))
+ print("Added %s as a %s proxy for %s" % (
+ prettyPrincipal(proxyPrincipal), proxyType,
+ prettyPrincipal(principal)))
+ except ProxyError, e:
+ print("Error:", e)
+ except ProxyWarning, e:
+ print(e)
+
+@inlineCallbacks
+def action_removeProxyPrincipal(rootResource, directory, store, principal, proxyPrincipal, **kwargs):
+ try:
+ removed = (yield removeProxy(rootResource, directory, store,
+ principal, proxyPrincipal, **kwargs))
+ if removed:
+ print("Removed %s as a proxy for %s" % (
+ prettyPrincipal(proxyPrincipal),
+ prettyPrincipal(principal)))
+ except ProxyError, e:
+ print("Error:", e)
+ except ProxyWarning, e:
+ print(e)
+
+@inlineCallbacks
+def addProxy(rootResource, directory, store, principal, proxyType, proxyPrincipal):
+ proxyURL = proxyPrincipal.url()
+
+ subPrincipal = proxySubprincipal(principal, proxyType)
+ if subPrincipal is None:
+ raise ProxyError("Unable to edit %s proxies for %s\n" % (proxyType,
+ prettyPrincipal(principal)))
+
+ membersProperty = (yield subPrincipal.readProperty(davxml.GroupMemberSet, None))
+
+ for memberURL in membersProperty.children:
+ if str(memberURL) == proxyURL:
+ raise ProxyWarning("%s is already a %s proxy for %s" % (
+ prettyPrincipal(proxyPrincipal), proxyType,
+ prettyPrincipal(principal)))
+
+ else:
+ memberURLs = list(membersProperty.children)
+ memberURLs.append(davxml.HRef(proxyURL))
+ membersProperty = davxml.GroupMemberSet(*memberURLs)
+ (yield subPrincipal.writeProperty(membersProperty, None))
+
+ proxyTypes = ["read", "write"]
+ proxyTypes.remove(proxyType)
+
+ (yield action_removeProxyPrincipal(rootResource, directory, store,
+ principal, proxyPrincipal, proxyTypes=proxyTypes))
+
+ yield scheduleNextGroupCachingUpdate(store, 0)
+
+@inlineCallbacks
+def removeProxy(rootResource, directory, store, principal, proxyPrincipal, **kwargs):
+ removed = False
+ proxyTypes = kwargs.get("proxyTypes", ("read", "write"))
+ for proxyType in proxyTypes:
+ proxyURL = proxyPrincipal.url()
+
+ subPrincipal = proxySubprincipal(principal, proxyType)
+ if subPrincipal is None:
+ raise ProxyError("Unable to edit %s proxies for %s\n" % (proxyType,
+ prettyPrincipal(principal)))
+
+ membersProperty = (yield subPrincipal.readProperty(davxml.GroupMemberSet, None))
+
+ memberURLs = [
+ m for m in membersProperty.children
+ if str(m) != proxyURL
+ ]
+
+ if len(memberURLs) == len(membersProperty.children):
+ # No change
+ continue
+ else:
+ removed = True
+
+ membersProperty = davxml.GroupMemberSet(*memberURLs)
+ (yield subPrincipal.writeProperty(membersProperty, None))
+
+ if removed:
+ yield scheduleNextGroupCachingUpdate(store, 0)
+ returnValue(removed)
+
+
+
+def prettyPrincipal(principal):
+ record = principal.record
+ return "\"%s\" (%s:%s)" % (record.fullName, record.recordType,
+ record.shortNames[0])
+
+class ProxyError(Exception):
+ """
+ Raised when proxy assignments cannot be performed
+ """
+
+class ProxyWarning(Exception):
+ """
+ Raised for harmless proxy assignment failures such as trying to add a
+ duplicate or remove a non-existent assignment.
+ """
+
+
+
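[Editor's note: the core of `removeProxy` as relocated above is a pure filtering step over the `GroupMemberSet` member URLs; the property is only written back, and the next group-caching update only scheduled, when the list actually shrank. A sketch of that step in isolation (`prunedMemberURLs` is a hypothetical helper name):]

```python
def prunedMemberURLs(memberURLs, proxyURL):
    """Drop proxyURL from a proxy group's member-URL list, mirroring the
    list comprehension inside removeProxy. Returns (newURLs, removed)."""
    newURLs = [m for m in memberURLs if m != proxyURL]
    # removed is True only if something was actually filtered out
    return newURLs, len(newURLs) != len(memberURLs)
```

In `removeProxy`, a `False` second element corresponds to the "No change / continue" branch, skipping both the `writeProperty` call and `scheduleNextGroupCachingUpdate`.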
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/webadmin/resource.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/webadmin/resource.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/webadmin/resource.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -28,7 +28,7 @@
import operator
import urlparse
-from calendarserver.tools.principals import (
+from calendarserver.tools.util import (
principalForPrincipalID, proxySubprincipal, action_addProxyPrincipal,
action_removeProxyPrincipal
)
@@ -569,9 +569,10 @@
Web administration HTTP resource.
"""
- def __init__(self, path, root, directory, principalCollections=()):
+ def __init__(self, path, root, directory, store, principalCollections=()):
self.root = root
self.directory = directory
+ self.store = store
super(WebAdminResource, self).__init__(path,
principalCollections=principalCollections)
@@ -642,16 +643,18 @@
# Update the proxies if specified.
for proxyId in removeProxies:
proxy = self.getResourceById(request, proxyId)
- (yield action_removeProxyPrincipal(principal, proxy,
- proxyTypes=["read", "write"]))
+ (yield action_removeProxyPrincipal(self.root, self.directory, self.store,
+ principal, proxy, proxyTypes=["read", "write"]))
for proxyId in makeReadProxies:
proxy = self.getResourceById(request, proxyId)
- (yield action_addProxyPrincipal(principal, "read", proxy))
+ (yield action_addProxyPrincipal(self.root, self.directory, self.store,
+ principal, "read", proxy))
for proxyId in makeWriteProxies:
proxy = self.getResourceById(request, proxyId)
- (yield action_addProxyPrincipal(principal, "write", proxy))
+ (yield action_addProxyPrincipal(self.root, self.directory, self.store,
+ principal, "write", proxy))
@inlineCallbacks
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/webadmin/test/test_resource.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/webadmin/test/test_resource.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/calendarserver/webadmin/test/test_resource.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -22,7 +22,7 @@
from functools import partial
-from twisted.trial.unittest import TestCase
+from twistedcaldav.test.util import TestCase
from twisted.web.microdom import parseString, getElementsByTagName
from twisted.web.domhelpers import gatherTextNodes
@@ -66,7 +66,7 @@
def setUp(self):
self.expectedSearches = {}
- self.resource = WebAdminResource(self.mktemp(), None, self)
+ self.resource = WebAdminResource(self.mktemp(), None, self, None)
@inlineCallbacks
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/conf/auth/augments-test.xml
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/conf/auth/augments-test.xml 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/conf/auth/augments-test.xml 2013-04-16 22:19:46 UTC (rev 11052)
@@ -1,21 +1,4 @@
<?xml version="1.0" encoding="utf-8"?>
-
-<!--
-Copyright (c) 2009-2013 Apple Inc. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
- -->
-
<!DOCTYPE augments SYSTEM "augments.dtd">
<augments>
@@ -111,4 +94,46 @@
<enable-addressbook>false</enable-addressbook>
<auto-schedule>false</auto-schedule>
</record>
+ <record>
+ <uid>03DFF660-8BCC-4198-8588-DD77F776F518</uid>
+ <enable>true</enable>
+ <enable-calendar>true</enable-calendar>
+ <enable-addressbook>true</enable-addressbook>
+ <enable-login>true</enable-login>
+ <auto-schedule>true</auto-schedule>
+ </record>
+ <record>
+ <uid>80689D41-DAF8-4189-909C-DB017B271892</uid>
+ <enable>true</enable>
+ <enable-calendar>true</enable-calendar>
+ <enable-addressbook>true</enable-addressbook>
+ <enable-login>true</enable-login>
+ <auto-schedule>true</auto-schedule>
+ </record>
+ <record>
+ <uid>C38BEE7A-36EE-478C-9DCB-CBF4612AFE65</uid>
+ <enable>true</enable>
+ <enable-calendar>true</enable-calendar>
+ <enable-addressbook>true</enable-addressbook>
+ <enable-login>true</enable-login>
+ <auto-schedule>true</auto-schedule>
+ <auto-schedule-mode>default</auto-schedule-mode>
+ <auto-accept-group>group01</auto-accept-group>
+ </record>
+ <record>
+ <uid>CCE95217-A57B-481A-AC3D-FEC9AB6CE3A9</uid>
+ <enable>true</enable>
+ <enable-calendar>true</enable-calendar>
+ <enable-addressbook>true</enable-addressbook>
+ <enable-login>true</enable-login>
+ <auto-schedule>true</auto-schedule>
+ </record>
+ <record>
+ <uid>0CE0BF31-5F9E-4801-A489-8C70CF287F5F</uid>
+ <enable>true</enable>
+ <enable-calendar>true</enable-calendar>
+ <enable-addressbook>true</enable-addressbook>
+ <enable-login>true</enable-login>
+ <auto-schedule>true</auto-schedule>
+ </record>
</augments>
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/conf/auth/resources-test.xml
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/conf/auth/resources-test.xml 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/conf/auth/resources-test.xml 2013-04-16 22:19:46 UTC (rev 11052)
@@ -1,94 +1,227 @@
-<?xml version="1.0" encoding="utf-8"?>
-
-<!--
-Copyright (c) 2006-2013 Apple Inc. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
- -->
-
-<!DOCTYPE accounts SYSTEM "accounts.dtd">
-
<accounts realm="Test Realm">
- <location repeat="10">
- <uid>location%02d</uid>
- <guid>location%02d</guid>
- <password>location%02d</password>
- <name>Room %02d</name>
+ <location>
+ <uid>jupiter</uid>
+ <guid>jupiter</guid>
+ <name>Jupiter Conference Room, Building 2, 1st Floor</name>
</location>
- <resource repeat="20">
- <uid>resource%02d</uid>
- <guid>resource%02d</guid>
- <password>resource%02d</password>
- <name>Resource %02d</name>
- </resource>
<location>
+ <uid>uranus</uid>
+ <guid>uranus</guid>
+ <name>Uranus Conference Room, Building 3, 1st Floor</name>
+ </location>
+ <location>
+ <uid>morgensroom</uid>
+ <guid>03DFF660-8BCC-4198-8588-DD77F776F518</guid>
+ <name>Morgen's Room</name>
+ </location>
+ <location>
<uid>mercury</uid>
<guid>mercury</guid>
- <password>test</password>
<name>Mercury Conference Room, Building 1, 2nd Floor</name>
</location>
<location>
- <uid>venus</uid>
- <guid>venus</guid>
- <password>test</password>
- <name>Venus Conference Room, Building 1, 2nd Floor</name>
+ <uid>location09</uid>
+ <guid>location09</guid>
+ <name>Room 09</name>
</location>
<location>
- <uid>Earth</uid>
- <guid>Earth</guid>
- <password>test</password>
- <name>Earth Conference Room, Building 1, 1st Floor</name>
+ <uid>location08</uid>
+ <guid>location08</guid>
+ <name>Room 08</name>
</location>
<location>
+ <uid>location07</uid>
+ <guid>location07</guid>
+ <name>Room 07</name>
+ </location>
+ <location>
+ <uid>location06</uid>
+ <guid>location06</guid>
+ <name>Room 06</name>
+ </location>
+ <location>
+ <uid>location05</uid>
+ <guid>location05</guid>
+ <name>Room 05</name>
+ </location>
+ <location>
+ <uid>location04</uid>
+ <guid>location04</guid>
+ <name>Room 04</name>
+ </location>
+ <location>
+ <uid>location03</uid>
+ <guid>location03</guid>
+ <name>Room 03</name>
+ </location>
+ <location>
+ <uid>location02</uid>
+ <guid>location02</guid>
+ <name>Room 02</name>
+ </location>
+ <location>
+ <uid>location01</uid>
+ <guid>location01</guid>
+ <name>Room 01</name>
+ </location>
+ <location>
+ <uid>delegatedroom</uid>
+ <guid>delegatedroom</guid>
+ <name>Delegated Conference Room</name>
+ </location>
+ <location>
<uid>mars</uid>
<guid>redplanet</guid>
- <password>test</password>
<name>Mars Conference Room, Building 1, 1st Floor</name>
</location>
<location>
- <uid>jupiter</uid>
- <guid>jupiter</guid>
- <password>test</password>
- <name>Jupiter Conference Room, Building 2, 1st Floor</name>
+ <uid>sharissroom</uid>
+ <guid>80689D41-DAF8-4189-909C-DB017B271892</guid>
+ <name>Shari's Room</name>
</location>
<location>
- <uid>neptune</uid>
- <guid>neptune</guid>
- <password>test</password>
- <name>Neptune Conference Room, Building 2, 1st Floor</name>
- </location>
- <location>
<uid>pluto</uid>
<guid>pluto</guid>
- <password>test</password>
<name>Pluto Conference Room, Building 2, 1st Floor</name>
</location>
<location>
<uid>saturn</uid>
<guid>saturn</guid>
- <password>test</password>
<name>Saturn Conference Room, Building 2, 1st Floor</name>
</location>
<location>
- <uid>uranus</uid>
- <guid>uranus</guid>
- <password>test</password>
- <name>Uranus Conference Room, Building 3, 1st Floor</name>
+ <uid>location10</uid>
+ <guid>location10</guid>
+ <name>Room 10</name>
</location>
<location>
- <uid>delegatedroom</uid>
- <guid>delegatedroom</guid>
- <password>delegatedroom</password>
- <name>Delegated Conference Room</name>
+ <uid>neptune</uid>
+ <guid>neptune</guid>
+ <name>Neptune Conference Room, Building 2, 1st Floor</name>
</location>
+ <location>
+ <uid>Earth</uid>
+ <guid>Earth</guid>
+ <name>Earth Conference Room, Building 1, 1st Floor</name>
+ </location>
+ <location>
+ <uid>venus</uid>
+ <guid>venus</guid>
+ <name>Venus Conference Room, Building 1, 2nd Floor</name>
+ </location>
+ <resource>
+ <uid>sharisotherresource</uid>
+ <guid>CCE95217-A57B-481A-AC3D-FEC9AB6CE3A9</guid>
+ <name>Shari's Other Resource</name>
+ </resource>
+ <resource>
+ <uid>resource15</uid>
+ <guid>resource15</guid>
+ <name>Resource 15</name>
+ </resource>
+ <resource>
+ <uid>resource14</uid>
+ <guid>resource14</guid>
+ <name>Resource 14</name>
+ </resource>
+ <resource>
+ <uid>resource17</uid>
+ <guid>resource17</guid>
+ <name>Resource 17</name>
+ </resource>
+ <resource>
+ <uid>resource16</uid>
+ <guid>resource16</guid>
+ <name>Resource 16</name>
+ </resource>
+ <resource>
+ <uid>resource11</uid>
+ <guid>resource11</guid>
+ <name>Resource 11</name>
+ </resource>
+ <resource>
+ <uid>resource10</uid>
+ <guid>resource10</guid>
+ <name>Resource 10</name>
+ </resource>
+ <resource>
+ <uid>resource13</uid>
+ <guid>resource13</guid>
+ <name>Resource 13</name>
+ </resource>
+ <resource>
+ <uid>resource12</uid>
+ <guid>resource12</guid>
+ <name>Resource 12</name>
+ </resource>
+ <resource>
+ <uid>resource19</uid>
+ <guid>resource19</guid>
+ <name>Resource 19</name>
+ </resource>
+ <resource>
+ <uid>resource18</uid>
+ <guid>resource18</guid>
+ <name>Resource 18</name>
+ </resource>
+ <resource>
+ <uid>sharisresource</uid>
+ <guid>C38BEE7A-36EE-478C-9DCB-CBF4612AFE65</guid>
+ <name>Shari's Resource</name>
+ </resource>
+ <resource>
+ <uid>resource20</uid>
+ <guid>resource20</guid>
+ <name>Resource 20</name>
+ </resource>
+ <resource>
+ <uid>resource06</uid>
+ <guid>resource06</guid>
+ <name>Resource 06</name>
+ </resource>
+ <resource>
+ <uid>resource07</uid>
+ <guid>resource07</guid>
+ <name>Resource 07</name>
+ </resource>
+ <resource>
+ <uid>resource04</uid>
+ <guid>resource04</guid>
+ <name>Resource 04</name>
+ </resource>
+ <resource>
+ <uid>resource05</uid>
+ <guid>resource05</guid>
+ <name>Resource 05</name>
+ </resource>
+ <resource>
+ <uid>resource02</uid>
+ <guid>resource02</guid>
+ <name>Resource 02</name>
+ </resource>
+ <resource>
+ <uid>resource03</uid>
+ <guid>resource03</guid>
+ <name>Resource 03</name>
+ </resource>
+ <resource>
+ <uid>resource01</uid>
+ <guid>resource01</guid>
+ <name>Resource 01</name>
+ </resource>
+ <resource>
+ <uid>sharisotherresource1</uid>
+ <guid>0CE0BF31-5F9E-4801-A489-8C70CF287F5F</guid>
+ <name>Shari's Other Resource1</name>
+ </resource>
+ <resource>
+ <uid>resource08</uid>
+ <guid>resource08</guid>
+ <name>Resource 08</name>
+ </resource>
+ <resource>
+ <uid>resource09</uid>
+ <guid>resource09</guid>
+ <name>Resource 09</name>
+ </resource>
</accounts>
Property changes on: CalendarServer/branches/users/glyph/sharedgroups-2/conf/auth/resources-test.xml
___________________________________________________________________
Added: svn:executable
+ *
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/support/build.sh
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/support/build.sh 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/support/build.sh 2013-04-16 22:19:46 UTC (rev 11052)
@@ -775,7 +775,7 @@
"${pypi}/p/python-ldap/${ld}.tar.gz";
# XXX actually PyCalendar should be imported in-place.
- py_dependency -fe -i "src" -r 10554 \
+ py_dependency -fe -i "src" -r 10988 \
"pycalendar" "pycalendar" "pycalendar" \
"${svn_uri_base}/PyCalendar/trunk";
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/twext/enterprise/queue.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/twext/enterprise/queue.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/twext/enterprise/queue.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -87,7 +87,7 @@
from twisted.application.service import MultiService
from twisted.internet.protocol import Factory
from twisted.internet.defer import (
- inlineCallbacks, returnValue, Deferred, passthru
+ inlineCallbacks, returnValue, Deferred, passthru, succeed
)
from twisted.internet.endpoints import TCP4ClientEndpoint
from twisted.protocols.amp import AMP, Command, Integer, Argument, String
@@ -865,6 +865,9 @@
+
+
+
class WorkerFactory(Factory, object):
"""
Factory, to be used as the client to connect from the worker to the
@@ -1446,4 +1449,41 @@
"""
Choose to perform the work locally.
"""
- return LocalPerformer(self.txnFactory)
\ No newline at end of file
+ return LocalPerformer(self.txnFactory)
+
+
+
+class NonPerformer(object):
+ """
+ Implementor of C{performWork} that doesn't actually perform any work. This
+ is used in the case where you want to be able to enqueue work for someone
+ else to do, but not take on any work yourself (such as a command line tool).
+ """
+ implements(_IWorkPerformer)
+
+ def performWork(self, table, workID):
+ """
+ Don't perform work.
+ """
+ return succeed(None)
+
+
+class NonPerformingQueuer(_BaseQueuer):
+ """
+ When work is enqueued with this queuer, it is never executed locally.
+ It's expected that the polling machinery will find the work and perform it.
+ """
+ implements(IQueuer)
+
+ def __init__(self, reactor=None):
+ super(NonPerformingQueuer, self).__init__()
+ if reactor is None:
+ from twisted.internet import reactor
+ self.reactor = reactor
+
+
+ def choosePerformer(self):
+ """
+ Choose a performer that performs no work.
+ """
+ return NonPerformer()
\ No newline at end of file
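[Editor's note: a standalone sketch (plain Python, no Twisted) of the null-object pattern behind the NonPerformer/NonPerformingQueuer classes added above: a command-line tool can enqueue work items while performWork() deliberately does nothing, leaving execution to the server's polling machinery. Class and method names mirror the diff; the recording list and return values are illustrative, not the real API.]

```python
class NonPerformer(object):
    """Performer that never does the work itself."""

    def performWork(self, table, workID):
        # Don't perform work; report immediate (empty) success.  The real
        # implementation returns succeed(None), an already-fired Deferred.
        return None


class NonPerformingQueuer(object):
    """Queuer whose enqueued work is never executed locally."""

    def __init__(self):
        self.enqueued = []  # illustrative stand-in for the work-item table

    def enqueueWork(self, table, workID):
        # Record the work for someone else (the polling loop) to find.
        self.enqueued.append((table, workID))
        return self.choosePerformer().performWork(table, workID)

    def choosePerformer(self):
        return NonPerformer()
```

In the real code performWork returns succeed(None) so callers can uniformly yield its result under @inlineCallbacks, which is exactly what the NonPerformingQueuerTests added below exercise.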
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/twext/enterprise/test/test_queue.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/twext/enterprise/test/test_queue.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/twext/enterprise/test/test_queue.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -54,7 +54,7 @@
from zope.interface.verify import verifyObject
from twisted.test.proto_helpers import StringTransport
-from twext.enterprise.queue import _BaseQueuer
+from twext.enterprise.queue import _BaseQueuer, NonPerformingQueuer
import twext.enterprise.queue
class Clock(_Clock):
@@ -654,3 +654,14 @@
queuer.enqueueWork(None, None)
self.assertNotEqual(self.proposal, None)
+
+class NonPerformingQueuerTests(TestCase):
+
+ @inlineCallbacks
+ def test_choosePerformer(self):
+ queuer = NonPerformingQueuer()
+ performer = queuer.choosePerformer()
+ result = (yield performer.performWork(None, None))
+ self.assertEquals(result, None)
+
+
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/directory/calendaruserproxy.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/directory/calendaruserproxy.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/directory/calendaruserproxy.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -594,7 +594,8 @@
@param principalUID: the UID of the principal to remove.
"""
-
+ # FIXME: This method doesn't appear to be used anywhere. Still needed?
+
if delay:
# We are going to remove the principal only after <delay> seconds
# has passed since we first chose to remove it, to protect against
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/directory/directory.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/directory/directory.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/directory/directory.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -49,7 +49,6 @@
from twext.python.log import Logger, LoggingMixIn
from twistedcaldav.config import config
-from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from twistedcaldav.directory.idirectory import IDirectoryService, IDirectoryRecord
from twistedcaldav.directory.util import uuidFromName, normalizeUUID
@@ -935,7 +934,7 @@
# Delete all other work items
yield Delete(From=self.table, Where=None).on(self.transaction)
- groupCacher = self.transaction._groupCacher
+ groupCacher = getattr(self.transaction, "_groupCacher", None)
if groupCacher is not None:
try:
yield groupCacher.updateCache()
@@ -947,6 +946,12 @@
log.debug("Scheduling next group cacher update: %s" % (notBefore,))
yield self.transaction.enqueue(GroupCacherPollingWork,
notBefore=notBefore)
+ else:
+ notBefore = (datetime.datetime.utcnow() +
+ datetime.timedelta(seconds=10))
+ log.debug("Rescheduling group cacher update: %s" % (notBefore,))
+ yield self.transaction.enqueue(GroupCacherPollingWork,
+ notBefore=notBefore)
@inlineCallbacks
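[Editor's note: the directory.py hunk above makes two related changes: `_groupCacher` is now read with `getattr(..., None)` so transactions lacking the attribute don't raise AttributeError, and the polling work is rescheduled in both branches so the cycle never stops. A minimal sketch of that control flow, with illustrative delays and a hypothetical `update()` hook standing in for `updateCache()`:]

```python
import datetime


def schedule_group_cache_update(txn, found_delay=10, retry_delay=10):
    """Compute the next GroupCacherPollingWork time whether or not the
    transaction carries a group cacher.  Delays are illustrative defaults,
    not the server's configured values."""
    # Defensive read: no AttributeError if the transaction has no cacher.
    cacher = getattr(txn, "_groupCacher", None)
    if cacher is not None:
        cacher.update()          # hypothetical stand-in for updateCache()
        delay = found_delay
    else:
        # No cacher on this transaction: reschedule anyway so that a later
        # transaction which does have one keeps the polling loop alive.
        delay = retry_delay
    return datetime.datetime.utcnow() + datetime.timedelta(seconds=delay)
```

The key point of the diff is the else branch: before this change, a transaction without a group cacher would silently drop the polling chain.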
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/scheduling/imip/test/test_inbound.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/scheduling/imip/test/test_inbound.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/scheduling/imip/test/test_inbound.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -15,6 +15,7 @@
##
+from twistedcaldav.test.util import TestCase
import email
from twisted.internet.defer import inlineCallbacks
from twisted.python.modules import getModule
@@ -24,7 +25,6 @@
from twistedcaldav.scheduling.imip.inbound import injectMessage
from twistedcaldav.scheduling.imip.inbound import IMIPReplyWork
from twistedcaldav.scheduling.itip import iTIPRequestStatus
-from twistedcaldav.test.util import TestCase
from twistedcaldav.test.util import xmlFile
from txdav.common.datastore.test.util import buildStore
from calendarserver.tap.util import getRootResource
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/test/util.py
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/test/util.py 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/twistedcaldav/test/util.py 2013-04-16 22:19:46 UTC (rev 11052)
@@ -19,7 +19,7 @@
import os
import xattr
-from calendarserver.provision.root import RootResource
+from twistedcaldav.stdconfig import config
from twisted.python.failure import Failure
from twisted.internet.base import DelayedCall
@@ -34,7 +34,6 @@
from twistedcaldav import memcacher
from twistedcaldav.bind import doBind
-from twistedcaldav.config import config
from twistedcaldav.directory import augment
from twistedcaldav.directory.addressbook import DirectoryAddressBookHomeProvisioningResource
from twistedcaldav.directory.calendar import (
@@ -48,6 +47,8 @@
from txdav.common.datastore.test.util import deriveQuota
from txdav.common.datastore.file import CommonDataStore
+from calendarserver.provision.root import RootResource
+
from twext.python.log import Logger
log = Logger()
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/current-oracle-dialect.sql
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/current-oracle-dialect.sql 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/current-oracle-dialect.sql 2013-04-16 22:19:46 UTC (rev 11052)
@@ -300,12 +300,17 @@
"PUSH_ID" nvarchar2(255)
);
+create table GROUP_CACHER_POLLING_WORK (
+ "WORK_ID" integer primary key not null,
+ "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
create table CALENDARSERVER (
"NAME" nvarchar2(255) primary key,
"VALUE" nvarchar2(255)
);
-insert into CALENDARSERVER (NAME, VALUE) values ('VERSION', '17');
+insert into CALENDARSERVER (NAME, VALUE) values ('VERSION', '18');
insert into CALENDARSERVER (NAME, VALUE) values ('CALENDAR-DATAVERSION', '3');
insert into CALENDARSERVER (NAME, VALUE) values ('ADDRESSBOOK-DATAVERSION', '1');
create index NOTIFICATION_NOTIFICA_f891f5f9 on NOTIFICATION (
Modified: CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/current.sql
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/current.sql 2013-04-16 22:19:02 UTC (rev 11051)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/current.sql 2013-04-16 22:19:46 UTC (rev 11052)
@@ -637,6 +637,6 @@
VALUE varchar(255)
);
-insert into CALENDARSERVER values ('VERSION', '17');
+insert into CALENDARSERVER values ('VERSION', '18');
insert into CALENDARSERVER values ('CALENDAR-DATAVERSION', '3');
insert into CALENDARSERVER values ('ADDRESSBOOK-DATAVERSION', '1');
Copied: CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/old/oracle-dialect/v17.sql (from rev 10990, CalendarServer/trunk/txdav/common/datastore/sql_schema/old/oracle-dialect/v17.sql)
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/old/oracle-dialect/v17.sql (rev 0)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/old/oracle-dialect/v17.sql 2013-04-16 22:19:46 UTC (rev 11052)
@@ -0,0 +1,399 @@
+create sequence RESOURCE_ID_SEQ;
+create sequence INSTANCE_ID_SEQ;
+create sequence ATTACHMENT_ID_SEQ;
+create sequence REVISION_SEQ;
+create sequence WORKITEM_SEQ;
+create table NODE_INFO (
+ "HOSTNAME" nvarchar2(255),
+ "PID" integer not null,
+ "PORT" integer not null,
+ "TIME" timestamp default CURRENT_TIMESTAMP at time zone 'UTC' not null,
+ primary key("HOSTNAME", "PORT")
+);
+
+create table NAMED_LOCK (
+ "LOCK_NAME" nvarchar2(255) primary key
+);
+
+create table CALENDAR_HOME (
+ "RESOURCE_ID" integer primary key,
+ "OWNER_UID" nvarchar2(255) unique,
+ "DATAVERSION" integer default 0 not null
+);
+
+create table CALENDAR_HOME_METADATA (
+ "RESOURCE_ID" integer primary key references CALENDAR_HOME on delete cascade,
+ "QUOTA_USED_BYTES" integer default 0 not null,
+ "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table CALENDAR (
+ "RESOURCE_ID" integer primary key
+);
+
+create table CALENDAR_METADATA (
+ "RESOURCE_ID" integer primary key references CALENDAR on delete cascade,
+ "SUPPORTED_COMPONENTS" nvarchar2(255) default null,
+ "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table NOTIFICATION_HOME (
+ "RESOURCE_ID" integer primary key,
+ "OWNER_UID" nvarchar2(255) unique
+);
+
+create table NOTIFICATION (
+ "RESOURCE_ID" integer primary key,
+ "NOTIFICATION_HOME_RESOURCE_ID" integer not null references NOTIFICATION_HOME,
+ "NOTIFICATION_UID" nvarchar2(255),
+ "XML_TYPE" nvarchar2(255),
+ "XML_DATA" nclob,
+ "MD5" nchar(32),
+ "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ unique("NOTIFICATION_UID", "NOTIFICATION_HOME_RESOURCE_ID")
+);
+
+create table CALENDAR_BIND (
+ "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+ "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+ "CALENDAR_RESOURCE_NAME" nvarchar2(255),
+ "BIND_MODE" integer not null,
+ "BIND_STATUS" integer not null,
+ "MESSAGE" nclob,
+ primary key("CALENDAR_HOME_RESOURCE_ID", "CALENDAR_RESOURCE_ID"),
+ unique("CALENDAR_HOME_RESOURCE_ID", "CALENDAR_RESOURCE_NAME")
+);
+
+create table CALENDAR_BIND_MODE (
+ "ID" integer primary key,
+ "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('own', 0);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('read', 1);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('write', 2);
+insert into CALENDAR_BIND_MODE (DESCRIPTION, ID) values ('direct', 3);
+create table CALENDAR_BIND_STATUS (
+ "ID" integer primary key,
+ "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('invited', 0);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('accepted', 1);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('declined', 2);
+insert into CALENDAR_BIND_STATUS (DESCRIPTION, ID) values ('invalid', 3);
+create table CALENDAR_OBJECT (
+ "RESOURCE_ID" integer primary key,
+ "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+ "RESOURCE_NAME" nvarchar2(255),
+ "ICALENDAR_TEXT" nclob,
+ "ICALENDAR_UID" nvarchar2(255),
+ "ICALENDAR_TYPE" nvarchar2(255),
+ "ATTACHMENTS_MODE" integer default 0 not null,
+ "DROPBOX_ID" nvarchar2(255),
+ "ORGANIZER" nvarchar2(255),
+ "RECURRANCE_MIN" date,
+ "RECURRANCE_MAX" date,
+ "ACCESS" integer default 0 not null,
+ "SCHEDULE_OBJECT" integer default 0,
+ "SCHEDULE_TAG" nvarchar2(36) default null,
+ "SCHEDULE_ETAGS" nclob default null,
+ "PRIVATE_COMMENTS" integer default 0 not null,
+ "MD5" nchar(32),
+ "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ unique("CALENDAR_RESOURCE_ID", "RESOURCE_NAME")
+);
+
+create table CALENDAR_OBJECT_ATTACHMENTS_MO (
+ "ID" integer primary key,
+ "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into CALENDAR_OBJECT_ATTACHMENTS_MO (DESCRIPTION, ID) values ('none', 0);
+insert into CALENDAR_OBJECT_ATTACHMENTS_MO (DESCRIPTION, ID) values ('read', 1);
+insert into CALENDAR_OBJECT_ATTACHMENTS_MO (DESCRIPTION, ID) values ('write', 2);
+create table CALENDAR_ACCESS_TYPE (
+ "ID" integer primary key,
+ "DESCRIPTION" nvarchar2(32) unique
+);
+
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('', 0);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('public', 1);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('private', 2);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('confidential', 3);
+insert into CALENDAR_ACCESS_TYPE (DESCRIPTION, ID) values ('restricted', 4);
+create table TIME_RANGE (
+ "INSTANCE_ID" integer primary key,
+ "CALENDAR_RESOURCE_ID" integer not null references CALENDAR on delete cascade,
+ "CALENDAR_OBJECT_RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade,
+ "FLOATING" integer not null,
+ "START_DATE" timestamp not null,
+ "END_DATE" timestamp not null,
+ "FBTYPE" integer not null,
+ "TRANSPARENT" integer not null
+);
+
+create table FREE_BUSY_TYPE (
+ "ID" integer primary key,
+ "DESCRIPTION" nvarchar2(16) unique
+);
+
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('unknown', 0);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('free', 1);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy', 2);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy-unavailable', 3);
+insert into FREE_BUSY_TYPE (DESCRIPTION, ID) values ('busy-tentative', 4);
+create table TRANSPARENCY (
+ "TIME_RANGE_INSTANCE_ID" integer not null references TIME_RANGE on delete cascade,
+ "USER_ID" nvarchar2(255),
+ "TRANSPARENT" integer not null
+);
+
+create table ATTACHMENT (
+ "ATTACHMENT_ID" integer primary key,
+ "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+ "DROPBOX_ID" nvarchar2(255),
+ "CONTENT_TYPE" nvarchar2(255),
+ "SIZE" integer not null,
+ "MD5" nchar(32),
+ "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ "PATH" nvarchar2(1024)
+);
+
+create table ATTACHMENT_CALENDAR_OBJECT (
+ "ATTACHMENT_ID" integer not null references ATTACHMENT on delete cascade,
+ "MANAGED_ID" nvarchar2(255),
+ "CALENDAR_OBJECT_RESOURCE_ID" integer not null references CALENDAR_OBJECT on delete cascade,
+ primary key("ATTACHMENT_ID", "CALENDAR_OBJECT_RESOURCE_ID"),
+ unique("MANAGED_ID", "CALENDAR_OBJECT_RESOURCE_ID")
+);
+
+create table RESOURCE_PROPERTY (
+ "RESOURCE_ID" integer not null,
+ "NAME" nvarchar2(255),
+ "VALUE" nclob,
+ "VIEWER_UID" nvarchar2(255),
+ primary key("RESOURCE_ID", "NAME", "VIEWER_UID")
+);
+
+create table ADDRESSBOOK_HOME (
+ "RESOURCE_ID" integer primary key,
+ "OWNER_UID" nvarchar2(255) unique,
+ "DATAVERSION" integer default 0 not null
+);
+
+create table ADDRESSBOOK_HOME_METADATA (
+ "RESOURCE_ID" integer primary key references ADDRESSBOOK_HOME on delete cascade,
+ "QUOTA_USED_BYTES" integer default 0 not null,
+ "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table ADDRESSBOOK (
+ "RESOURCE_ID" integer primary key
+);
+
+create table ADDRESSBOOK_METADATA (
+ "RESOURCE_ID" integer primary key references ADDRESSBOOK on delete cascade,
+ "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table ADDRESSBOOK_BIND (
+ "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+ "ADDRESSBOOK_RESOURCE_ID" integer not null references ADDRESSBOOK on delete cascade,
+ "ADDRESSBOOK_RESOURCE_NAME" nvarchar2(255),
+ "BIND_MODE" integer not null,
+ "BIND_STATUS" integer not null,
+ "MESSAGE" nclob,
+ primary key("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_ID"),
+ unique("ADDRESSBOOK_HOME_RESOURCE_ID", "ADDRESSBOOK_RESOURCE_NAME")
+);
+
+create table ADDRESSBOOK_OBJECT (
+ "RESOURCE_ID" integer primary key,
+ "ADDRESSBOOK_RESOURCE_ID" integer not null references ADDRESSBOOK on delete cascade,
+ "RESOURCE_NAME" nvarchar2(255),
+ "VCARD_TEXT" nclob,
+ "VCARD_UID" nvarchar2(255),
+ "MD5" nchar(32),
+ "CREATED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ "MODIFIED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ unique("ADDRESSBOOK_RESOURCE_ID", "RESOURCE_NAME"),
+ unique("ADDRESSBOOK_RESOURCE_ID", "VCARD_UID")
+);
+
+create table CALENDAR_OBJECT_REVISIONS (
+ "CALENDAR_HOME_RESOURCE_ID" integer not null references CALENDAR_HOME,
+ "CALENDAR_RESOURCE_ID" integer references CALENDAR,
+ "CALENDAR_NAME" nvarchar2(255) default null,
+ "RESOURCE_NAME" nvarchar2(255),
+ "REVISION" integer not null,
+ "DELETED" integer not null
+);
+
+create table ADDRESSBOOK_OBJECT_REVISIONS (
+ "ADDRESSBOOK_HOME_RESOURCE_ID" integer not null references ADDRESSBOOK_HOME,
+ "ADDRESSBOOK_RESOURCE_ID" integer references ADDRESSBOOK,
+ "ADDRESSBOOK_NAME" nvarchar2(255) default null,
+ "RESOURCE_NAME" nvarchar2(255),
+ "REVISION" integer not null,
+ "DELETED" integer not null
+);
+
+create table NOTIFICATION_OBJECT_REVISIONS (
+ "NOTIFICATION_HOME_RESOURCE_ID" integer not null references NOTIFICATION_HOME on delete cascade,
+ "RESOURCE_NAME" nvarchar2(255),
+ "REVISION" integer not null,
+ "DELETED" integer not null,
+ unique("NOTIFICATION_HOME_RESOURCE_ID", "RESOURCE_NAME")
+);
+
+create table APN_SUBSCRIPTIONS (
+ "TOKEN" nvarchar2(255),
+ "RESOURCE_KEY" nvarchar2(255),
+ "MODIFIED" integer not null,
+ "SUBSCRIBER_GUID" nvarchar2(255),
+ "USER_AGENT" nvarchar2(255) default null,
+ "IP_ADDR" nvarchar2(255) default null,
+ primary key("TOKEN", "RESOURCE_KEY")
+);
+
+create table IMIP_TOKENS (
+ "TOKEN" nvarchar2(255),
+ "ORGANIZER" nvarchar2(255),
+ "ATTENDEE" nvarchar2(255),
+ "ICALUID" nvarchar2(255),
+ "ACCESSED" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ primary key("ORGANIZER", "ATTENDEE", "ICALUID")
+);
+
+create table IMIP_INVITATION_WORK (
+ "WORK_ID" integer primary key not null,
+ "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ "FROM_ADDR" nvarchar2(255),
+ "TO_ADDR" nvarchar2(255),
+ "ICALENDAR_TEXT" nclob
+);
+
+create table IMIP_POLLING_WORK (
+ "WORK_ID" integer primary key not null,
+ "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+create table IMIP_REPLY_WORK (
+ "WORK_ID" integer primary key not null,
+ "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ "ORGANIZER" nvarchar2(255),
+ "ATTENDEE" nvarchar2(255),
+ "ICALENDAR_TEXT" nclob
+);
+
+create table PUSH_NOTIFICATION_WORK (
+ "WORK_ID" integer primary key not null,
+ "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC',
+ "PUSH_ID" nvarchar2(255)
+);
+
+create table CALENDARSERVER (
+ "NAME" nvarchar2(255) primary key,
+ "VALUE" nvarchar2(255)
+);
+
+insert into CALENDARSERVER (NAME, VALUE) values ('VERSION', '17');
+insert into CALENDARSERVER (NAME, VALUE) values ('CALENDAR-DATAVERSION', '3');
+insert into CALENDARSERVER (NAME, VALUE) values ('ADDRESSBOOK-DATAVERSION', '1');
+create index NOTIFICATION_NOTIFICA_f891f5f9 on NOTIFICATION (
+ NOTIFICATION_HOME_RESOURCE_ID
+);
+
+create index CALENDAR_BIND_RESOURC_e57964d4 on CALENDAR_BIND (
+ CALENDAR_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_CALEN_a9a453a9 on CALENDAR_OBJECT (
+ CALENDAR_RESOURCE_ID,
+ ICALENDAR_UID
+);
+
+create index CALENDAR_OBJECT_CALEN_96e83b73 on CALENDAR_OBJECT (
+ CALENDAR_RESOURCE_ID,
+ RECURRANCE_MAX
+);
+
+create index CALENDAR_OBJECT_ICALE_82e731d5 on CALENDAR_OBJECT (
+ ICALENDAR_UID
+);
+
+create index CALENDAR_OBJECT_DROPB_de041d80 on CALENDAR_OBJECT (
+ DROPBOX_ID
+);
+
+create index TIME_RANGE_CALENDAR_R_beb6e7eb on TIME_RANGE (
+ CALENDAR_RESOURCE_ID
+);
+
+create index TIME_RANGE_CALENDAR_O_acf37bd1 on TIME_RANGE (
+ CALENDAR_OBJECT_RESOURCE_ID
+);
+
+create index TRANSPARENCY_TIME_RAN_5f34467f on TRANSPARENCY (
+ TIME_RANGE_INSTANCE_ID
+);
+
+create index ATTACHMENT_CALENDAR_H_0078845c on ATTACHMENT (
+ CALENDAR_HOME_RESOURCE_ID
+);
+
+create index ADDRESSBOOK_BIND_RESO_205aa75c on ADDRESSBOOK_BIND (
+ ADDRESSBOOK_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_REVIS_3a3956c4 on CALENDAR_OBJECT_REVISIONS (
+ CALENDAR_HOME_RESOURCE_ID,
+ CALENDAR_RESOURCE_ID
+);
+
+create index CALENDAR_OBJECT_REVIS_2643d556 on CALENDAR_OBJECT_REVISIONS (
+ CALENDAR_RESOURCE_ID,
+ RESOURCE_NAME
+);
+
+create index CALENDAR_OBJECT_REVIS_265c8acf on CALENDAR_OBJECT_REVISIONS (
+ CALENDAR_RESOURCE_ID,
+ REVISION
+);
+
+create index ADDRESSBOOK_OBJECT_RE_f460d62d on ADDRESSBOOK_OBJECT_REVISIONS (
+ ADDRESSBOOK_HOME_RESOURCE_ID,
+ ADDRESSBOOK_RESOURCE_ID
+);
+
+create index ADDRESSBOOK_OBJECT_RE_9a848f39 on ADDRESSBOOK_OBJECT_REVISIONS (
+ ADDRESSBOOK_RESOURCE_ID,
+ RESOURCE_NAME
+);
+
+create index ADDRESSBOOK_OBJECT_RE_cb101e6b on ADDRESSBOOK_OBJECT_REVISIONS (
+ ADDRESSBOOK_RESOURCE_ID,
+ REVISION
+);
+
+create index NOTIFICATION_OBJECT_R_036a9cee on NOTIFICATION_OBJECT_REVISIONS (
+ NOTIFICATION_HOME_RESOURCE_ID,
+ REVISION
+);
+
+create index APN_SUBSCRIPTIONS_RES_9610d78e on APN_SUBSCRIPTIONS (
+ RESOURCE_KEY
+);
+
+create index IMIP_TOKENS_TOKEN_e94b918f on IMIP_TOKENS (
+ TOKEN
+);
+
Copied: CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/old/postgres-dialect/v17.sql (from rev 10990, CalendarServer/trunk/txdav/common/datastore/sql_schema/old/postgres-dialect/v17.sql)
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/old/postgres-dialect/v17.sql (rev 0)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/old/postgres-dialect/v17.sql 2013-04-16 22:19:46 UTC (rev 11052)
@@ -0,0 +1,572 @@
+-- -*- test-case-name: txdav.caldav.datastore.test.test_sql,txdav.carddav.datastore.test.test_sql -*-
+
+----
+-- Copyright (c) 2010-2013 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+-----------------
+-- Resource ID --
+-----------------
+
+create sequence RESOURCE_ID_SEQ;
+
+-------------------------
+-- Cluster Bookkeeping --
+-------------------------
+
+-- Information about a process connected to this database.
+
+-- Note that this must match the node info schema in twext.enterprise.queue.
+create table NODE_INFO (
+ HOSTNAME varchar(255) not null,
+ PID integer not null,
+ PORT integer not null,
+ TIME timestamp not null default timezone('UTC', CURRENT_TIMESTAMP),
+
+ primary key (HOSTNAME, PORT)
+);
+
+-- Unique named locks. This table should always be empty, but rows are
+-- temporarily created in order to prevent undesirable concurrency.
+create table NAMED_LOCK (
+ LOCK_NAME varchar(255) primary key
+);
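The comment above describes the locking idiom: a transaction takes a named lock by inserting a row, and the primary key on LOCK_NAME serializes competing transactions. A minimal sketch of that pattern, using Python's sqlite3 as a stand-in for PostgreSQL (in the real server a concurrent insert of the same name would block until the holding transaction finishes, rather than fail immediately):

```python
import sqlite3

# In-memory stand-in database with just the NAMED_LOCK table.
conn = sqlite3.connect(":memory:")
conn.execute("create table NAMED_LOCK (LOCK_NAME varchar(255) primary key)")

def with_named_lock(conn, name, fn):
    # Acquire: inserting the row conflicts on the primary key if another
    # transaction holds the same name, serializing the critical section.
    conn.execute("insert into NAMED_LOCK (LOCK_NAME) values (?)", (name,))
    try:
        return fn()
    finally:
        # The table "should always be empty": release by deleting the row.
        conn.execute("delete from NAMED_LOCK where LOCK_NAME = ?", (name,))
        conn.commit()

result = with_named_lock(conn, "example-lock", lambda: "work done")
```

The lock name ("example-lock") and helper are hypothetical; the only thing taken from the schema is the table itself and the insert/delete protocol its comment implies.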
+
+
+-------------------
+-- Calendar Home --
+-------------------
+
+create table CALENDAR_HOME (
+ RESOURCE_ID integer primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+ OWNER_UID varchar(255) not null unique, -- implicit index
+ DATAVERSION integer default 0 not null
+);
+
+----------------------------
+-- Calendar Home Metadata --
+----------------------------
+
+create table CALENDAR_HOME_METADATA (
+ RESOURCE_ID integer primary key references CALENDAR_HOME on delete cascade, -- implicit index
+ QUOTA_USED_BYTES integer default 0 not null,
+ CREATED timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+ MODIFIED timestamp default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+--------------
+-- Calendar --
+--------------
+
+create table CALENDAR (
+ RESOURCE_ID integer primary key default nextval('RESOURCE_ID_SEQ') -- implicit index
+);
+
+
+-----------------------
+-- Calendar Metadata --
+-----------------------
+
+create table CALENDAR_METADATA (
+ RESOURCE_ID integer primary key references CALENDAR on delete cascade, -- implicit index
+ SUPPORTED_COMPONENTS varchar(255) default null,
+ CREATED timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+ MODIFIED timestamp default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+---------------------------
+-- Sharing Notifications --
+---------------------------
+
+create table NOTIFICATION_HOME (
+ RESOURCE_ID integer primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+ OWNER_UID varchar(255) not null unique -- implicit index
+);
+
+create table NOTIFICATION (
+ RESOURCE_ID integer primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+ NOTIFICATION_HOME_RESOURCE_ID integer not null references NOTIFICATION_HOME,
+ NOTIFICATION_UID varchar(255) not null,
+ XML_TYPE varchar(255) not null,
+ XML_DATA text not null,
+ MD5 char(32) not null,
+ CREATED timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+ MODIFIED timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+
+ unique(NOTIFICATION_UID, NOTIFICATION_HOME_RESOURCE_ID) -- implicit index
+);
+
+create index NOTIFICATION_NOTIFICATION_HOME_RESOURCE_ID on
+ NOTIFICATION(NOTIFICATION_HOME_RESOURCE_ID);
+
+-------------------
+-- Calendar Bind --
+-------------------
+
+-- Joins CALENDAR_HOME and CALENDAR
+
+create table CALENDAR_BIND (
+ CALENDAR_HOME_RESOURCE_ID integer not null references CALENDAR_HOME,
+ CALENDAR_RESOURCE_ID integer not null references CALENDAR on delete cascade,
+ CALENDAR_RESOURCE_NAME varchar(255) not null,
+ BIND_MODE integer not null, -- enum CALENDAR_BIND_MODE
+ BIND_STATUS integer not null, -- enum CALENDAR_BIND_STATUS
+ MESSAGE text,
+
+ primary key(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_ID), -- implicit index
+ unique(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_NAME) -- implicit index
+);
+
+create index CALENDAR_BIND_RESOURCE_ID on CALENDAR_BIND(CALENDAR_RESOURCE_ID);
+
+-- Enumeration of calendar bind modes
+
+create table CALENDAR_BIND_MODE (
+ ID integer primary key,
+ DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_BIND_MODE values (0, 'own' );
+insert into CALENDAR_BIND_MODE values (1, 'read' );
+insert into CALENDAR_BIND_MODE values (2, 'write');
+insert into CALENDAR_BIND_MODE values (3, 'direct');
+
+-- Enumeration of statuses
+
+create table CALENDAR_BIND_STATUS (
+ ID integer primary key,
+ DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_BIND_STATUS values (0, 'invited' );
+insert into CALENDAR_BIND_STATUS values (1, 'accepted');
+insert into CALENDAR_BIND_STATUS values (2, 'declined');
+insert into CALENDAR_BIND_STATUS values (3, 'invalid');
+
+
+---------------------
+-- Calendar Object --
+---------------------
+
+create table CALENDAR_OBJECT (
+ RESOURCE_ID integer primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+ CALENDAR_RESOURCE_ID integer not null references CALENDAR on delete cascade,
+ RESOURCE_NAME varchar(255) not null,
+ ICALENDAR_TEXT text not null,
+ ICALENDAR_UID varchar(255) not null,
+ ICALENDAR_TYPE varchar(255) not null,
+ ATTACHMENTS_MODE integer default 0 not null, -- enum CALENDAR_OBJECT_ATTACHMENTS_MODE
+ DROPBOX_ID varchar(255),
+ ORGANIZER varchar(255),
+ RECURRANCE_MIN date, -- minimum date that recurrences have been expanded to.
+ RECURRANCE_MAX date, -- maximum date that recurrences have been expanded to.
+ ACCESS integer default 0 not null,
+ SCHEDULE_OBJECT boolean default false,
+ SCHEDULE_TAG varchar(36) default null,
+ SCHEDULE_ETAGS text default null,
+ PRIVATE_COMMENTS boolean default false not null,
+ MD5 char(32) not null,
+ CREATED timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+ MODIFIED timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+
+ unique (CALENDAR_RESOURCE_ID, RESOURCE_NAME) -- implicit index
+
+ -- since the 'inbox' is a 'calendar resource' for the purpose of storing
+ -- calendar objects, this constraint has to be selectively enforced by the
+ -- application layer.
+
+ -- unique(CALENDAR_RESOURCE_ID, ICALENDAR_UID)
+);
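The commented-out `unique(CALENDAR_RESOURCE_ID, ICALENDAR_UID)` constraint has to live in the application layer because the inbox is stored as an ordinary calendar yet may hold several objects with one iCalendar UID. A sketch of that conditional check, against a reduced sqlite3 stand-in for CALENDAR_OBJECT (the inbox-ID set and helper are hypothetical, for illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical subset of the CALENDAR_OBJECT columns above.
conn.execute(
    "create table CALENDAR_OBJECT ("
    " RESOURCE_ID integer primary key,"
    " CALENDAR_RESOURCE_ID integer not null,"
    " RESOURCE_NAME varchar(255) not null,"
    " ICALENDAR_UID varchar(255) not null)")

INBOX_IDS = {42}  # hypothetical: resource IDs of 'inbox' pseudo-calendars

def store_object(conn, calendar_id, name, uid):
    # Regular calendars must not hold two objects with the same UID,
    # but an inbox legitimately can, so the check is applied selectively.
    if calendar_id not in INBOX_IDS:
        dup = conn.execute(
            "select 1 from CALENDAR_OBJECT"
            " where CALENDAR_RESOURCE_ID = ? and ICALENDAR_UID = ?",
            (calendar_id, uid)).fetchone()
        if dup is not None:
            raise ValueError("duplicate iCalendar UID in calendar")
    conn.execute(
        "insert into CALENDAR_OBJECT"
        " (CALENDAR_RESOURCE_ID, RESOURCE_NAME, ICALENDAR_UID)"
        " values (?, ?, ?)", (calendar_id, name, uid))

store_object(conn, 1, "event.ics", "uid-1")
store_object(conn, 42, "inbox-1.ics", "uid-1")  # inbox: same UID allowed
store_object(conn, 42, "inbox-2.ics", "uid-1")
```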
+
+create index CALENDAR_OBJECT_CALENDAR_RESOURCE_ID_AND_ICALENDAR_UID on
+ CALENDAR_OBJECT(CALENDAR_RESOURCE_ID, ICALENDAR_UID);
+
+create index CALENDAR_OBJECT_CALENDAR_RESOURCE_ID_RECURRANCE_MAX on
+ CALENDAR_OBJECT(CALENDAR_RESOURCE_ID, RECURRANCE_MAX);
+
+create index CALENDAR_OBJECT_ICALENDAR_UID on
+ CALENDAR_OBJECT(ICALENDAR_UID);
+
+create index CALENDAR_OBJECT_DROPBOX_ID on
+ CALENDAR_OBJECT(DROPBOX_ID);
+
+-- Enumeration of attachment modes
+
+create table CALENDAR_OBJECT_ATTACHMENTS_MODE (
+ ID integer primary key,
+ DESCRIPTION varchar(16) not null unique
+);
+
+insert into CALENDAR_OBJECT_ATTACHMENTS_MODE values (0, 'none' );
+insert into CALENDAR_OBJECT_ATTACHMENTS_MODE values (1, 'read' );
+insert into CALENDAR_OBJECT_ATTACHMENTS_MODE values (2, 'write');
+
+
+-- Enumeration of calendar access types
+
+create table CALENDAR_ACCESS_TYPE (
+ ID integer primary key,
+ DESCRIPTION varchar(32) not null unique
+);
+
+insert into CALENDAR_ACCESS_TYPE values (0, '' );
+insert into CALENDAR_ACCESS_TYPE values (1, 'public' );
+insert into CALENDAR_ACCESS_TYPE values (2, 'private' );
+insert into CALENDAR_ACCESS_TYPE values (3, 'confidential' );
+insert into CALENDAR_ACCESS_TYPE values (4, 'restricted' );
+
+-----------------
+-- Instance ID --
+-----------------
+
+create sequence INSTANCE_ID_SEQ;
+
+
+----------------
+-- Time Range --
+----------------
+
+create table TIME_RANGE (
+ INSTANCE_ID integer primary key default nextval('INSTANCE_ID_SEQ'), -- implicit index
+ CALENDAR_RESOURCE_ID integer not null references CALENDAR on delete cascade,
+ CALENDAR_OBJECT_RESOURCE_ID integer not null references CALENDAR_OBJECT on delete cascade,
+ FLOATING boolean not null,
+ START_DATE timestamp not null,
+ END_DATE timestamp not null,
+ FBTYPE integer not null,
+ TRANSPARENT boolean not null
+);
+
+create index TIME_RANGE_CALENDAR_RESOURCE_ID on
+ TIME_RANGE(CALENDAR_RESOURCE_ID);
+create index TIME_RANGE_CALENDAR_OBJECT_RESOURCE_ID on
+ TIME_RANGE(CALENDAR_OBJECT_RESOURCE_ID);
+
+
+-- Enumeration of free/busy types
+
+create table FREE_BUSY_TYPE (
+ ID integer primary key,
+ DESCRIPTION varchar(16) not null unique
+);
+
+insert into FREE_BUSY_TYPE values (0, 'unknown' );
+insert into FREE_BUSY_TYPE values (1, 'free' );
+insert into FREE_BUSY_TYPE values (2, 'busy' );
+insert into FREE_BUSY_TYPE values (3, 'busy-unavailable');
+insert into FREE_BUSY_TYPE values (4, 'busy-tentative' );
+
+
+------------------
+-- Transparency --
+------------------
+
+create table TRANSPARENCY (
+ TIME_RANGE_INSTANCE_ID integer not null references TIME_RANGE on delete cascade,
+ USER_ID varchar(255) not null,
+ TRANSPARENT boolean not null
+);
+
+create index TRANSPARENCY_TIME_RANGE_INSTANCE_ID on
+ TRANSPARENCY(TIME_RANGE_INSTANCE_ID);
+
+
+----------------
+-- Attachment --
+----------------
+
+create sequence ATTACHMENT_ID_SEQ;
+
+create table ATTACHMENT (
+ ATTACHMENT_ID integer primary key default nextval('ATTACHMENT_ID_SEQ'), -- implicit index
+ CALENDAR_HOME_RESOURCE_ID integer not null references CALENDAR_HOME,
+ DROPBOX_ID varchar(255),
+ CONTENT_TYPE varchar(255) not null,
+ SIZE integer not null,
+ MD5 char(32) not null,
+ CREATED timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+ MODIFIED timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+ PATH varchar(1024) not null
+);
+
+create index ATTACHMENT_CALENDAR_HOME_RESOURCE_ID on
+ ATTACHMENT(CALENDAR_HOME_RESOURCE_ID);
+
+-- Many-to-many relationship between attachments and calendar objects
+create table ATTACHMENT_CALENDAR_OBJECT (
+ ATTACHMENT_ID integer not null references ATTACHMENT on delete cascade,
+ MANAGED_ID varchar(255) not null,
+ CALENDAR_OBJECT_RESOURCE_ID integer not null references CALENDAR_OBJECT on delete cascade,
+
+ primary key (ATTACHMENT_ID, CALENDAR_OBJECT_RESOURCE_ID), -- implicit index
+    unique (MANAGED_ID, CALENDAR_OBJECT_RESOURCE_ID) -- implicit index
+);
+
+
+-----------------------
+-- Resource Property --
+-----------------------
+
+create table RESOURCE_PROPERTY (
+ RESOURCE_ID integer not null, -- foreign key: *.RESOURCE_ID
+ NAME varchar(255) not null,
+ VALUE text not null, -- FIXME: xml?
+ VIEWER_UID varchar(255),
+
+ primary key (RESOURCE_ID, NAME, VIEWER_UID) -- implicit index
+);
+
+
+----------------------
+-- AddressBook Home --
+----------------------
+
+create table ADDRESSBOOK_HOME (
+ RESOURCE_ID integer primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+ OWNER_UID varchar(255) not null unique, -- implicit index
+ DATAVERSION integer default 0 not null
+);
+
+-------------------------------
+-- AddressBook Home Metadata --
+-------------------------------
+
+create table ADDRESSBOOK_HOME_METADATA (
+ RESOURCE_ID integer primary key references ADDRESSBOOK_HOME on delete cascade, -- implicit index
+ QUOTA_USED_BYTES integer default 0 not null,
+ CREATED timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+ MODIFIED timestamp default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+-----------------
+-- AddressBook --
+-----------------
+
+create table ADDRESSBOOK (
+ RESOURCE_ID integer primary key default nextval('RESOURCE_ID_SEQ') -- implicit index
+);
+
+
+--------------------------
+-- AddressBook Metadata --
+--------------------------
+
+create table ADDRESSBOOK_METADATA (
+ RESOURCE_ID integer primary key references ADDRESSBOOK on delete cascade, -- implicit index
+ CREATED timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+ MODIFIED timestamp default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+
+----------------------
+-- AddressBook Bind --
+----------------------
+
+-- Joins ADDRESSBOOK_HOME and ADDRESSBOOK
+
+create table ADDRESSBOOK_BIND (
+ ADDRESSBOOK_HOME_RESOURCE_ID integer not null references ADDRESSBOOK_HOME,
+ ADDRESSBOOK_RESOURCE_ID integer not null references ADDRESSBOOK on delete cascade,
+ ADDRESSBOOK_RESOURCE_NAME varchar(255) not null,
+ BIND_MODE integer not null, -- enum CALENDAR_BIND_MODE
+ BIND_STATUS integer not null, -- enum CALENDAR_BIND_STATUS
+ MESSAGE text, -- FIXME: xml?
+
+ primary key (ADDRESSBOOK_HOME_RESOURCE_ID, ADDRESSBOOK_RESOURCE_ID), -- implicit index
+ unique (ADDRESSBOOK_HOME_RESOURCE_ID, ADDRESSBOOK_RESOURCE_NAME) -- implicit index
+);
+
+create index ADDRESSBOOK_BIND_RESOURCE_ID on
+ ADDRESSBOOK_BIND(ADDRESSBOOK_RESOURCE_ID);
+
+create table ADDRESSBOOK_OBJECT (
+ RESOURCE_ID integer primary key default nextval('RESOURCE_ID_SEQ'), -- implicit index
+ ADDRESSBOOK_RESOURCE_ID integer not null references ADDRESSBOOK on delete cascade,
+ RESOURCE_NAME varchar(255) not null,
+ VCARD_TEXT text not null,
+ VCARD_UID varchar(255) not null,
+ MD5 char(32) not null,
+ CREATED timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+ MODIFIED timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+
+ unique (ADDRESSBOOK_RESOURCE_ID, RESOURCE_NAME), -- implicit index
+ unique (ADDRESSBOOK_RESOURCE_ID, VCARD_UID) -- implicit index
+);
+
+---------------
+-- Revisions --
+---------------
+
+create sequence REVISION_SEQ;
+
+
+---------------
+-- Revisions --
+---------------
+
+create table CALENDAR_OBJECT_REVISIONS (
+ CALENDAR_HOME_RESOURCE_ID integer not null references CALENDAR_HOME,
+ CALENDAR_RESOURCE_ID integer references CALENDAR,
+ CALENDAR_NAME varchar(255) default null,
+ RESOURCE_NAME varchar(255),
+ REVISION integer default nextval('REVISION_SEQ') not null,
+ DELETED boolean not null
+);
+
+create index CALENDAR_OBJECT_REVISIONS_HOME_RESOURCE_ID_CALENDAR_RESOURCE_ID
+ on CALENDAR_OBJECT_REVISIONS(CALENDAR_HOME_RESOURCE_ID, CALENDAR_RESOURCE_ID);
+
+create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_RESOURCE_NAME
+ on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, RESOURCE_NAME);
+
+create index CALENDAR_OBJECT_REVISIONS_RESOURCE_ID_REVISION
+ on CALENDAR_OBJECT_REVISIONS(CALENDAR_RESOURCE_ID, REVISION);
+
+----------------------------------
+-- AddressBook Object Revisions --
+----------------------------------
+
+create table ADDRESSBOOK_OBJECT_REVISIONS (
+ ADDRESSBOOK_HOME_RESOURCE_ID integer not null references ADDRESSBOOK_HOME,
+ ADDRESSBOOK_RESOURCE_ID integer references ADDRESSBOOK,
+ ADDRESSBOOK_NAME varchar(255) default null,
+ RESOURCE_NAME varchar(255),
+ REVISION integer default nextval('REVISION_SEQ') not null,
+ DELETED boolean not null
+);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_HOME_RESOURCE_ID_ADDRESSBOOK_RESOURCE_ID
+ on ADDRESSBOOK_OBJECT_REVISIONS(ADDRESSBOOK_HOME_RESOURCE_ID, ADDRESSBOOK_RESOURCE_ID);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_RESOURCE_ID_RESOURCE_NAME
+ on ADDRESSBOOK_OBJECT_REVISIONS(ADDRESSBOOK_RESOURCE_ID, RESOURCE_NAME);
+
+create index ADDRESSBOOK_OBJECT_REVISIONS_RESOURCE_ID_REVISION
+ on ADDRESSBOOK_OBJECT_REVISIONS(ADDRESSBOOK_RESOURCE_ID, REVISION);
+
+-----------------------------------
+-- Notification Object Revisions --
+-----------------------------------
+
+create table NOTIFICATION_OBJECT_REVISIONS (
+ NOTIFICATION_HOME_RESOURCE_ID integer not null references NOTIFICATION_HOME on delete cascade,
+ RESOURCE_NAME varchar(255),
+ REVISION integer default nextval('REVISION_SEQ') not null,
+ DELETED boolean not null,
+
+ unique(NOTIFICATION_HOME_RESOURCE_ID, RESOURCE_NAME) -- implicit index
+);
+
+create index NOTIFICATION_OBJECT_REVISIONS_RESOURCE_ID_REVISION
+ on NOTIFICATION_OBJECT_REVISIONS(NOTIFICATION_HOME_RESOURCE_ID, REVISION);
+
+-------------------------------------------
+-- Apple Push Notification Subscriptions --
+-------------------------------------------
+
+create table APN_SUBSCRIPTIONS (
+ TOKEN varchar(255) not null,
+ RESOURCE_KEY varchar(255) not null,
+ MODIFIED integer not null,
+ SUBSCRIBER_GUID varchar(255) not null,
+ USER_AGENT varchar(255) default null,
+ IP_ADDR varchar(255) default null,
+
+ primary key (TOKEN, RESOURCE_KEY) -- implicit index
+);
+
+create index APN_SUBSCRIPTIONS_RESOURCE_KEY
+ on APN_SUBSCRIPTIONS(RESOURCE_KEY);
+
+-----------------
+-- IMIP Tokens --
+-----------------
+
+create table IMIP_TOKENS (
+ TOKEN varchar(255) not null,
+ ORGANIZER varchar(255) not null,
+ ATTENDEE varchar(255) not null,
+ ICALUID varchar(255) not null,
+ ACCESSED timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+
+ primary key (ORGANIZER, ATTENDEE, ICALUID) -- implicit index
+);
+
+create index IMIP_TOKENS_TOKEN
+ on IMIP_TOKENS(TOKEN);
+
+----------------
+-- Work Items --
+----------------
+
+create sequence WORKITEM_SEQ;
+
+--------------------------
+-- IMIP Invitation Work --
+--------------------------
+
+create table IMIP_INVITATION_WORK (
+ WORK_ID integer primary key default nextval('WORKITEM_SEQ') not null,
+ NOT_BEFORE timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+ FROM_ADDR varchar(255) not null,
+ TO_ADDR varchar(255) not null,
+ ICALENDAR_TEXT text not null
+);
+
+-----------------------
+-- IMIP Polling Work --
+-----------------------
+
+create table IMIP_POLLING_WORK (
+ WORK_ID integer primary key default nextval('WORKITEM_SEQ') not null,
+ NOT_BEFORE timestamp default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+---------------------
+-- IMIP Reply Work --
+---------------------
+
+create table IMIP_REPLY_WORK (
+ WORK_ID integer primary key default nextval('WORKITEM_SEQ') not null,
+ NOT_BEFORE timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+ ORGANIZER varchar(255) not null,
+ ATTENDEE varchar(255) not null,
+ ICALENDAR_TEXT text not null
+);
+
+------------------------
+-- Push Notifications --
+------------------------
+
+create table PUSH_NOTIFICATION_WORK (
+ WORK_ID integer primary key default nextval('WORKITEM_SEQ') not null,
+ NOT_BEFORE timestamp default timezone('UTC', CURRENT_TIMESTAMP),
+ PUSH_ID varchar(255) not null
+);
+
+
+--------------------
+-- Schema Version --
+--------------------
+
+create table CALENDARSERVER (
+ NAME varchar(255) primary key, -- implicit index
+ VALUE varchar(255)
+);
+
+insert into CALENDARSERVER values ('VERSION', '17');
+insert into CALENDARSERVER values ('CALENDAR-DATAVERSION', '3');
+insert into CALENDARSERVER values ('ADDRESSBOOK-DATAVERSION', '1');
Copied: CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_17_to_18.sql (from rev 10990, CalendarServer/trunk/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_17_to_18.sql)
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_17_to_18.sql (rev 0)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/upgrades/oracle-dialect/upgrade_from_17_to_18.sql 2013-04-16 22:19:46 UTC (rev 11052)
@@ -0,0 +1,35 @@
+----
+-- Copyright (c) 2011-2013 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+---------------------------------------------------
+-- Upgrade database schema from VERSION 17 to 18 --
+---------------------------------------------------
+
+
+-----------------
+-- GroupCacher --
+-----------------
+
+
+
+create table GROUP_CACHER_POLLING_WORK (
+ "WORK_ID" integer primary key not null,
+ "NOT_BEFORE" timestamp default CURRENT_TIMESTAMP at time zone 'UTC'
+);
+
+
+-- Now update the version
+update CALENDARSERVER set VALUE = '18' where NAME = 'VERSION';
Copied: CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_17_to_18.sql (from rev 10990, CalendarServer/trunk/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_17_to_18.sql)
===================================================================
--- CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_17_to_18.sql (rev 0)
+++ CalendarServer/branches/users/glyph/sharedgroups-2/txdav/common/datastore/sql_schema/upgrades/postgres-dialect/upgrade_from_17_to_18.sql 2013-04-16 22:19:46 UTC (rev 11052)
@@ -0,0 +1,31 @@
+----
+-- Copyright (c) 2011-2013 Apple Inc. All rights reserved.
+--
+-- Licensed under the Apache License, Version 2.0 (the "License");
+-- you may not use this file except in compliance with the License.
+-- You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+----
+
+---------------------------------------------------
+-- Upgrade database schema from VERSION 17 to 18 --
+---------------------------------------------------
+
+-----------------
+-- GroupCacher --
+-----------------
+
+create table GROUP_CACHER_POLLING_WORK (
+ WORK_ID integer primary key default nextval('WORKITEM_SEQ') not null,
+ NOT_BEFORE timestamp default timezone('UTC', CURRENT_TIMESTAMP)
+);
+
+-- Now update the version
+update CALENDARSERVER set VALUE = '18' where NAME = 'VERSION';
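Both upgrade scripts end by bumping the VERSION row in CALENDARSERVER. A driver applying such a script would typically verify the current version before running it; a hypothetical sketch of that guard, using sqlite3 in place of the real PostgreSQL/Oracle databases (the function name and error handling are assumptions, not the server's actual upgrade machinery):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table CALENDARSERVER ("
             " NAME varchar(255) primary key, VALUE varchar(255))")
conn.execute("insert into CALENDARSERVER values ('VERSION', '17')")

def upgrade_17_to_18(conn):
    # Refuse to run against the wrong schema version, create the new
    # table, then record the bump -- mirroring the scripts' last statement.
    (version,) = conn.execute(
        "select VALUE from CALENDARSERVER where NAME = 'VERSION'").fetchone()
    if version != '17':
        raise RuntimeError("expected schema version 17, found " + version)
    conn.execute("create table GROUP_CACHER_POLLING_WORK ("
                 " WORK_ID integer primary key,"
                 " NOT_BEFORE timestamp)")
    conn.execute("update CALENDARSERVER set VALUE = '18' where NAME = 'VERSION'")

upgrade_17_to_18(conn)
```

Running the upgrade a second time fails the version check, which is the point of the guard: each script is applied exactly once per database.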