<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>[15693] CalendarServer/trunk</title>
</head>
<body>

<style type="text/css"><!--
#msg dl.meta { border: 1px #006 solid; background: #369; padding: 6px; color: #fff; }
#msg dl.meta dt { float: left; width: 6em; font-weight: bold; }
#msg dt:after { content:':';}
#msg dl, #msg dt, #msg ul, #msg li, #header, #footer, #logmsg { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt;  }
#msg dl a { font-weight: bold}
#msg dl a:link    { color:#fc3; }
#msg dl a:active  { color:#ff0; }
#msg dl a:visited { color:#cc6; }
h3 { font-family: verdana,arial,helvetica,sans-serif; font-size: 10pt; font-weight: bold; }
#msg pre { overflow: auto; background: #ffc; border: 1px #fa0 solid; padding: 6px; }
#logmsg { background: #ffc; border: 1px #fa0 solid; padding: 1em 1em 0 1em; }
#logmsg p, #logmsg pre, #logmsg blockquote { margin: 0 0 1em 0; }
#logmsg p, #logmsg li, #logmsg dt, #logmsg dd { line-height: 14pt; }
#logmsg h1, #logmsg h2, #logmsg h3, #logmsg h4, #logmsg h5, #logmsg h6 { margin: .5em 0; }
#logmsg h1:first-child, #logmsg h2:first-child, #logmsg h3:first-child, #logmsg h4:first-child, #logmsg h5:first-child, #logmsg h6:first-child { margin-top: 0; }
#logmsg ul, #logmsg ol { padding: 0; list-style-position: inside; margin: 0 0 0 1em; }
#logmsg ul { text-indent: -1em; padding-left: 1em; }
#logmsg ol { text-indent: -1.5em; padding-left: 1.5em; }
#logmsg > ul, #logmsg > ol { margin: 0 0 1em 0; }
#logmsg pre { background: #eee; padding: 1em; }
#logmsg blockquote { border: 1px solid #fa0; border-left-width: 10px; padding: 1em 1em 0 1em; background: white;}
#logmsg dl { margin: 0; }
#logmsg dt { font-weight: bold; }
#logmsg dd { margin: 0; padding: 0 0 0.5em 0; }
#logmsg dd:before { content:'\00bb';}
#logmsg table { border-spacing: 0px; border-collapse: collapse; border-top: 4px solid #fa0; border-bottom: 1px solid #fa0; background: #fff; }
#logmsg table th { text-align: left; font-weight: normal; padding: 0.2em 0.5em; border-top: 1px dotted #fa0; }
#logmsg table td { text-align: right; border-top: 1px dotted #fa0; padding: 0.2em 0.5em; }
#logmsg table thead th { text-align: center; border-bottom: 1px solid #fa0; }
#logmsg table th.Corner { text-align: left; }
#logmsg hr { border: none 0; border-top: 2px dashed #fa0; height: 1px; }
#header, #footer { color: #fff; background: #636; border: 1px #300 solid; padding: 6px; }
#patch { width: 100%; }
#patch h4 {font-family: verdana,arial,helvetica,sans-serif;font-size:10pt;padding:8px;background:#369;color:#fff;margin:0;}
#patch .propset h4, #patch .binary h4 {margin:0;}
#patch pre {padding:0;line-height:1.2em;margin:0;}
#patch .diff {width:100%;background:#eee;padding: 0 0 10px 0;overflow:auto;}
#patch .propset .diff, #patch .binary .diff  {padding:10px 0;}
#patch span {display:block;padding:0 10px;}
#patch .modfile, #patch .addfile, #patch .delfile, #patch .propset, #patch .binary, #patch .copfile {border:1px solid #ccc;margin:10px 0;}
#patch ins {background:#dfd;text-decoration:none;display:block;padding:0 10px;}
#patch del {background:#fdd;text-decoration:none;display:block;padding:0 10px;}
#patch .lines, .info {color:#888;background:#fff;}
--></style>
<div id="msg">
<dl class="meta">
<dt>Revision</dt> <dd><a href="http://trac.calendarserver.org//changeset/15693">15693</a></dd>
<dt>Author</dt> <dd>cdaboo@apple.com</dd>
<dt>Date</dt> <dd>2016-06-23 09:35:07 -0700 (Thu, 23 Jun 2016)</dd>
</dl>

<h3>Log Message</h3>
<pre>Allow sets of plots to be selected via a dashtime command line argument. Add other command line arguments to dashtime. Update the help text and documentation for the dashboard tools.</pre>
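The dashtime change replaces the hard-coded (mostly commented-out) plot selections with a mode table dispatched at runtime. A minimal standalone sketch of that pattern, assuming stubbed-out plotting (the class, method, and mode names mirror the patch; the measurement keys "cpu", "reqs", "respt" are illustrative shorthand, not the tool's exact keys):

```python
# Sketch of the mode-table dispatch added to dashtime.py: each mode name
# maps to a tuple of (method name, argument tuples), and Calculator.run()
# resolves the method by name with getattr(). Log parsing and plotting
# are stubbed out here.

class Calculator(object):
    def __init__(self):
        self.calls = []  # record of dispatched work, for demonstration

    def combinedHosts(self, valuekeys):
        # Real tool: aggregate the listed measurements across all hosts.
        self.calls.append(("combinedHosts", valuekeys))

    def perHost(self, perhostkeys, combinedkeys):
        # Real tool: per-host plots plus combined all-host plots.
        self.calls.append(("perHost", perhostkeys, combinedkeys))

    def run(self, mode, *args):
        # Dispatch exactly as in the patch: look the method up by name.
        getattr(self, mode)(*args)

# A cut-down version of the patch's selectMode table.
selectMode = {
    "basic": ("combinedHosts", ("cpu", "reqs", "respt")),
    "hostcpu": ("perHost", ("cpu",), ("reqs", "cpu")),
}

calc = Calculator()
entry = selectMode["hostcpu"]
calc.run(entry[0], *entry[1:])
```

This keeps each canned plot set as data rather than commented-out code, so adding a new set means adding one table entry instead of editing main().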

<h3>Modified Paths</h3>
<ul>
<li><a href="#CalendarServertrunkcalendarservertoolsdashcollectpy">CalendarServer/trunk/calendarserver/tools/dashcollect.py</a></li>
<li><a href="#CalendarServertrunkcalendarservertoolsdashtimepy">CalendarServer/trunk/calendarserver/tools/dashtime.py</a></li>
<li><a href="#CalendarServertrunkcalendarservertoolsdashviewpy">CalendarServer/trunk/calendarserver/tools/dashview.py</a></li>
</ul>

<h3>Added Paths</h3>
<ul>
<li><a href="#CalendarServertrunkdocAdminDashboardmd">CalendarServer/trunk/doc/Admin/Dashboard.md</a></li>
</ul>

</div>
<div id="patch">
<h3>Diff</h3>
<a id="CalendarServertrunkcalendarservertoolsdashcollectpy"></a>
<div class="modfile"><h4>Modified: CalendarServer/trunk/calendarserver/tools/dashcollect.py (15692 => 15693)</h4>
<pre class="diff"><span>
<span class="info">--- CalendarServer/trunk/calendarserver/tools/dashcollect.py        2016-06-23 14:03:48 UTC (rev 15692)
+++ CalendarServer/trunk/calendarserver/tools/dashcollect.py        2016-06-23 16:35:07 UTC (rev 15693)
</span><span class="lines">@@ -44,11 +44,12 @@
</span><span class="cx"> }
</span><span class="cx"> &quot;&quot;&quot;
</span><span class="cx"> 
</span><ins>+from argparse import HelpFormatter, SUPPRESS, OPTIONAL, ZERO_OR_MORE, \
+    ArgumentParser
</ins><span class="cx"> from collections import OrderedDict
</span><span class="cx"> from datetime import datetime, date
</span><span class="cx"> from threading import Thread
</span><span class="cx"> import SocketServer
</span><del>-import argparse
</del><span class="cx"> import errno
</span><span class="cx"> import json
</span><span class="cx"> import os
</span><span class="lines">@@ -65,6 +66,27 @@
</span><span class="cx"> 
</span><span class="cx"> 
</span><span class="cx"> 
</span><ins>+class MyHelpFormatter(HelpFormatter):
+    &quot;&quot;&quot;
+    Help message formatter which adds default values to argument help and
+    retains formatting of all help text.
+    &quot;&quot;&quot;
+
+    def _fill_text(self, text, width, indent):
+        return ''.join([indent + line for line in text.splitlines(True)])
+
+
+    def _get_help_string(self, action):
+        help = action.help
+        if '%(default)' not in action.help:
+            if action.default is not SUPPRESS:
+                defaulting_nargs = [OPTIONAL, ZERO_OR_MORE]
+                if action.option_strings or action.nargs in defaulting_nargs:
+                    help += ' (default: %(default)s)'
+        return help
+
+
+
</ins><span class="cx"> def main():
</span><span class="cx">     try:
</span><span class="cx">         # to produce a docstring target
</span><span class="lines">@@ -72,14 +94,16 @@
</span><span class="cx">     except NameError:
</span><span class="cx">         # unlikely but possible...
</span><span class="cx">         thisFile = sys.argv[0]
</span><del>-    parser = argparse.ArgumentParser(
</del><ins>+    parser = ArgumentParser(
+        formatter_class=MyHelpFormatter,
</ins><span class="cx">         description=&quot;Dashboard service for CalendarServer.&quot;,
</span><del>-        epilog=&quot;To view the docstring, run: pydoc {}&quot;.format(thisFile))
-    parser.add_argument(&quot;-f&quot;, help=&quot;Server config file (see docstring for details)&quot;)
-    parser.add_argument(&quot;-l&quot;, help=&quot;Log file directory&quot;)
-    parser.add_argument(&quot;-n&quot;, action=&quot;store_true&quot;, help=&quot;New log file&quot;)
-    parser.add_argument(&quot;-s&quot;, default=&quot;localhost:8200&quot;, help=&quot;Run the dash_thread service on the specified host:port&quot;)
-    parser.add_argument(&quot;-t&quot;, action=&quot;store_true&quot;, help=&quot;Rotate log files every hour [default: once per day]&quot;)
</del><ins>+        epilog=&quot;To view the docstring, run: pydoc {}&quot;.format(thisFile),
+    )
+    parser.add_argument(&quot;-f&quot;, default=SUPPRESS, required=True, help=&quot;Server config file (see docstring for details)&quot;)
+    parser.add_argument(&quot;-l&quot;, default=SUPPRESS, required=True, help=&quot;Log file directory&quot;)
+    parser.add_argument(&quot;-n&quot;, action=&quot;store_true&quot;, help=&quot;Create a new log file when starting, existing log file is deleted&quot;)
+    parser.add_argument(&quot;-s&quot;, default=&quot;localhost:8200&quot;, help=&quot;Make JSON data available on the specified host:port&quot;)
+    parser.add_argument(&quot;-t&quot;, action=&quot;store_true&quot;, help=&quot;Rotate log files every hour, otherwise once per day&quot;)
</ins><span class="cx">     parser.add_argument(&quot;-z&quot;, action=&quot;store_true&quot;, help=&quot;zlib compress json records in log files&quot;)
</span><span class="cx">     parser.add_argument(&quot;-v&quot;, action=&quot;store_true&quot;, help=&quot;Verbose&quot;)
</span><span class="cx">     args = parser.parse_args()
</span></span></pre></div>
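The MyHelpFormatter introduced above appends the default value to each option's help string and preserves the author's own line breaks in description/epilog text. A runnable sketch of the same subclass (the `prog` name and the `-s` option here echo the dashcollect arguments shown in the diff):

```python
# MyHelpFormatter as added in this commit: keep help-text formatting and
# append "(default: ...)" to options that have a real default.
from argparse import (
    ArgumentParser, HelpFormatter, SUPPRESS, OPTIONAL, ZERO_OR_MORE,
)

class MyHelpFormatter(HelpFormatter):
    """
    Help message formatter which adds default values to argument help and
    retains formatting of all help text.
    """

    def _fill_text(self, text, width, indent):
        # Keep the author's line breaks instead of re-wrapping the text.
        return ''.join(indent + line for line in text.splitlines(True))

    def _get_help_string(self, action):
        help = action.help
        if '%(default)' not in action.help:
            if action.default is not SUPPRESS:
                defaulting_nargs = [OPTIONAL, ZERO_OR_MORE]
                if action.option_strings or action.nargs in defaulting_nargs:
                    help += ' (default: %(default)s)'
        return help

parser = ArgumentParser(prog="dashcollect", formatter_class=MyHelpFormatter)
parser.add_argument("-s", default="localhost:8200",
                    help="Make JSON data available on the specified host:port")
text = parser.format_help()
# The generated help now includes the option's default value; options whose
# default is SUPPRESS (such as the required -f and -l above) are unchanged.
```

Note that options given `default=SUPPRESS` deliberately skip the "(default: ...)" suffix, which is why the patch sets SUPPRESS on the now-required -f and -l arguments.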
<a id="CalendarServertrunkcalendarservertoolsdashtimepy"></a>
<div class="modfile"><h4>Modified: CalendarServer/trunk/calendarserver/tools/dashtime.py (15692 => 15693)</h4>
<pre class="diff"><span>
<span class="info">--- CalendarServer/trunk/calendarserver/tools/dashtime.py        2016-06-23 14:03:48 UTC (rev 15692)
+++ CalendarServer/trunk/calendarserver/tools/dashtime.py        2016-06-23 16:35:07 UTC (rev 15693)
</span><span class="lines">@@ -18,10 +18,11 @@
</span><span class="cx"> Tool that extracts time series data from a dashcollect log.
</span><span class="cx"> &quot;&quot;&quot;
</span><span class="cx"> 
</span><ins>+from argparse import SUPPRESS, OPTIONAL, ZERO_OR_MORE, HelpFormatter, \
+    ArgumentParser
</ins><span class="cx"> from bz2 import BZ2File
</span><span class="cx"> from collections import OrderedDict, defaultdict
</span><span class="cx"> from zlib import decompress
</span><del>-import argparse
</del><span class="cx"> import json
</span><span class="cx"> import matplotlib.pyplot as plt
</span><span class="cx"> import operator
</span><span class="lines">@@ -40,6 +41,27 @@
</span><span class="cx"> 
</span><span class="cx"> 
</span><span class="cx"> 
</span><ins>+class MyHelpFormatter(HelpFormatter):
+    &quot;&quot;&quot;
+    Help message formatter which adds default values to argument help and
+    retains formatting of all help text.
+    &quot;&quot;&quot;
+
+    def _fill_text(self, text, width, indent):
+        return ''.join([indent + line for line in text.splitlines(True)])
+
+
+    def _get_help_string(self, action):
+        help = action.help
+        if '%(default)' not in action.help:
+            if action.default is not SUPPRESS:
+                defaulting_nargs = [OPTIONAL, ZERO_OR_MORE]
+                if action.option_strings or action.nargs in defaulting_nargs:
+                    help += ' (default: %(default)s)'
+        return help
+
+
+
</ins><span class="cx"> class DataType(object):
</span><span class="cx">     &quot;&quot;&quot;
</span><span class="cx">     Base class for objects that can process the different types of data in a
</span><span class="lines">@@ -306,7 +328,9 @@
</span><span class="cx">         result = 0
</span><span class="cx">         for onehost in hosts:
</span><span class="cx">             completed = sum(map(operator.itemgetter(2), stats[onehost][&quot;job_assignments&quot;][&quot;workers&quot;]))
</span><del>-            result += completed - JobsCompletedDataType.lastCompleted[onehost] if JobsCompletedDataType.lastCompleted[onehost] else 0
</del><ins>+            delta = completed - JobsCompletedDataType.lastCompleted[onehost] if JobsCompletedDataType.lastCompleted[onehost] else 0
+            if delta &gt;= 0:
+                result += delta
</ins><span class="cx">             JobsCompletedDataType.lastCompleted[onehost] = completed
</span><span class="cx">         return result
</span><span class="cx"> 
</span><span class="lines">@@ -399,60 +423,63 @@
</span><span class="cx"> 
</span><span class="cx"> 
</span><span class="cx"> 
</span><del>-def main():
-    parser = argparse.ArgumentParser(
-        description=&quot;Dashboard time series processor.&quot;,
-        epilog=&quot;cpu - CPU use\nreqs - requests per second\nrespt - average response time&quot;,
-    )
-    parser.add_argument(&quot;-l&quot;, help=&quot;Log file to process&quot;)
-    parser.add_argument(&quot;-p&quot;, help=&quot;Name of pod to analyze&quot;)
-    parser.add_argument(&quot;-s&quot;, help=&quot;Name of server to analyze&quot;)
-    parser.add_argument(&quot;-v&quot;, action=&quot;store_true&quot;, help=&quot;Verbose&quot;)
-    args = parser.parse_args()
-    if args.v:
-        global verbose
-        verbose = True
</del><ins>+class Calculator(object):
</ins><span class="cx"> 
</span><del>-    # Get the log file
-    try:
-        if args.l.endswith(&quot;.bz2&quot;):
-            logfile = BZ2File(os.path.expanduser(args.l))
-        else:
-            logfile = open(os.path.expanduser(args.l))
-    except:
-        print(&quot;Failed to open logfile {}&quot;.format(args.l))
</del><ins>+    def __init__(self, args):
+        if args.v:
+            global verbose
+            verbose = True
</ins><span class="cx"> 
</span><del>-    # Start/end lines in log file to process
-    line_start = 0
-    line_count = 10000
</del><ins>+        # Get the log file
+        self.logname = args.l
+        try:
+            if args.l.endswith(&quot;.bz2&quot;):
+                self.logfile = BZ2File(os.path.expanduser(args.l))
+            else:
+                self.logfile = open(os.path.expanduser(args.l))
+        except:
+            print(&quot;Failed to open logfile {}&quot;.format(args.l))
</ins><span class="cx"> 
</span><del>-    # Plot arrays that will be generated
-    x = []
-    y = OrderedDict()
-    titles = {}
-    ymaxes = {}
</del><ins>+        self.pod = getattr(args, &quot;p&quot;, None)
+        self.single_server = getattr(args, &quot;s&quot;, None)
</ins><span class="cx"> 
</span><del>-    def singleHost(valuekeys):
</del><ins>+        self.save = args.save
+        self.noshow = args.noshow
+
+        self.mode = args.mode
+
+        # Start/end lines in log file to process
+        self.line_start = args.start
+        self.line_count = args.count
+
+        # Plot arrays that will be generated
+        self.x = []
+        self.y = OrderedDict()
+        self.titles = {}
+        self.ymaxes = {}
+
+
+    def singleHost(self, valuekeys):
</ins><span class="cx">         &quot;&quot;&quot;
</span><span class="cx">         Generate data for a single host only.
</span><span class="cx"> 
</span><span class="cx">         @param valuekeys: L{DataType} keys to process
</span><span class="cx">         @type valuekeys: L{list} or L{str}
</span><span class="cx">         &quot;&quot;&quot;
</span><del>-        _plotHosts(valuekeys, (args.s,))
</del><ins>+        self._plotHosts(valuekeys, (self.single_server,))
</ins><span class="cx"> 
</span><span class="cx"> 
</span><del>-    def combinedHosts(valuekeys):
</del><ins>+    def combinedHosts(self, valuekeys):
</ins><span class="cx">         &quot;&quot;&quot;
</span><span class="cx">         Generate data for all hosts.
</span><span class="cx"> 
</span><span class="cx">         @param valuekeys: L{DataType} keys to process
</span><span class="cx">         @type valuekeys: L{list} or L{str}
</span><span class="cx">         &quot;&quot;&quot;
</span><del>-        _plotHosts(valuekeys, None)
</del><ins>+        self._plotHosts(valuekeys, None)
</ins><span class="cx"> 
</span><span class="cx"> 
</span><del>-    def _plotHosts(valuekeys, hosts):
</del><ins>+    def _plotHosts(self, valuekeys, hosts):
</ins><span class="cx">         &quot;&quot;&quot;
</span><span class="cx">         Generate data for the specified list of hosts.
</span><span class="cx"> 
</span><span class="lines">@@ -463,13 +490,13 @@
</span><span class="cx">         &quot;&quot;&quot;
</span><span class="cx"> 
</span><span class="cx">         # For each log file line, process the data for each required measurement
</span><del>-        with logfile:
-            line = logfile.readline()
</del><ins>+        with self.logfile:
+            line = self.logfile.readline()
</ins><span class="cx">             ctr = 0
</span><span class="cx">             while line:
</span><del>-                if ctr &lt; line_start:
</del><ins>+                if ctr &lt; self.line_start:
</ins><span class="cx">                     ctr += 1
</span><del>-                    line = logfile.readline()
</del><ins>+                    line = self.logfile.readline()
</ins><span class="cx">                     continue
</span><span class="cx"> 
</span><span class="cx">                 if line[0] == &quot;\x1e&quot;:
</span><span class="lines">@@ -478,38 +505,40 @@
</span><span class="cx">                     line = decompress(line.decode(&quot;base64&quot;))
</span><span class="cx">                 jline = json.loads(line)
</span><span class="cx"> 
</span><del>-                x.append(ctr)
</del><ins>+                self.x.append(ctr)
</ins><span class="cx">                 ctr += 1
</span><span class="cx"> 
</span><span class="cx">                 # Initialize the plot arrays when we know how many hosts there are
</span><del>-                if len(y) == 0:
</del><ins>+                if len(self.y) == 0:
+                    if self.pod is None:
+                        self.pod = sorted(jline[&quot;pods&quot;].keys())[0]
</ins><span class="cx">                     if hosts is None:
</span><del>-                        hosts = sorted(jline[&quot;pods&quot;][args.p].keys())
</del><ins>+                        hosts = sorted(jline[&quot;pods&quot;][self.pod].keys())
</ins><span class="cx">                     for measurement in valuekeys:
</span><del>-                        y[measurement] = []
-                        titles[measurement] = DataType.getTitle(measurement)
-                        ymaxes[measurement] = DataType.getMaxY(measurement, len(hosts))
</del><ins>+                        self.y[measurement] = []
+                        self.titles[measurement] = DataType.getTitle(measurement)
+                        self.ymaxes[measurement] = DataType.getMaxY(measurement, len(hosts))
</ins><span class="cx"> 
</span><span class="cx"> 
</span><span class="cx">                 for measurement in valuekeys:
</span><del>-                    stats = jline[&quot;pods&quot;][args.p]
</del><ins>+                    stats = jline[&quot;pods&quot;][self.pod]
</ins><span class="cx">                     try:
</span><del>-                        y[measurement].append(DataType.process(measurement, stats, hosts))
</del><ins>+                        self.y[measurement].append(DataType.process(measurement, stats, hosts))
</ins><span class="cx">                     except KeyError:
</span><del>-                        y[measurement].append(None)
</del><ins>+                        self.y[measurement].append(None)
</ins><span class="cx"> 
</span><del>-                line = logfile.readline()
-                if ctr &gt; line_start + line_count:
</del><ins>+                line = self.logfile.readline()
+                if self.line_count != -1 and ctr &gt; self.line_start + self.line_count:
</ins><span class="cx">                     break
</span><span class="cx"> 
</span><span class="cx">         # Offset data that is averaged over the previous minute
</span><span class="cx">         for measurement in valuekeys:
</span><span class="cx">             if DataType.skip(measurement):
</span><del>-                y[measurement] = y[measurement][60:]
-                y[measurement].extend([None] * 60)
</del><ins>+                self.y[measurement] = self.y[measurement][60:]
+                self.y[measurement].extend([None] * 60)
</ins><span class="cx"> 
</span><span class="cx"> 
</span><del>-    def perHost(perhostkeys, combinedkeys):
</del><ins>+    def perHost(self, perhostkeys, combinedkeys):
</ins><span class="cx">         &quot;&quot;&quot;
</span><span class="cx">         Generate a set of per-host plots, together with a set of plots
</span><span class="cx">         for all-host data.
</span><span class="lines">@@ -521,13 +550,13 @@
</span><span class="cx">         &quot;&quot;&quot;
</span><span class="cx"> 
</span><span class="cx">         # For each log file line, process the data for each required measurement
</span><del>-        with logfile:
-            line = logfile.readline()
</del><ins>+        with self.logfile:
+            line = self.logfile.readline()
</ins><span class="cx">             ctr = 0
</span><span class="cx">             while line:
</span><del>-                if ctr &lt; line_start:
</del><ins>+                if ctr &lt; self.line_start:
</ins><span class="cx">                     ctr += 1
</span><del>-                    line = logfile.readline()
</del><ins>+                    line = self.logfile.readline()
</ins><span class="cx">                     continue
</span><span class="cx"> 
</span><span class="cx">                 if line[0] == &quot;\x1e&quot;:
</span><span class="lines">@@ -536,38 +565,40 @@
</span><span class="cx">                     line = decompress(line.decode(&quot;base64&quot;))
</span><span class="cx">                 jline = json.loads(line)
</span><span class="cx"> 
</span><del>-                x.append(ctr)
</del><ins>+                self.x.append(ctr)
</ins><span class="cx">                 ctr += 1
</span><span class="cx"> 
</span><span class="cx">                 # Initialize the plot arrays when we know how many hosts there are
</span><del>-                if len(y) == 0:
-                    hosts = sorted(jline[&quot;pods&quot;][args.p].keys())
</del><ins>+                if len(self.y) == 0:
+                    if self.pod is None:
+                        self.pod = sorted(jline[&quot;pods&quot;].keys())[0]
+                    hosts = sorted(jline[&quot;pods&quot;][self.pod].keys())
</ins><span class="cx"> 
</span><span class="cx">                     for host in hosts:
</span><span class="cx">                         for measurement in perhostkeys:
</span><span class="cx">                             ykey = &quot;{}={}&quot;.format(measurement, host)
</span><del>-                            y[ykey] = []
-                            titles[ykey] = DataType.getTitle(measurement)
-                            ymaxes[ykey] = DataType.getMaxY(measurement, 1)
</del><ins>+                            self.y[ykey] = []
+                            self.titles[ykey] = DataType.getTitle(measurement)
+                            self.ymaxes[ykey] = DataType.getMaxY(measurement, 1)
</ins><span class="cx"> 
</span><span class="cx">                     for measurement in combinedkeys:
</span><del>-                        y[measurement] = []
-                        titles[measurement] = DataType.getTitle(measurement)
-                        ymaxes[measurement] = DataType.getMaxY(measurement, len(hosts))
</del><ins>+                        self.y[measurement] = []
+                        self.titles[measurement] = DataType.getTitle(measurement)
+                        self.ymaxes[measurement] = DataType.getMaxY(measurement, len(hosts))
</ins><span class="cx"> 
</span><span class="cx">                 # Get actual measurement data
</span><span class="cx">                 for host in hosts:
</span><span class="cx">                     for measurement in perhostkeys:
</span><span class="cx">                         ykey = &quot;{}={}&quot;.format(measurement, host)
</span><del>-                        stats = jline[&quot;pods&quot;][args.p]
-                        y[ykey].append(DataType.process(measurement, stats, (host,)))
</del><ins>+                        stats = jline[&quot;pods&quot;][self.pod]
+                        self.y[ykey].append(DataType.process(measurement, stats, (host,)))
</ins><span class="cx"> 
</span><span class="cx">                 for measurement in combinedkeys:
</span><del>-                    stats = jline[&quot;pods&quot;][args.p]
-                    y[measurement].append(DataType.process(measurement, stats, hosts))
</del><ins>+                    stats = jline[&quot;pods&quot;][self.pod]
+                    self.y[measurement].append(DataType.process(measurement, stats, hosts))
</ins><span class="cx"> 
</span><del>-                line = logfile.readline()
-                if ctr &gt; line_start + line_count:
</del><ins>+                line = self.logfile.readline()
+                if self.line_count != -1 and ctr &gt; self.line_start + self.line_count:
</ins><span class="cx">                     break
</span><span class="cx"> 
</span><span class="cx">         # Offset data that is averaged over the previous minute. Also determine
</span><span class="lines">@@ -577,89 +608,176 @@
</span><span class="cx">         for host in hosts:
</span><span class="cx">             for measurement in perhostkeys:
</span><span class="cx">                 ykey = &quot;{}={}&quot;.format(measurement, host)
</span><del>-                overall_ymax[measurement] = max(overall_ymax[measurement], max(y[ykey]))
</del><ins>+                overall_ymax[measurement] = max(overall_ymax[measurement], max(self.y[ykey]))
</ins><span class="cx">                 if DataType.skip(measurement):
</span><del>-                    y[ykey] = y[ykey][60:]
-                    y[ykey].extend([None] * 60)
</del><ins>+                    self.y[ykey] = self.y[ykey][60:]
+                    self.y[ykey].extend([None] * 60)
</ins><span class="cx">         for host in hosts:
</span><span class="cx">             for measurement in perhostkeys:
</span><span class="cx">                 ykey = &quot;{}={}&quot;.format(measurement, host)
</span><del>-                ymaxes[ykey] = overall_ymax[measurement]
</del><ins>+                self.ymaxes[ykey] = overall_ymax[measurement]
</ins><span class="cx"> 
</span><span class="cx">         for measurement in combinedkeys:
</span><span class="cx">             if DataType.skip(measurement):
</span><del>-                y[measurement] = y[measurement][60:]
-                y[measurement].extend([None] * 60)
</del><ins>+                self.y[measurement] = self.y[measurement][60:]
+                self.y[measurement].extend([None] * 60)
</ins><span class="cx"> 
</span><span class="cx"> 
</span><del>-    # Data for a single host, with jobs queued detail for all hosts
-#    singleHost((
-#        CPUDataType.key,
-#        RequestsDataType.key,
-#        ResponseDataType.key,
-#        JobsCompletedDataType.key,
-#        JobQueueDataType.key + &quot;-SCHEDULE&quot;,
-#        JobQueueDataType.key + &quot;-PUSH&quot;,
-#        JobQueueDataType.key,
-#    ))
</del><ins>+    def run(self, mode, *args):
+        getattr(self, mode)(*args)
</ins><span class="cx"> 
</span><del>-    # Data aggregated for all hosts - job detail
-#    combinedHosts((
-#        CPUDataType.key,
-#        RequestsDataType.key,
-#        ResponseDataType.key,
-#        JobsCompletedDataType.key,
-#        JobQueueDataType.key + &quot;-SCHEDULE&quot;,
-#        JobQueueDataType.key + &quot;-PUSH&quot;,
-#        JobQueueDataType.key,
-#    ))
</del><span class="cx"> 
</span><del>-    # Generic aggregated data for all hosts
-    combinedHosts((
-        CPUDataType.key,
-        RequestsDataType.key,
-        ResponseDataType.key,
-        JobsCompletedDataType.key,
-        JobQueueDataType.key,
-    ))
</del><ins>+    def plot(self):
+        # Generate a single stacked plot of the data
+        plotmax = len(self.y.keys())
+        plt.figure(figsize=(18.5, min(5 + len(self.y.keys()), 18)))
+        for plotnum, measurement in enumerate(self.y.keys()):
+            plt.subplot(len(self.y), 1, plotnum + 1)
+            plotSeries(self.titles[measurement], self.x, self.y[measurement], 0, self.ymaxes[measurement], plotnum == plotmax - 1)
+        if self.save:
+            plt.savefig(&quot;.&quot;.join((os.path.expanduser(self.logname), self.mode, &quot;png&quot;,)), orientation=&quot;landscape&quot;, format=&quot;png&quot;)
+        if not self.noshow:
+            plt.show()
</ins><span class="cx"> 
</span><span class="cx"> 
</span><del>-    # Data aggregated for all hosts - method detail
-#    combinedHosts((
-#        CPUDataType.key,
-#        RequestsDataType.key,
-#        ResponseDataType.key,
-#        MethodCountDataType.key + &quot;-PUT ics&quot;,
-#        MethodCountDataType.key + &quot;-REPORT cal-home-sync&quot;,
-#        MethodCountDataType.key + &quot;-PROPFIND Calendar Home&quot;,
-#        MethodCountDataType.key + &quot;-REPORT cal-sync&quot;,
-#        MethodCountDataType.key + &quot;-PROPFIND Calendar&quot;,
-#    ))
</del><span class="cx"> 
</span><del>-    # Per-host CPU, and total CPU
-#    perHost((
-#        RequestsDataType.key,
-#    ), (
-#        CPUDataType.key,
-#    ))
</del><ins>+def main():
</ins><span class="cx"> 
</span><del>-    # Per-host job completion, and total CPU, total jobs queued
-#    perHost((
-#        JobsCompletedDataType.key,
-#    ), (
-#        CPUDataType.key,
-#        JobQueueDataType.key,
-#    ))
</del><ins>+    selectMode = {
+        &quot;basic&quot;:
+            # Generic aggregated data for all hosts
+            (
+                &quot;combinedHosts&quot;,
+                (
+                    CPUDataType.key,
+                    RequestsDataType.key,
+                    ResponseDataType.key,
+                    JobsCompletedDataType.key,
+                    JobQueueDataType.key,
+                )
+            ),
+        &quot;basicjob&quot;:
+            # Data aggregated for all hosts - job detail
+            (
+                &quot;combinedHosts&quot;,
+                (
+                    CPUDataType.key,
+                    RequestsDataType.key,
+                    ResponseDataType.key,
+                    JobsCompletedDataType.key,
+                    JobQueueDataType.key + &quot;-SCHEDULE&quot;,
+                    JobQueueDataType.key + &quot;-PUSH&quot;,
+                    JobQueueDataType.key,
+                ),
+            ),
+        &quot;basicschedule&quot;:
+            # Data aggregated for all hosts - schedule work detail
+            (
+                &quot;combinedHosts&quot;,
+                (
+                    CPUDataType.key,
+                    JobsCompletedDataType.key,
+                    JobQueueDataType.key + &quot;-SCHEDULE_ORGANIZER_WORK&quot;,
+                    JobQueueDataType.key + &quot;-SCHEDULE_ORGANIZER_SEND_WORK&quot;,
+                    JobQueueDataType.key + &quot;-SCHEDULE_REPLY_WORK&quot;,
+                    JobQueueDataType.key + &quot;-SCHEDULE_AUTO_REPLY_WORK&quot;,
+                    JobQueueDataType.key + &quot;-SCHEDULE_REFRESH_WORK&quot;,
+                    JobQueueDataType.key + &quot;-PUSH&quot;,
+                    JobQueueDataType.key,
+                ),
+            ),
+        &quot;basicmethod&quot;:
+            # Data aggregated for all hosts - method detail
+            (
+                &quot;combinedHosts&quot;,
+                (
+                    CPUDataType.key,
+                    RequestsDataType.key,
+                    ResponseDataType.key,
+                    MethodCountDataType.key + &quot;-PUT ics&quot;,
+                    MethodCountDataType.key + &quot;-REPORT cal-home-sync&quot;,
+                    MethodCountDataType.key + &quot;-PROPFIND Calendar Home&quot;,
+                    MethodCountDataType.key + &quot;-REPORT cal-sync&quot;,
+                    MethodCountDataType.key + &quot;-PROPFIND Calendar&quot;,
+                ),
+            ),
</ins><span class="cx"> 
</span><del>-    # Generate a single stacked plot of the data
-    plotmax = len(y.keys())
-    for plotnum, measurement in enumerate(y.keys()):
-        plt.subplot(len(y), 1, plotnum + 1)
-        plotSeries(titles[measurement], x, y[measurement], 0, ymaxes[measurement], plotnum == plotmax - 1)
-    plt.show()
</del><ins>+        &quot;hostrequests&quot;:
+            # Per-host requests, and total requests &amp; CPU
+            (
+                &quot;perHost&quot;,
+                (RequestsDataType.key,),
+                (
+                    RequestsDataType.key,
+                    CPUDataType.key,
+                ),
+            ),
+        &quot;hostcpu&quot;:
+            # Per-host CPU, and total CPU
+            (
+                &quot;perHost&quot;,
+                (CPUDataType.key,),
+                (
+                    RequestsDataType.key,
+                    CPUDataType.key,
+                ),
+            ),
+        &quot;hostcompleted&quot;:
+            # Per-host job completion, and total CPU, total jobs queued
+            (
+                &quot;perHost&quot;,
+                (JobsCompletedDataType.key,),
+                (
+                    CPUDataType.key,
+                    JobQueueDataType.key,
+                ),
+            ),
+    }
</ins><span class="cx"> 
</span><ins>+    parser = ArgumentParser(
+        formatter_class=MyHelpFormatter,
+        description=&quot;Dashboard time series processor.&quot;,
+        epilog=&quot;&quot;&quot;Available modes:
</ins><span class="cx"> 
</span><ins>+basic - stacked plots of total CPU, total request count, total average response
+    time, completed jobs, and job queue size.
</ins><span class="cx"> 
</span><ins>+basicjob - as per basic but with queued SCHEDULE_*_WORK and queued
+    PUSH_NOTIFICATION_WORK plots.
+
+basicschedule - stacked plots of total CPU, completed jobs, each queued
+    SCHEDULE_*_WORK, queued PUSH_NOTIFICATION_WORK, and overall job queue size.
+
+basicmethod - stacked plots of total CPU, total request count, total average
+    response time, PUT-ics, REPORT cal-home-sync, PROPFIND Calendar Home, REPORT
+    cal-sync, and PROPFIND Calendar.
+
+hostrequests - stacked plots of per-host request counts, total request count,
+    and total CPU.
+
+hostcpu - stacked plots of per-host CPU, total request count, and total CPU.
+
+hostcompleted - stacked plots of per-host completed jobs, total CPU, and job
+    queue size.
+&quot;&quot;&quot;,
+    )
+    parser.add_argument(&quot;-l&quot;, default=SUPPRESS, required=True, help=&quot;Log file to process&quot;)
+    parser.add_argument(&quot;-p&quot;, default=SUPPRESS, help=&quot;Name of pod to analyze&quot;)
+    parser.add_argument(&quot;-s&quot;, default=SUPPRESS, help=&quot;Name of server to analyze&quot;)
+    parser.add_argument(&quot;--save&quot;, action=&quot;store_true&quot;, help=&quot;Save plot PNG image&quot;)
+    parser.add_argument(&quot;--noshow&quot;, action=&quot;store_true&quot;, help=&quot;Don't show the plot on screen&quot;)
+    parser.add_argument(&quot;--start&quot;, type=int, default=0, help=&quot;Log line to start from&quot;)
+    parser.add_argument(&quot;--count&quot;, type=int, default=-1, help=&quot;Number of log lines to process from start&quot;)
+    parser.add_argument(&quot;--mode&quot;, default=&quot;basic&quot;, choices=sorted(selectMode.keys()), help=&quot;Type of plot to produce&quot;)
+    parser.add_argument(&quot;-v&quot;, action=&quot;store_true&quot;, help=&quot;Verbose&quot;)
+    args = parser.parse_args()
+
+    calculator = Calculator(args)
+    calculator.run(*selectMode[args.mode])
+    calculator.plot()
+
+
+
</ins><span class="cx"> def plotSeries(title, x, y, ymin=None, ymax=None, last_subplot=True):
</span><span class="cx">     &quot;&quot;&quot;
</span><span class="cx">     Plot the chosen dataset key for each scanned data file.
</span><span class="lines">@@ -684,9 +802,9 @@
</span><span class="cx">         plt.ylim(ymin=ymin)
</span><span class="cx">     if ymax is not None:
</span><span class="cx">         plt.ylim(ymax=ymax)
</span><del>-    plt.minorticks_on()
</del><ins>+    plt.xlim(min(x), max(x))
+    plt.xticks(range(min(x), max(x) + 1, 60))
</ins><span class="cx">     plt.grid(True, &quot;major&quot;, &quot;x&quot;, alpha=0.5, linewidth=0.5)
</span><del>-    plt.grid(True, &quot;minor&quot;, &quot;x&quot;, alpha=0.5, linewidth=0.5)
</del><span class="cx"> 
</span><span class="cx"> if __name__ == &quot;__main__&quot;:
</span><span class="cx">     main()
</span></span></pre></div>
<a id="CalendarServertrunkcalendarservertoolsdashviewpy"></a>
<div class="modfile"><h4>Modified: CalendarServer/trunk/calendarserver/tools/dashview.py (15692 => 15693)</h4>
<pre class="diff"><span>
<span class="info">--- CalendarServer/trunk/calendarserver/tools/dashview.py        2016-06-23 14:03:48 UTC (rev 15692)
+++ CalendarServer/trunk/calendarserver/tools/dashview.py        2016-06-23 16:35:07 UTC (rev 15693)
</span><span class="lines">@@ -14,6 +14,8 @@
</span><span class="cx"> # See the License for the specific language governing permissions and
</span><span class="cx"> # limitations under the License.
</span><span class="cx"> ##
</span><ins>+from argparse import HelpFormatter, SUPPRESS, OPTIONAL, ZERO_OR_MORE, \
+    ArgumentParser
</ins><span class="cx"> 
</span><span class="cx"> &quot;&quot;&quot;
</span><span class="cx"> A curses (or plain text) based dashboard for viewing various aspects of the
</span><span class="lines">@@ -22,7 +24,6 @@
</span><span class="cx"> 
</span><span class="cx"> from collections import OrderedDict
</span><span class="cx"> from operator import itemgetter
</span><del>-import argparse
</del><span class="cx"> import collections
</span><span class="cx"> import curses.panel
</span><span class="cx"> import errno
</span><span class="lines">@@ -41,8 +42,32 @@
</span><span class="cx"> 
</span><span class="cx"> 
</span><span class="cx"> 
</span><ins>+class MyHelpFormatter(HelpFormatter):
+    &quot;&quot;&quot;
+    Help message formatter which adds default values to argument help and
+    retains formatting of all help text.
+    &quot;&quot;&quot;
+
+    def _fill_text(self, text, width, indent):
+        return ''.join([indent + line for line in text.splitlines(True)])
+
+
+    def _get_help_string(self, action):
+        help = action.help
+        if '%(default)' not in action.help:
+            if action.default is not SUPPRESS:
+                defaulting_nargs = [OPTIONAL, ZERO_OR_MORE]
+                if action.option_strings or action.nargs in defaulting_nargs:
+                    help += ' (default: %(default)s)'
+        return help
+
+
+
</ins><span class="cx"> def main():
</span><del>-    parser = argparse.ArgumentParser(description=&quot;Dashboard collector viewer service for CalendarServer.&quot;)
</del><ins>+    parser = ArgumentParser(
+        formatter_class=MyHelpFormatter,
+        description=&quot;Dashboard collector viewer service for CalendarServer.&quot;,
+    )
</ins><span class="cx">     parser.add_argument(&quot;-s&quot;, default=&quot;localhost:8200&quot;, help=&quot;Dashboard collector service host:port&quot;)
</span><span class="cx">     args = parser.parse_args()
</span><span class="cx"> 
</span><span class="lines">@@ -509,17 +534,6 @@
</span><span class="cx"> 
</span><span class="cx">     @staticmethod
</span><span class="cx">     def aggregator_jobs(serversdata):
</span><del>-#        results = OrderedDict()
-#        for server_data in serversdata:
-#            for job_name, job_details in server_data.items():
-#                if job_name not in results:
-#                    results[job_name] = OrderedDict()
-#                for detail_name, detail_value in job_details.items():
-#                    if detail_name in results[job_name]:
-#                        results[job_name][detail_name] += detail_value
-#                    else:
-#                        results[job_name][detail_name] = detail_value
-#        return results
</del><span class="cx">         return serversdata[0]
</span><span class="cx"> 
</span><span class="cx"> 
</span></span></pre></div>
<a id="CalendarServertrunkdocAdminDashboardmd"></a>
<div class="addfile"><h4>Added: CalendarServer/trunk/doc/Admin/Dashboard.md (0 => 15693)</h4>
<pre class="diff"><span>
<span class="info">--- CalendarServer/trunk/doc/Admin/Dashboard.md                                (rev 0)
+++ CalendarServer/trunk/doc/Admin/Dashboard.md        2016-06-23 16:35:07 UTC (rev 15693)
</span><span class="lines">@@ -0,0 +1,208 @@
</span><ins>+# CalendarServer Dashboard Service
+
+## Overview
+
+The CalendarServer dashboard service is a way to visualize internal CalendarServer performance data, including HTTP, system, directory, and job queue statistics. At a high level it works as follows:
+
+1. CalendarServer collects statistics internally and makes them available via a &quot;stats socket&quot; from which the data can be read.
+2. A `dashcollect` tool periodically reads data from one or more CalendarServer hosts or pods, stores that data in a log file, and makes the most recent data available for reading via a TCP socket.
+3. A `dashview` tool can be run in a terminal window to show the statistics in multiple tables, using the `curses` library.
+4. A `dashtime` tool can be run to process the log file generated by `dashcollect` and display various plots of data over time.
+
+## Detail
+
+### Stats socket
+
+The CalendarServer &quot;stats socket&quot; needs to be enabled in `caldavd.plist` in order for the dashboard service to be active. To do that, make sure the following plist key is present:
+
+    &lt;key&gt;Stats&lt;/key&gt;
+    &lt;dict&gt;
+      &lt;key&gt;EnableTCPStatsSocket&lt;/key&gt;
+      &lt;true/&gt;
+    &lt;/dict&gt;
+
+The default port for the &quot;stats socket&quot; is 8100, and can be changed by adding a `TCPStatsPort` item to the above plist key:
+
+    &lt;key&gt;Stats&lt;/key&gt;
+    &lt;dict&gt;
+      &lt;key&gt;EnableTCPStatsSocket&lt;/key&gt;
+      &lt;true/&gt;
+      &lt;key&gt;TCPStatsPort&lt;/key&gt;
+      &lt;integer&gt;8100&lt;/integer&gt;
+    &lt;/dict&gt;
+
+CalendarServer can also provide a unix socket to read stats from, but that is only useful when the `dashcollect` tool is always run locally.
+
+Internally CalendarServer collects the following data:
+
+1. HTTP request data is collected via the access.log entries generated by each HTTP request. Request data is collected during each wall-clock minute, then averaged over periods of 1 minute, 5 minutes, and 1 hour. In addition, a snapshot of the HTTP request handling state of each child process is generated each time the &quot;stats socket&quot; is read from.
+2. System statistics (CPU use, memory use, etc) are collected once per second.
+3. Job queue statistics are collected each time the &quot;stats socket&quot; is read from. These include a snapshot of the overall state of the job queue table, as well as per-host data on how many jobs have been completed and their average execution time on each child process.
+4. Directory statistics are collected as each directory request is executed.
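
As an illustration of how a monitoring client might consume the stats socket, here is a minimal sketch. It assumes the socket returns a single JSON document per connection and then closes, which is how simple stats sockets commonly behave; the exact framing used by CalendarServer may differ, so treat this as a starting point only:

```python
import json
import socket

def read_stats_socket(host="localhost", port=8100, timeout=5):
    # Hypothetical one-shot reader: connect, read until the server closes
    # the connection, then parse the accumulated bytes as one JSON document.
    chunks = []
    with socket.create_connection((host, port), timeout=timeout) as sock:
        while True:
            data = sock.recv(65536)
            if not data:
                break
            chunks.append(data)
    return json.loads(b"".join(chunks).decode("utf-8"))
```

In practice only `dashcollect` should read this socket directly (see below); other consumers should read from the `dashcollect` socket instead.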
+
+### `dashcollect` tool
+
+The `dashcollect` tool is a command line tool that periodically reads from one or more CalendarServer &quot;stats sockets&quot; and logs the resulting JSON data to a log file, as well as making the most recent data available to be read over a TCP socket. The purpose of this tool is to have a single reader of the &quot;stats sockets&quot; of a CalendarServer service, rather than multiple tools reading from the CalendarServer service directly and creating additional load that could impact client-facing performance. The `dashcollect` data can be read by as many tools as needed without affecting performance: if ten people want to watch the CalendarServer performance over time, only one process is reading the `stats socket` on CalendarServer, while ten processes read from the `dashcollect` socket.
+
+The JSON log file produced by `dashcollect` is in the form of a [JSON text sequence](https://tools.ietf.org/html/rfc7464). In addition, each JSON record in the sequence can be compressed using zlib and encoded as base64 text (compression greatly reduces the size of the log file and is recommended). The JSON data read from the `dashcollect` TCP socket is standard JSON text (all text is utf-8 encoded).
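
A reader for such a log file can be sketched as follows. The `decode_log_line` helper and its framing assumptions (one record per line, optionally zlib-compressed and base64-encoded when `-z` is used) are illustrative, not the tool's actual implementation:

```python
import base64
import json
import zlib

RS = "\x1e"  # RFC 7464 record separator

def decode_log_line(line, compressed=False):
    # Decode one dashcollect log record. With compression enabled, each
    # record is assumed to be zlib-compressed then base64-encoded.
    line = line.lstrip(RS).strip()
    if compressed:
        line = zlib.decompress(base64.b64decode(line)).decode("utf-8")
    return json.loads(line)
```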
+
+#### Help
+
+        dashcollect.py --help
+        usage: dashcollect.py [-h] -f F -l L [-n] [-s S] [-t] [-z] [-v]
+        
+        Dashboard service for CalendarServer.
+        
+        optional arguments:
+          -h, --help  show this help message and exit
+          -f F        Server config file (see docstring for details)
+          -l L        Log file directory
+          -n          Create a new log file when starting, existing log file is
+                      deleted (default: False)
+          -s S        Make JSON data available on the specified host:port (default:
+                      localhost:8200)
+          -t          Rotate log files every hour, otherwise once per day (default:
+                      False)
+          -z          zlib compress json records in log files (default: False)
+          -v          Verbose (default: False)
+        
+        To view the docstring, run: pydoc calendarserver/tools/dashcollect.py
+
+* The `-f` option must be present and point to a config file (see below).
+* The `-l` option must be present and point to an existing directory where the log files will be written. Log file names have the prefix `dashboard` followed by a timestamp and the file extension `.log`. Log files are rotated once per day, or once per hour as governed by the `-t` option.
+* For CalendarServer services generating lots of data, the `-t` option is recommended to keep each log file to a reasonable size. Without this option there will be one log file per day (with the file name containing the date). With this option there will be one log file per hour (with the file name containing the date and hour).
+* When generating lots of data, it is recommended that the `-z` option be used to compress the JSON text sequences in the log files.
+
+#### Config file
+
+The config file (specified via `-f`) is used to define the set of CalendarServer pods and hosts to read stats data from. The file contains JSON data; for example:
+
+    {
+        &quot;title&quot;: &quot;My CalDAV service&quot;,
+        &quot;pods&quot;: {
+            &quot;podA&quot;: {
+                &quot;description&quot;: &quot;Main pod&quot;,
+                &quot;servers&quot;: [
+                    &quot;podAhost1.example.com:8100&quot;,
+                    &quot;podAhost2.example.com:8100&quot;
+                ]
+            },
+            &quot;podB&quot;: {
+                &quot;description&quot;: &quot;Development pod&quot;,
+                &quot;servers&quot;: [
+                    &quot;podBhost1.example.com:8100&quot;,
+                    &quot;podBhost2.example.com:8100&quot;
+                ]
+            }
+        }
+    }
+    
+* The `title` member is a descriptive title for the service.
+* The `pods` object contains one item for each CalendarServer pod being monitored. The names used for the object keys will appear in the logs.
+* The `description` member is a description for each pod.
+* The `servers` member is an array of `host:port` values, one for each host in the pod, with the port set to the TCP stats socket port used by that host.
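
A quick way to sanity-check a config file before pointing `dashcollect` at it is a small loader like the following (a hypothetical helper; the real tool may accept keys not validated here):

```python
import json

def load_dashcollect_config(path):
    # Minimal sanity check for the config layout shown above.
    with open(path) as f:
        config = json.load(f)
    for required in ("title", "pods"):
        if required not in config:
            raise ValueError("missing %r member" % required)
    for pod_name, pod in config["pods"].items():
        for server in pod.get("servers", []):
            host, _, port = server.rpartition(":")
            if not host or not port.isdigit():
                raise ValueError("bad host:port %r in pod %r" % (server, pod_name))
    return config
```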
+
+### `dashview` tool
+
+The `dashview` tool is a command line tool that periodically reads from a `dashcollect` socket and displays the data in a curses-based terminal, using a different table view for each class of data. The user can control which tables are visible at any time. The tool can show the data for any host in a multi-pod/multi-host CalendarServer service, and can also show the aggregated data for all hosts in a pod. This tool typically requires a large terminal window for viewing, and the terminal needs good curses support. It replaces the older `dashboard` tool, which read stats directly from the CalendarServer hosts and is now deprecated, since multiple users running it caused service performance issues.
+
+#### Help
+
+        usage: dashview.py [-h] [-s S]
+        
+        Dashboard collector viewer service for CalendarServer.
+        
+        optional arguments:
+          -h, --help  show this help message and exit
+          -s S        Dashboard collector service host:port (default: localhost:8200)
+
+* The `-s` option specifies the `dashcollect` service host and port where JSON data can be read from.
+
+#### Panels
+
+The visibility of each panel can be controlled via a &quot;hotkey&quot;. In addition, there are hotkeys that control the visibility of groups of panels:
+
+* `h` toggle display of the _Help_ (hotkeys) panel
+* `s` toggle display of the _System Status_ panel
+* `r` toggle display of the _HTTP Requests_ panel
+* `c` toggle display of the _HTTP Slots_ panel
+* `m` toggle display of the _HTTP Methods_ panel
+* `w` toggle display of the _Job Assignments_ panel
+* `j` toggle display of the _Job Activity_ panel
+* `d` toggle display of the _Directory Service_ panel
+* `H` display all of the HTTP panels only
+* `J` display all of the Jobs panels only
+* `D` display all of the Directory panels only
+* `a` display all panels
+* `n` display no panels
+* ` ` (space) toggle pause
+* `t` toggle Update Speed between 0.1 secs and 1 sec.
+* `x` toggle Aggregate Mode
+* `q` Quit
+
+Arrow keys can be used to select which pod or host data to view:
+
+* `up` &amp; `down` - move between the list of pods
+* `left` &amp; `right` - move between the list of hosts for the current pod
+
+### `dashtime` tool
+
+The `dashtime` tool produces plots of `dashcollect` logged data, to show how that data changes over time. Note that the Python `matplotlib` module is a requirement (run `pip install matplotlib` if you get an import error running the tool). The tool produces a set of vertically stacked plots showing the variation of various server stats over time. The stats can be aggregated for all hosts in a pod, shown for a single host, or shown as a combination of per-host plots and the aggregated values. Specific plot &quot;modes&quot; have been hard-coded to produce common sets of plots. Plots are displayed on screen by default, from where they can be saved, but there is also an option to automatically save a PNG image.
+
+#### Help
+
+        dashtime.py --help
+        usage: dashtime.py [-h] -l L [-p P] [-s S] [--save] [--noshow] [--start START]
+                           [--count COUNT]
+                           [--mode {basic,basicjob,basicmethod,basicschedule,hostcompleted,hostcpu,hostrequests}]
+                           [-v]
+        
+        Dashboard time series processor.
+        
+        optional arguments:
+          -h, --help            show this help message and exit
+          -l L                  Log file to process
+          -p P                  Name of pod to analyze
+          -s S                  Name of server to analyze
+          --save                Save plot PNG image (default: False)
+          --noshow              Don't show the plot on screen (default: False)
+          --start START         Log line to start from (default: 0)
+          --count COUNT         Number of log lines to process from start (default:
+                                -1)
+          --mode {basic,basicjob,basicmethod,basicschedule,hostcompleted,hostcpu,hostrequests}
+                                Type of plot to produce (default: basic)
+          -v                    Verbose (default: False)
+        
+        Available modes:
+        
+        basic - stacked plots of total CPU, total request count, total average response
+            time, completed jobs, and job queue size.
+        
+        basicjob - as per basic but with queued SCHEDULE_*_WORK and queued
+            PUSH_NOTIFICATION_WORK plots.
+        
+        basicschedule - stacked plots of total CPU, completed jobs, each queued
+            SCHEDULE_*_WORK, queued PUSH_NOTIFICATION_WORK, and overall job queue size.
+        
+        basicmethod - stacked plots of total CPU, total request count, total average
+            response time, PUT-ics, REPORT cal-home-sync, PROPFIND Calendar Home, REPORT
+            cal-sync, and PROPFIND Calendar.
+        
+        hostrequests - stacked plots of per-host request counts, total request count,
+            and total CPU.
+        
+        hostcpu - stacked plots of per-host CPU, total request count, and total CPU.
+        
+        hostcompleted - stacked plots of per-host completed jobs, total CPU, and job
+            queue size.
+
+* The `-l` option must be present and point to a `dashcollect` log file.
+* The `-p` option defines the pod to view data for (if not present, the first pod in alphabetical order is used).
+* The `-s` option defines a specific server to view data for (there are currently no modes that use this).
+* The `--save` option, when present, will cause a PNG image of the plots to be saved to disk. The image file has the same name as the log file, but with the mode name and a `.png` suffix appended, and it will be created in the same directory as the log file.
+* The `--noshow` option, when present, suppresses display of the plots on screen.
+* The `--start` option specifies which line in the `dashcollect` log file to start reading from (default is the first line).
+* The `--count` option specifies the maximum number of lines to read from the start (default is all lines after the start).
+* The `--mode` option determines the type of data produced in the plots. Each mode is described in the help text above.
+
+Note that the time scale on the plots is typically one second, as that is the polling period used by `dashcollect`. The HTTP-related data comes from one minute averages, so it will look &quot;blocky&quot; compared to the once-per-second values of the other stats. The one minute average data is shifted back by 60 seconds to better match it to the time over which the data was actually collected.
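
The 60-second shift described above can be pictured with a tiny helper (illustrative only, not the tool's actual code):

```python
def shift_minute_averages(samples, offset=60):
    # Re-timestamp one-minute-average samples so each value lines up with
    # the start of the minute it summarizes rather than the end. Input is
    # a list of (seconds, value) pairs.
    return [(t - offset, v) for (t, v) in samples]
```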
</ins></span></pre>
</div>
</div>

</body>
</html>