Author: justi9
Date: 2010-05-05 11:58:15 -0400 (Wed, 05 May 2010)
New Revision: 3946
Added:
mgmt/newdata/cumin/bin/cumin-admin
mgmt/newdata/cumin/bin/cumin-admin-test
mgmt/newdata/cumin/bin/cumin-data
mgmt/newdata/cumin/bin/cumin-database
mgmt/newdata/cumin/bin/cumin-web
mgmt/newdata/cumin/instance/model
mgmt/newdata/cumin/model/
mgmt/newdata/cumin/model/Makefile
mgmt/newdata/cumin/model/condor.xml
mgmt/newdata/cumin/model/cumin.xml
mgmt/newdata/cumin/model/qpid-acl.xml
mgmt/newdata/cumin/model/qpid-cluster.xml
mgmt/newdata/cumin/model/qpid-store.xml
mgmt/newdata/cumin/model/qpid.xml
mgmt/newdata/cumin/model/rosemary.xml
mgmt/newdata/cumin/model/sesame.xml
mgmt/newdata/cumin/python/cumin/admin.py
mgmt/newdata/cumin/python/cumin/database.py
mgmt/newdata/cumin/python/cumin/session.py
mgmt/newdata/mint/python/mint/session.py
Removed:
mgmt/newdata/bin/devel-reload-database
mgmt/newdata/cumin/instance/sql
mgmt/newdata/cumin/instance/xml
mgmt/newdata/cumin/python/cumin/account/model.py
mgmt/newdata/mint/bin/mint-admin
mgmt/newdata/mint/bin/mint-admin-test
mgmt/newdata/mint/bin/mint-bench
mgmt/newdata/mint/bin/mint-database
mgmt/newdata/mint/bin/mint-demo
mgmt/newdata/mint/bin/mint-server
mgmt/newdata/mint/bin/mint-vacuumdb
mgmt/newdata/mint/python/mint/schema.py
mgmt/newdata/mint/python/mint/schemalocal.py
mgmt/newdata/mint/python/mint/schemaparser.py
mgmt/newdata/mint/python/mint/sql.py
mgmt/newdata/mint/sql/
mgmt/newdata/mint/xml
mgmt/newdata/rosemary/xml/
Modified:
mgmt/newdata/README
mgmt/newdata/bin/devel-check
mgmt/newdata/bin/reschema
mgmt/newdata/cumin/bin/cumin
mgmt/newdata/cumin/instance/etc/cumin.conf
mgmt/newdata/cumin/python/cumin/account/main.py
mgmt/newdata/cumin/python/cumin/account/widgets.py
mgmt/newdata/cumin/python/cumin/config.py
mgmt/newdata/cumin/python/cumin/grid/collector.py
mgmt/newdata/cumin/python/cumin/grid/main.py
mgmt/newdata/cumin/python/cumin/grid/negotiator.py
mgmt/newdata/cumin/python/cumin/grid/pool.py
mgmt/newdata/cumin/python/cumin/grid/scheduler.py
mgmt/newdata/cumin/python/cumin/grid/slot.py
mgmt/newdata/cumin/python/cumin/grid/submission.py
mgmt/newdata/cumin/python/cumin/grid/submitter.py
mgmt/newdata/cumin/python/cumin/inventory/main.py
mgmt/newdata/cumin/python/cumin/inventory/system.py
mgmt/newdata/cumin/python/cumin/main.py
mgmt/newdata/cumin/python/cumin/messaging/binding.py
mgmt/newdata/cumin/python/cumin/messaging/broker.py
mgmt/newdata/cumin/python/cumin/messaging/brokergroup.py
mgmt/newdata/cumin/python/cumin/messaging/brokerlink.py
mgmt/newdata/cumin/python/cumin/messaging/connection.py
mgmt/newdata/cumin/python/cumin/messaging/exchange.py
mgmt/newdata/cumin/python/cumin/messaging/queue.py
mgmt/newdata/cumin/python/cumin/messaging/subscription.py
mgmt/newdata/cumin/python/cumin/messaging/test.py
mgmt/newdata/cumin/python/cumin/model.py
mgmt/newdata/cumin/python/cumin/objectframe.py
mgmt/newdata/cumin/python/cumin/objecttask.py
mgmt/newdata/cumin/python/cumin/parameters.py
mgmt/newdata/cumin/python/cumin/sqladapter.py
mgmt/newdata/cumin/python/cumin/stat.py
mgmt/newdata/cumin/python/cumin/test.py
mgmt/newdata/cumin/python/cumin/tools.py
mgmt/newdata/cumin/python/cumin/usergrid/model.py
mgmt/newdata/cumin/python/cumin/util.py
mgmt/newdata/cumin/python/cumin/widgets.py
mgmt/newdata/mint/python/mint/database.py
mgmt/newdata/mint/python/mint/demo.py
mgmt/newdata/mint/python/mint/expire.py
mgmt/newdata/mint/python/mint/main.py
mgmt/newdata/mint/python/mint/model.py
mgmt/newdata/mint/python/mint/newupdate.py
mgmt/newdata/mint/python/mint/tools.py
mgmt/newdata/mint/python/mint/util.py
mgmt/newdata/mint/python/mint/vacuum.py
mgmt/newdata/parsley/python/parsley/config.py
mgmt/newdata/parsley/python/parsley/threadingex.py
mgmt/newdata/rosemary/python/rosemary/model.py
mgmt/newdata/rosemary/python/rosemary/sqlquery.py
mgmt/newdata/wooly/python/wooly/__init__.py
mgmt/newdata/wooly/python/wooly/server.py
mgmt/newdata/wooly/python/wooly/wsgiserver/__init__.py
Log:
* The console now accepts some qmf 1.1 data
* Cumin no longer uses sqlobject; integrate RosemaryModel as the
implementation of CuminModel; delete old schema stuff and
sqlobject-based tooling
* Reorganize top-level daemons under the names cumin-web and
cumin-data (and eventually, cumin-agent)
* Consolidate config for all daemons in cumin.conf
* Move the model xml files under cumin
* Use a purpose-built qmf session in cumin instead of embedding a
full mint instance
* Many improvements to cumin-database, the tool for setting up the
postgresql instance
* (This one's exciting!) Fix the highly annoying hang on Ctrl-C we
were seeing from the cherrypy wsgiserver
* Improve the thread debugging code
* Repair the change password functionality
* Set a cursor on the session intended for db reads (and reads only)
* Update readme for streamlined installation
* Rename the existing RosemaryClass.get_object to get_object_by_id,
and offer a new get_object that takes criteria and returns a single
result (a usage sketch follows this list)
* We weren't properly vacuuming samples tables; fix that
* Remove ssl config that no longer applies to updated cherrypy
wsgiserver
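
As an aside for readers of this log, here is a minimal, hypothetical
sketch of the renamed lookups described above. The class handle, the
cursor argument, and the keyword-criteria form are assumptions made for
illustration; only the method names get_object_by_id and get_object come
from this commit.

    # Assumed usage only; the exact RosemaryClass signatures are not
    # shown in this revision.
    cls = app.model.com_redhat_cumin.User

    # The old lookup, now under a more explicit name: fetch by database id
    user = cls.get_object_by_id(cursor, 10)

    # The new get_object: pass criteria and get back a single result
    user = cls.get_object(cursor, name="guest")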
Modified: mgmt/newdata/README
===================================================================
--- mgmt/newdata/README 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/README 2010-05-05 15:58:15 UTC (rev 3946)
@@ -4,7 +4,7 @@
These instructions assume you have sudo installed. If not, you can
install it (as shown below) or you can instead su to root.
-To install sudo:
+To install sudo (on a yum-driven distro):
$ su -
$ yum install sudo
@@ -17,10 +17,9 @@
what you get in a typical Fedora install):
postgresql-server
- python-sqlobject
python-psycopg2
- $ sudo yum install postgresql-server python-sqlobject python-psycopg2
+ $ sudo yum install postgresql-server python-psycopg2
It also depends on the qpid python code. You can satisfy these
dependencies either by installing the python-qpid package, or by
@@ -66,68 +65,17 @@
haven't already done it, you'll need to initialize the postgres
service, edit permissions, and start it up.
-Initialize the postgresql data files:
+ $ sudo cumin-database stop # If necessary
+ $ sudo cumin-database configure
+ $ sudo cumin-database start
+ $ su -c "cumin-database create" # 'su -c' so the env inherits
- $ sudo su - postgres # Now you're the postgres user
- $ initdb -D /var/lib/pgsql/data
+At this point you should have a working database. Test it:
-Edit postgresql permissions:
+ $ sudo cumin-database check
- $ vi /var/lib/pgsql/data/pg_hba.conf
-
- [Add the following line, *before* the other similar lines]
-
- host cumin cumin 127.0.0.1/32 trust
-
-Alternative postgresql permissions:
-
- $ vi /var/lib/pgsql/data/pg_hba.conf
-
- [Add the following line, *before* the other similar lines]
-
- host cumin cumin 127.0.0.1/32 ident cumin
-
- $ vi /var/lib/pgsql/data/pg_ident.conf
-
- [Add the following lines at the bottom, substituting your user
- name for "youruser"]
-
- cumin youruser cumin
- cumin root cumin
-
-Start the postgresql service:
-
- $ exit # Back to your own user
- $ sudo /sbin/service postgresql start
- Starting postgresql service: [ OK ]
-
-Now you can create a database. First you have to switch to the
-postgres user, and then you can use the create* scripts.
-
-Create the postgresql database:
-
- $ sudo su - postgres # Become the postgres user again
- $ createuser --superuser cumin
- CREATE ROLE
- $ createdb --owner=cumin cumin
- CREATE DATABASE
- $ exit # Leave the postgres user
-
-At this point you should have a working database. Test it using psql:
-
- $ psql -d cumin -U cumin -h localhost
- Welcome to psql 8.2.7, the PostgreSQL interactive terminal.
- [...]
- cumin=# # Type \q to get out
-
-Now you can load the scheme definition.
-
- $ cumin-admin create-schema
- Executed 100 statements from file '/home/jross/checkouts/mgmt/cumin-test-0/sql/schema.sql'
- Executed 6 statements from file '/home/jross/checkouts/mgmt/cumin-test-0/sql/indexes.sql'
-
At this point you should have a working database and schema that you
-can connect to at postgresql://cumin@localhost/cumin. All that
+can connect to with 'psql -d cumin -U cumin -h localhost'. All that
remains is to add a cumin user:
Add a cumin user:
@@ -137,7 +85,6 @@
Confirm new password: # Re-type said password
User 'guest' is added
-
USING THE DEVEL ENVIRONMENT
---------------------------
@@ -151,12 +98,3 @@
export DEVEL_HOME="${HOME}/mgmt"
exec "${DEVEL_HOME}/bin/devel"
-
-
-SOME GOTCHAS YOU MIGHT RUN INTO
--------------------------------
-
-1. PostgreSQL "sameuser ident" authentication
-
- If you get an error about failed ident authentication, make sure
- you have an ident server installed and running.
Modified: mgmt/newdata/bin/devel-check
===================================================================
--- mgmt/newdata/bin/devel-check 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/bin/devel-check 2010-05-05 15:58:15 UTC (rev 3946)
@@ -44,7 +44,7 @@
print "Python %s" % sys.version
-smodules = ["qpid", "mllib", "sqlobject",
"psycopg2"]
+smodules = ["qpid", "mllib", "psycopg2"]
for smodule in smodules:
print "Module '%s' ->" % smodule,
Deleted: mgmt/newdata/bin/devel-reload-database
===================================================================
--- mgmt/newdata/bin/devel-reload-database 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/bin/devel-reload-database 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,12 +0,0 @@
-#!/bin/bash
-
-if [ -z "$1" -o -z "$2" ]; then
- echo "Usage: devel-reload-database DATABASE-USER DATABASE-NAME"
- exit 1
-fi
-
-psql -U "$1" -d "$2" -c "drop schema public cascade"
-psql -U "$1" -d "$2" -c "create schema public"
-psql -U "$1" -d "$2" -f "$DEVEL_HOME"/mint/sql/schema.sql
-
-python "$DEVEL_HOME"/cumin/python/cumin/demo.py
postgresql://"$1"@localhost/"$2"
Modified: mgmt/newdata/bin/reschema
===================================================================
--- mgmt/newdata/bin/reschema 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/bin/reschema 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,3 +1,5 @@
#!/bin/bash -ex
-exec mint-demo reload
+cumin-admin drop-schema
+cumin-admin create-schema
+cumin-admin add-user guest guest
Modified: mgmt/newdata/cumin/bin/cumin
===================================================================
--- mgmt/newdata/cumin/bin/cumin 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/bin/cumin 2010-05-05 15:58:15 UTC (rev 3946)
@@ -7,10 +7,10 @@
trap die EXIT
-mint-server &
+cumin-data &
mpid="$!"
-cumin-server &
+cumin-web &
cpid="$!"
while :; do
Added: mgmt/newdata/cumin/bin/cumin-admin
===================================================================
--- mgmt/newdata/cumin/bin/cumin-admin (rev 0)
+++ mgmt/newdata/cumin/bin/cumin-admin 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,144 @@
+#!/usr/bin/python
+
+from parsley.collectionsex import defaultdict
+
+from cumin import *
+from cumin.config import *
+from cumin.util import *
+
+def main():
+ config = CuminConfig()
+ values = config.parse()
+
+ parser = CuminOptionParser(values.data)
+
+ opts, args = parser.parse_args()
+
+ try:
+ name = args[0]
+ except IndexError:
+ parser.print_usage()
+ sys.exit(1)
+
+ name = name.replace("-", "_")
+
+ commands = globals()
+
+ try:
+ command = commands[name]
+ except KeyError:
+ print "Command '%s' is unknown" % name
+ sys.exit(1)
+
+ app = Cumin(config.home, opts.broker, opts.database)
+
+ app.check()
+ app.init()
+
+ conn = app.database.get_connection()
+ cursor = conn.cursor()
+
+ try:
+ command(app, cursor, opts, args[1:])
+
+ conn.commit()
+ finally:
+ cursor.close()
+
+def error(msg):
+ print msg
+ sys.exit(1)
+
+def print_schema(app, cursor, opts, args):
+ print app.admin.get_schema(),
+
+def create_schema(app, cursor, opts, args):
+ app.admin.create_schema(cursor)
+
+ app.admin.add_role(cursor, "user")
+ app.admin.add_role(cursor, "admin")
+
+ print "The schema is created"
+
+def drop_schema(app, cursor, opts, args):
+ app.admin.drop_schema(cursor)
+
+ print "The schema is dropped"
+
+def list_users(app, cursor, opts, args):
+ user_cls = app.model.com_redhat_cumin.User
+ role_cls = app.model.com_redhat_cumin.Role
+ mapping_cls = app.model.com_redhat_cumin.UserRoleMapping
+
+ query = SqlQuery(mapping_cls.sql_table)
+
+ SqlInnerJoin(query,
+ role_cls.sql_table,
+ role_cls._id.sql_column,
+ mapping_cls.sql_table._role_id)
+
+ users = user_cls.get_selection(cursor)
+
+ cols = (mapping_cls.sql_table._user_id, role_cls.name.sql_column)
+ sql = query.emit(cols)
+
+ cursor.execute(sql)
+
+ roles_by_user_id = defaultdict(list)
+
+ for id, name in cursor.fetchall():
+ roles_by_user_id[id].append(name)
+
+ print " ID Name Roles"
+ print "---- -------------------- --------------------"
+
+ for user in users:
+ try:
+ roles = ", ".join(roles_by_user_id[user._id])
+ except KeyError:
+ roles = ""
+
+ print "%4i %-20s %-20s" % (user._id, user.name, roles)
+
+ count = len(users)
+
+ print
+ print "(%i user%s found)" % (count, ess(count))
+
+def add_user(app, cursor, opts, args):
+ try:
+ name = args[0]
+ except IndexError:
+ error("NAME is required")
+
+ try:
+ password = args[1]
+ except IndexError:
+ password = prompt_password()
+
+ crypted = crypt_password(password)
+
+ role = app.admin.get_role(cursor, "user")
+
+ try:
+ user = app.admin.add_user(cursor, name, crypted)
+ except IntegrityError:
+ error("Error: a user called '%s' already exists" % name)
+
+ app.admin.add_assignment(cursor, user, role)
+
+ print "User '%s' is added" % name
+
+def remove_user(app, cursor, opts, args):
+ try:
+ name = args[0]
+ except IndexError:
+ error("NAME is required")
+
+ user = app.admin.get_user(cursor, name)
+ user.delete(cursor)
+
+ print "User '%s' is removed" % name
+
+if __name__ == "__main__":
+ main()
Property changes on: mgmt/newdata/cumin/bin/cumin-admin
___________________________________________________________________
Name: svn:executable
+ *
Added: mgmt/newdata/cumin/bin/cumin-admin-test
===================================================================
--- mgmt/newdata/cumin/bin/cumin-admin-test (rev 0)
+++ mgmt/newdata/cumin/bin/cumin-admin-test 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,33 @@
+#!/bin/bash
+
+id="test_${RANDOM}"
+code=0
+tmpdir=$(mktemp -d)
+trap "rm -rf ${tmpdir}" EXIT
+
+while read command; do
+ echo -n "Testing command '$command'..."
+
+ $command &> "${tmpdir}/output"
+
+ if [[ $? == 0 ]]; then
+ echo " OK"
+ else
+ echo
+ echo "Command failed with exit code $?"
+ echo "Output:"
+ cat "${tmpdir}/output"
+ code=1
+ fi
+done <<EOF
+cumin-admin --help
+cumin-admin add-user "$id" changeme
+cumin-admin remove-user "$id"
+EOF
+
+#cumin-admin remove-user "$id" --force
+#cumin-admin assign "$id" admin
+#cumin-admin unassign "$id" admin
+#cumin-admin list-users
+
+exit "$code"
Property changes on: mgmt/newdata/cumin/bin/cumin-admin-test
___________________________________________________________________
Name: svn:executable
+ *
Added: mgmt/newdata/cumin/bin/cumin-data
===================================================================
--- mgmt/newdata/cumin/bin/cumin-data (rev 0)
+++ mgmt/newdata/cumin/bin/cumin-data 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,41 @@
+#!/usr/bin/python
+
+from cumin.config import *
+from cumin.util import *
+from mint import *
+
+def main():
+ config = CuminConfig()
+ values = config.parse()
+
+ parser = CuminOptionParser(values.data)
+
+ opts, args = parser.parse_args()
+
+ setup_logging(opts)
+
+ model_dir = os.path.join(config.home, "model")
+
+ mint = Mint(model_dir, opts.broker, opts.database)
+
+ mint.check()
+ mint.init()
+
+ if opts.init_only:
+ return
+
+ mint.start()
+
+ try:
+ while True:
+ # print_threads()
+
+ sleep(5)
+ finally:
+ mint.stop()
+
+if __name__ == "__main__":
+ try:
+ main()
+ except KeyboardInterrupt:
+ pass
Property changes on: mgmt/newdata/cumin/bin/cumin-data
___________________________________________________________________
Name: svn:executable
+ *
Added: mgmt/newdata/cumin/bin/cumin-database
===================================================================
--- mgmt/newdata/cumin/bin/cumin-database (rev 0)
+++ mgmt/newdata/cumin/bin/cumin-database 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,151 @@
+#!/bin/bash -e
+
+if [[ "$EUID" != "0" ]]; then
+ echo "This script must be run as root"
+ exit 2
+fi
+
+pgdata="/var/lib/pgsql/data"
+pglog="${pgdata}/pg_log"
+pghbaconf="${pgdata}/pg_hba.conf"
+dbname="cumin"
+
+function check-environment {
+ which rpm > /dev/null
+ rpm -q postgresql-server > /dev/null
+}
+
+function check-server {
+ # Is it installed?
+ # Is it initialized?
+ # Is it running?
+
+ test -d "$pgdata" || {
+ echo "The database is not configured. Run 'cumin-database
configure'."
+ exit 1
+ }
+
+ /sbin/service postgresql status > /dev/null || {
+ echo "The database is not running. Run 'cumin-database
start'."
+ exit 1
+ }
+}
+
+function check-access {
+ psql -d cumin -U cumin -h localhost -c '\q' &> /dev/null || {
+ echo "The database is not accessible. Run 'cumin-database
create'"
+ exit 1
+ }
+}
+
+function format-output {
+ while read line; do
+ echo " | $line"
+ done
+}
+
+function run {
+ echo " | \$ $1"
+
+ if [[ "$2" ]]; then
+ su - postgres -c "$1" | format-output 2>&1
+ else
+ $1 | format-output 2>&1
+ fi
+
+ return ${PIPESTATUS[0]}
+}
+
+case "$1" in
+ start)
+ run "/sbin/service postgresql start"
+ echo "The database server is started."
+ ;;
+ stop)
+ run "/sbin/service postgresql stop"
+ echo "The database server is stopped."
+ ;;
+ configure)
+ check-environment
+
+ if grep ${dbname} ${pghbaconf} &> /dev/null; then
+ echo "The database server appears to have been configured
already."
+ exit 1
+ fi
+
+ if /sbin/service postgresql status > /dev/null; then
+ echo "The database server is running. To proceed with"
+ echo "configuration, it must be stopped."
+ exit 1
+ fi
+
+ if [[ ! -d "$pgdata" ]]; then
+ run "initdb --pgdata='$pgdata' --auth='ident
sameuser'" postgres
+ run "mkdir '$pglog'" postgres
+ run "chmod 700 '$pglog'" postgres
+
+ /sbin/restorecon -R "$pgdata"
+ fi
+
+ python <<EOF
+from cumin.database import modify_pghba_conf
+modify_pghba_conf('${pghbaconf}', '${dbname}', 'cumin')
+EOF
+
+ echo "The database server is configured."
+ ;;
+ check)
+ echo -n "Checking environment ... "
+ check-environment && echo "OK"
+
+ echo -n "Checking server ........ "
+ check-server && echo "OK"
+
+ echo -n "Checking access ........ "
+ check-access && echo "OK"
+
+ # check-data
+
+ echo "The database is ready."
+ ;;
+ create)
+ check-environment
+ check-server
+
+ run "createuser --superuser ${dbname}" postgres
+ run "createdb --owner=${dbname} ${dbname}" postgres
+
+ check-access
+
+ run "cumin-admin create-schema"
+ # run "cumin-admin add-role user"
+ # run "cumin-admin add-role admin"
+
+ echo "The database is initialized."
+ ;;
+ drop)
+ check-environment
+ check-server
+
+ run "dropdb ${dbname}" postgres
+ run "dropuser ${dbname}" postgres
+
+ echo "The database is dropped."
+ ;;
+ annihilate)
+ run "rm -rf /var/lib/pgsql/data"
+ echo "Ouch!"
+ ;;
+ *)
+ echo "Control and configure the cumin database"
+ echo "Usage: cumin-database COMMAND"
+ echo "Commands:"
+ echo " start Start the database server"
+ echo " stop Stop the database server"
+ echo " configure Configure the main database cluster"
+ echo " check Check the cumin database"
+ echo " create Create the user, database, and schema"
+ echo " drop Discard the database user, database, and all
data"
+ exit 1
+ ;;
+esac
Property changes on: mgmt/newdata/cumin/bin/cumin-database
___________________________________________________________________
Name: svn:executable
+ *
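
A note on the configure step above: it shells out to Python and calls
modify_pghba_conf() from the new cumin/python/cumin/database.py, which is
added in this commit but whose diff is not shown in this section. The
sketch below is only a guess at what such a helper could look like,
inferred from the call site and from the trust rule the old README had
you add to pg_hba.conf by hand; the implementation details are
assumptions, not the committed code.

    def modify_pghba_conf(path, dbname, user):
        # Insert a local trust rule for the cumin database ahead of the
        # existing host rules, mirroring the line the old README added
        # manually: "host cumin cumin 127.0.0.1/32 trust"
        rule = "host %s %s 127.0.0.1/32 trust\n" % (dbname, user)

        lines = open(path).readlines()

        for i, line in enumerate(lines):
            if line.startswith("host"):
                lines.insert(i, rule)
                break
        else:
            lines.append(rule)

        open(path, "w").writelines(lines)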
Added: mgmt/newdata/cumin/bin/cumin-web
===================================================================
--- mgmt/newdata/cumin/bin/cumin-web (rev 0)
+++ mgmt/newdata/cumin/bin/cumin-web 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,47 @@
+#!/usr/bin/python
+
+from parsley.threadingex import print_threads
+
+from cumin import *
+from cumin.config import *
+from cumin.util import *
+
+def main():
+ config = CuminConfig()
+ values = config.parse()
+
+ parser = CuminOptionParser(values.web)
+
+ parser.add_option("--host", default=values.web.host)
+ parser.add_option("--port", default=values.web.port)
+
+ opts, args = parser.parse_args()
+
+ setup_logging(opts)
+
+ cumin = Cumin(config.home, opts.broker, opts.database,
+ opts.host, opts.port)
+
+ cumin.user = values.web.user
+
+ cumin.check()
+ cumin.init()
+
+ if opts.init_only:
+ return
+
+ cumin.start()
+
+ try:
+ while True:
+ # print_threads()
+
+ sleep(5)
+ finally:
+ cumin.stop()
+
+if __name__ == "__main__":
+ try:
+ main()
+ except KeyboardInterrupt:
+ pass
Property changes on: mgmt/newdata/cumin/bin/cumin-web
___________________________________________________________________
Name: svn:executable
+ *
Modified: mgmt/newdata/cumin/instance/etc/cumin.conf
===================================================================
--- mgmt/newdata/cumin/instance/etc/cumin.conf 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/instance/etc/cumin.conf 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,4 +1,11 @@
-[main]
-data: postgresql://cumin@localhost/cumin
+[common]
+# database: dbname=cumin user=cumin host=localhost
+# broker: localhost:5672
debug: True
+
+[web]
+# host: localhost
+# host: 0.0.0.0
user: guest
+
+[data]
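
The layout above consolidates configuration for all the daemons in one
file: [common] holds shared settings (database, broker, debug), and each
daemon overlays its own section, [web] for cumin-web and [data] for
cumin-data. As a rough illustration of that layering, assuming a plain
ConfigParser-style read (the real CuminConfig classes in
cumin/python/cumin/config.py are modified in this commit but not shown
here):

    from ConfigParser import SafeConfigParser

    def load_daemon_config(path, section):
        # Shared [common] values first, then per-daemon overrides
        parser = SafeConfigParser()
        parser.read(path)

        values = dict(parser.items("common"))
        values.update(parser.items(section))
        return values

    web_config = load_daemon_config("cumin.conf", "web")    # cumin-web
    data_config = load_daemon_config("cumin.conf", "data")  # cumin-data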
Added: mgmt/newdata/cumin/instance/model
===================================================================
--- mgmt/newdata/cumin/instance/model (rev 0)
+++ mgmt/newdata/cumin/instance/model 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1 @@
+link ../model
\ No newline at end of file
Property changes on: mgmt/newdata/cumin/instance/model
___________________________________________________________________
Name: svn:special
+ *
Deleted: mgmt/newdata/cumin/instance/sql
===================================================================
--- mgmt/newdata/cumin/instance/sql 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/instance/sql 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1 +0,0 @@
-link ../../mint/sql
\ No newline at end of file
Deleted: mgmt/newdata/cumin/instance/xml
===================================================================
--- mgmt/newdata/cumin/instance/xml 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/instance/xml 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1 +0,0 @@
-link ../../mint/xml
\ No newline at end of file
Added: mgmt/newdata/cumin/model/Makefile
===================================================================
--- mgmt/newdata/cumin/model/Makefile (rev 0)
+++ mgmt/newdata/cumin/model/Makefile 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,26 @@
+.PHONY: update
+
+FILES := qpid.xml qpid-store.xml qpid-acl.xml qpid-cluster.xml condor.xml sesame.xml
+
+default:
+ @echo "'make update' fetches new versions"
+
+update: ${FILES}
+
+qpid.xml:
+ svn export http://svn.apache.org/repos/asf/qpid/trunk/qpid/specs/management-schema.xml qpid.xml
+
+qpid-store.xml:
+ svn export http://anonsvn.jboss.org/repos/rhmessaging/store/trunk/cpp/lib/qmf-schema... qpid-store.xml
+
+qpid-acl.xml:
+ svn export http://svn.apache.org/repos/asf/qpid/trunk/qpid/cpp/src/qpid/acl/manageme... qpid-acl.xml
+
+qpid-cluster.xml:
+ svn export http://svn.apache.org/repos/asf/qpid/trunk/qpid/cpp/src/qpid/cluster/mana... qpid-cluster.xml
+
+condor.xml:
+ wget "http://git.fedorahosted.org/git/grid.git?p=grid.git;a=blob_plain;f=src/management/condor-management-schema.xml;hb=V7_4-QMF-branch" -O condor.xml
+
+sesame.xml:
+ svn export http://anonsvn.jboss.org/repos/rhmessaging/mgmt/trunk/sesame/cpp/src/qmfg... sesame.xml
Added: mgmt/newdata/cumin/model/condor.xml
===================================================================
--- mgmt/newdata/cumin/model/condor.xml (rev 0)
+++ mgmt/newdata/cumin/model/condor.xml 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,873 @@
+<schema package="mrg.grid">
+
+<!--
+/*
+ * Copyright 2008 Red Hat, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+
+<group name="daemon-stats">
+ <property name="CondorPlatform"
+ type="sstr"
+ desc="The Condor platform string for the daemon's platform"/>
+ <property name="CondorVersion"
+ type="sstr"
+ desc="The Condor version string for the daemon's version"/>
+ <property name="DaemonStartTime"
+ type="absTime" unit="nanosecond"
+ desc="Number of nanoseconds since epoch when the daemon
+ was started"/>
+
+ <statistic name="MonitorSelfAge" type="uint32"/>
+ <statistic name="MonitorSelfCPUUsage" type="double"/>
+ <statistic name="MonitorSelfImageSize" type="double"/>
+ <statistic name="MonitorSelfRegisteredSocketCount"
type="uint32"/>
+ <statistic name="MonitorSelfResidentSetSize" type="uint32"/>
+ <statistic name="MonitorSelfTime" type="absTime"/>
+</group>
+
+<!--
+CpuBusy = ((LoadAvg - CondorLoadAvg) >= 0.500000)
+CpuBusyTime = 0
+CpuIsBusy = FALSE
+
+HasCheckpointing = TRUE
+HasFileTransfer = TRUE
+HasIOProxy = TRUE
+HasJava = TRUE
+HasJICLocalConfig = TRUE
+HasJICLocalStdin = TRUE
+HasJobDeferral = TRUE
+HasMPI = TRUE
+HasPerFileEncryption = TRUE
+HasReconnect = TRUE
+HasRemoteSyscalls = TRUE
+HasTDP = TRUE
+HasVM = FALSE
+
+JavaMFlops = 8.156164
+JavaVendor = "Free Software Foundation, Inc."
+JavaVersion = "1.4.2"
+
+Subnet = "10.16.43"
+
+Set by Collector:
+ UpdateSequenceNumber = 627
+ UpdatesHistory = "0x00000000000000000000000000000000"
+ UpdatesLost = 0
+ UpdatesSequenced = 58
+ UpdatesTotal = 59
+-->
+
+ <class name="Slot">
+
+ <group name="daemon-stats"/>
+
+ <property name="Pool" type="sstr" index="y"/>
+ <property name="System" type="sstr" index="y"/>
+
+ <property name="AccountingGroup"
+ type="sstr"
+ optional="y"
+ desc="AccountingGroup of the running job, fully
+ qualified with a UidDomain, UidDomain taken from
+ RemoteUser, only present when a job is
+ executing"/>
+ <property name="Activity"
+ type="sstr"
+ desc="One of: Idle, No job activity; Busy, Job is
+ running; Suspended, Job is suspended; Vacating,
+ Job is being removed; Killing, Job is being
+ killed; Benchmarking, Benchmarks being run"/>
+ <property name="Arch"
+ type="sstr"
+ desc="Slot's architecture, e.g.: ALPHA, Diginal Alpha;
+ HPPA1, HP PA-RISC 1.x (7000 series); HPPA2, HP
+ PA-RISC 2.x (8000 series); IA64, Intel Itanium;
+ INTEL, Intel x86 (Pentium, Xeon, etc); SGI, SGI
+ MIPS; SUN4u, Sun UltraSparc; SUN4x, Sun Sparc
+ (not UltraSparc); PPC, Power Macintosh; PPC64,
+ 64-bit Power Macintosh; X86_64, AMD/Intel 64-bit
+ x86"/>
+ <property name="CheckpointPlatform"
+ type="sstr"
+ desc="Opaque string encoding OS, hardware and kernel
+ attributes"/>
+ <property name="ClientMachine"
+ type="sstr"
+ optional="y"
+ desc="The hostname of the machine that has claimed the
+ slot, only present when slot is claimed"/>
+ <statistic name="ClockDay"
+ type="uint32"
+ desc="Day of the week: 0 = Sunday, 1 = Monday, ..., 6 =
+ Saturaday"/>
+ <statistic name="ClockMin"
+ type="uint32" unit="minute"
+ desc="Number of elapsed minutes since midnight"/>
+ <property name="ConcurrencyLimits"
+ type="sstr"
+ optional="y"
+ desc="Set of concurrency limits associated with the
+ current job"/>
+ <statistic name="CondorLoadAvg"
+ type="double"
+ desc="Portion of LoadAvg generated by Condor (job or
+ benchmark)"/>
+ <statistic name="ConsoleIdle"
+ type="uint32" unit="second"
+ desc="Seconds since activity on console keyboard or
+ mouse"/>
+ <property name="Cpus"
+ type="uint32"
+ desc="Number of CPUs in slot"/>
+ <property name="CurrentRank"
+ type="double"
+ optional="y"
+ desc="Slots' affinity for running the job it is
+ currently hosting, calculated as Rank expression
+ evaluated in context of the running job's ad"/>
+ <property name="Disk"
+ type="uint32" unit="KiB"
+ desc="Amount of disk space in KiB available in the slot"/>
+ <property name="EnteredCurrentActivity"
+ type="absTime" unit="nanosecond"
+ desc="Time at which current Activity was entered,
+ number of nanoseconds since Unix epoch"/>
+ <property name="EnteredCurrentState"
+ type="absTime" unit="nanosecond"
+ desc="Time at which current State was entered,
+ number of seconds since Unix epoch"/>
+ <property name="FileSystemDomain"
+ type="sstr"
+ desc="Configured namespace shared by slots with
+ uniformly mounted shared storage"/>
+ <property name="GlobalJobId"
+ type="sstr"
+ optional="y"
+ desc="The running job's GlobalJobId, only present when
+ a job is executing"/>
+ <statistic name="ImageSize"
+ type="uint32" unit="KiB"
+ desc="Estimate of the memory image size, in KiB, of the
+ running job, only present when a job is
+ executing, pulled by STARTD_JOB_EXPRS"/>
+ <property name="IsValidCheckpointPlatform"
+ type="lstr"
+ desc="A configurable expression representing if a
+ checkpointed job can run on the slot, part of the
+ slot's Requirements along with the Start
+ expression"/>
+ <property name="JobId"
+ type="sstr"
+ optional="y"
+ desc="The running job's identifier,
+ i.e. ClusterId.ProcId, only present when a job is
+ executing"/>
+ <property name="JobStart"
+ type="absTime" unit="nanosecond"
+ optional="y"
+ desc="The number of nanosecond since epoch when the job
+ began executing, only present when a job is
+ executing"/>
+ <statistic name="KeyboardIdle"
+ type="uint32" unit="second"
+ desc="Number of seconds since any activity on any
+ keyboard or mouse associated with the machine,
+ including pseudo-terminals"/>
+ <property name="KFlops"
+ type="uint32"
+ desc="Relative floating point performance on a Linpack
+ benchmark"/>
+ <property name="LastBenchmark"
+ type="absTime" unit="nanosecond"
+ desc="Number of nanoseconds since epoch when the last
+ benchmark was run"/>
+ <property name="LastFetchWorkCompleted"
+ type="absTime" unit="nanosecond"
+ desc="Number of nanoseconds since epoch when the
+ FetchWork Hook returned"
+ optional="y"/>
+ <property name="LastFetchWorkSpawned"
+ type="absTime" unit="nanosecond"
+ desc="Number of nanoseconds since epoch when the
+ FetchWork Hook was invoked"
+ optional="y"/>
+ <property name="LastPeriodicCheckpoint"
+ type="absTime" unit="nanosecond"
+ desc="The number of nanoseconds since epoch when the
+ job last performed a periodic checkpoint, only
+ present when a job is executing"
+ optional="y"/>
+<!--
+ <statistic name="LastHeardFrom"
+ type="absTime" unit="nanosecond"
+ desc="Time when the Collector received an update from
+ the slot, nanoseconds since epoch, inserted by
+ Collector"/>
+-->
+ <statistic name="LoadAvg"
+ type="double"
+ desc="Load average of CPUs hosting the slot"/>
+ <property name="Machine"
+ type="sstr"
+ desc="The fully qualified hostname of slot's host
+ machine"/>
+ <property name="MaxJobRetirementTime"
+ type="lstr" unit="second"
+ desc="Expression evaluated in context of job ad
+ producing the number of seconds a job is allowed
+ to finish before being killed, relevant when job
+ is being kicked out of the slot"/>
+ <property name="Memory"
+ type="uint32" unit="MiB"
+ desc="Amount of RAM available in the slot, in MiB"/>
+ <property name="Mips"
+ type="uint32"
+ desc="Relative integer performance on a Dhrystone
+ benchmark"/>
+ <property name="MyAddress"
+ type="sstr"
+ desc="IP:Port of StartD in charge of the slot"/>
+ <statistic name="MyCurrentTime"
+ type="absTime" unit="nanosecond"
+ desc="The number of nanoseconds since epoch that the
+ slot produced an updated ad"/>
+<!--
+ <property name="MyType"
+ type="sstr"
+ desc="Always 'Machine'"\>
+-->
+ <property name="Name"
+ type="sstr"
+ index="y"
+ desc="Name of the slot, either the same as Machine,
+ slot#@Machine, or a configured value"/>
+ <property name="NextFetchWorkDelay"
+ type="int32" unit="second"
+ desc="Number of seconds until the next FetchWork
+ Hook will be invoked, -1 means never"/>
+ <property name="OpSys"
+ type="sstr"
+ desc="Slot's operating system, e.g.: HPUX10, HPUX
+ 10.20; HPUX11, HPUX B.11.00; LINUX, Linux
+ 2.[0,2,4,6].x kernels; OSF1, Digital Unix 4.x;
+ OSX, Darwin; OSX10_2, Darwin 6.4; SOLARIS25,
+ Solaris 2.4 or 5.5; SOLARIS251, Solaris 2.5.1 or
+ 5.5.1; SOLARIS26, Solaris 2.6 or 5.6; SOLARIS27,
+ Solaris 2.7 or 5.7; SOLARIS28, Solaris 2.8 or
+ 5.8; SOLARIS29, Solaris 2.9 or 5.9; WINNT50,
+ Windows 2000; WINNT51, Windows XP; WINNT52,
+ Windows Server 2003; WINNT60, Windows Vista"/>
+ <property name="PreemptingConcurrencyLimits"
+ type="sstr"
+ optional="y"
+ desc="Set of concurrency limits associated with the
+ preempting job"/>
+ <property name="PreemptingOwner"
+ type="sstr"
+ optional="y"
+ desc="The name of the user originally preempting the
+ current job, i.e. the incoming user, only present
+ when slot is claimed"/>
+ <property name="PreemptingUser"
+ type="sstr"
+ optional="y"
+ desc="The name of the user preempting the current job,
+ different from PreemptingOwner only if the claim
+ was given to another user who is using it to
+ preempt, only present when slot is claimed"/>
+ <property name="PreemptingRank"
+ type="double"
+ optional="y"
+ desc="Slots' affinity for running the incoming,
+ preempting, job, calculated as Rank expression
+ evaluated in context of the incoming job's ad,
+ only present when slot is claimed"/>
+ <property name="RemoteOwner"
+ type="sstr"
+ optional="y"
+ desc="The name of the user who originally claimed the
+ slot, only present when slot is claimed"/>
+ <property name="RemoteUser"
+ type="sstr"
+ optional="y"
+ desc="The name of the user who is currently using the
+ slot, different from RemoteOwner only if the
+ claim was given to another user who is using the
+ slot, only present when slot is claimed"/>
+ <property name="Requirements"
+ type="lstr"
+ desc="Expression evaluated in the context of a job ad
+ to determine if the slot will run a job"/>
+ <property name="Rank"
+ type="lstr"
+ desc="Configured expression representing how the slot
+ prefers jobs"/>
+ <property name="SlotID"
+ type="uint32"
+ desc="The # in the slot's Name, i.e.
Name='slot#@Machine'"/>
+ <property name="Start"
+ type="lstr"
+ desc="Expression evaluated to determine if a slot is
+ willing to start running a job"/>
+ <property name="StarterAbilityList"
+ type="lstr"
+ desc="StringList, comma separated, set of abilities the
+ slot has, i.e. HasFileTransfer,HasJava,HasVM,
+ query with stringListMember('Element',
+ StarterAbilityList)"/>
+ <property name="State"
+ type="sstr"
+ desc="One of: Owner, unavailable to Condor; Unclaimed,
+ available to Condor, but no job match yet;
+ Matched, job found, but not yet claimed; Claimed,
+ claimed and job likely running (see Activity);
+ Preempting, running job is being kicked off the
+ slot"/>
+<!--
+ <statistic name="TargetType"
+ type="sstr"
+ desc="Always 'Job'"/>
+-->
+ <property name="TimeToLive"
+ type="uint32" unit="second"
+ desc="Number of second until StartD managing the slot
+ has until it will exit"/>
+ <property name="TotalClaimRunTime"
+ type="uint32" unit="second"
+ optional="y"
+ desc="Number of seconds the current claim has spent
+ running jobs, only present when slot is
+ claimed"/>
+ <property name="TotalClaimSuspendTime"
+ type="uint32" unit="second"
+ optional="y"
+ desc="Number of seconds the current claim has spent
+ with suspended jobs, only present when slot is
+ claimed"/>
+ <statistic name="TotalCondorLoadAvg"
+ type="double"
+ desc="Portion of TotalLoadAvg generated by Condor (jobs
+ or benchmarks)"/>
+ <property name="TotalCpus"
+ type="uint32"
+ desc="Total number of CPUs on slot's host machine, or
+ NUM_CPUS configuration option"/>
+ <property name="TotalDisk"
+ type="uint32" unit="KiB"
+ desc="Amount of disk space available on the slot's host
+ machine"/>
+ <property name="TotalJobRunTime"
+ type="uint32" unit="second"
+ optional="y"
+ desc="Number of seconds the current job has spent
+ running, i.e. Claimed/Busy, only present when
+ slot is claimed"/>
+ <property name="TotalJobSuspendTime"
+ type="uint32" unit="second"
+ optional="y"
+ desc="Number of seconds the current job has spent
+ suspended, i.e. Claimed/Suspended, only present
+ when slot is claimed"/>
+ <statistic name="TotalLoadAvg"
+ type="double"
+ desc="Total load average of the slot's host machine"/>
+ <property name="TotalMemory"
+ type="uint32" unit="MiB"
+ desc="Total RAM available on slot's machine, in MiB"/>
+ <property name="TotalSlots"
+ type="uint32"
+ desc="Total number of slots sharing the Machine"/>
+ <statistic name="TotalTimeBackfillBusy"
+ type="uint32" unit="second"
+ desc="Accumulated number of seconds the slot has been
+ in State=Backfill and Activity=Busy since the
+ Startd started"/>
+ <statistic name="TotalTimeBackfillIdle"
+ type="uint32" unit="second"
+ desc="Accumulated number of seconds the slot has been
+ in State=Backfill and Activity=Idle since the
+ Startd started"/>
+ <statistic name="TotalTimeBackfillKilling"
+ type="uint32" unit="second"
+ desc="Accumulated number of seconds the slot has been
+ in State=Backfill and Activity=Killing since the
+ Startd started"/>
+ <statistic name="TotalTimeClaimedBusy"
+ type="uint32" unit="second"
+ desc="Accumulated number of seconds the slot has been
+ in State=Claimed and Activity=Busy since the
+ Startd started"/>
+ <statistic name="TotalTimeClaimedIdle"
+ type="uint32" unit="second"
+ desc="Accumulated number of seconds the slot has been
+ in State=Claimed and Activity=Idle since the
+ Startd started"/>
+ <statistic name="TotalTimeClaimedRetiring"
+ type="uint32" unit="second"
+ desc="Accumulated number of seconds the slot has been
+ in State=Claimed and Activity=Retiring since the
+ Startd started"/>
+ <statistic name="TotalTimeClaimedSuspended"
+ type="uint32" unit="second"
+ desc="Accumulated number of seconds the slot has been
+ in State=Claimed and Activity=Suspended since the
+ Startd started"/>
+ <statistic name="TotalTimeMatchedIdle"
+ type="uint32" unit="second"
+ desc="Accumulated number of seconds the slot has been
+ in State=Matched and Activity=Idle since the
+ Startd started"/>
+ <statistic name="TotalTimeOwnerIdle"
+ type="uint32" unit="second"
+ desc="Accumulated number of seconds the slot has been
+ in State=Owner and Activity=Idle since the
+ Startd started"/>
+ <statistic name="TotalTimePreemptingKilling"
+ type="uint32" unit="second"
+ desc="Accumulated number of seconds the slot has been
+ in State=Preempting and Activity=Killing since the
+ Startd started"/>
+ <statistic name="TotalTimePreemptingVacating"
+ type="uint32" unit="second"
+ desc="Accumulated number of seconds the slot has been
+ in State=Preempting and Activity=Vacating since the
+ Startd started"/>
+ <statistic name="TotalTimeUnclaimedBenchmarking"
+ type="uint32" unit="second"
+ desc="Accumulated number of seconds the slot has been
+ in State=Unclaimed and Activity=Benchmarking since
+ the Startd started"/>
+ <statistic name="TotalTimeUnclaimedIdle"
+ type="uint32" unit="second"
+ desc="Accumulated number of seconds the slot has been
+ in State=Unclaimed and Activity=Idle since the
+ Startd started"/>
+ <property name="TotalVirtualMemory"
+ type="uint32" unit="KiB"
+ desc="Amount of swap space available on slot"/>
+ <property name="UidDomain"
+ type="sstr"
+ desc="Configured namespace shared by slots with
+ uniform uid/gid entries, i.e. same logins and
+ groups"/>
+ <property name="VirtualMemory"
+ type="uint32" unit="KiB"
+ desc="Amount of currently available virtual memory
+ (swap space) in KiB"/>
+ <property name="WindowsBuildNumber"
+ type="uint32"
+ desc="Integer extracted from the platform type,
+ representing a build number for a Windows
+ operating system, only present on Windows
+ slots"/>
+ <property name="WindowsMajorVersion"
+ type="uint32"
+ desc="Integer extracted from the platform type,
+ representing a major version number for a Windows
+ operating system, only present on Windows
+ slots, e.g. 5 for OpSys=WINNT50"/>
+ <property name="WindowsMinorVersion"
+ type="uint32"
+ desc="Integer extected from the platform type,
+ representing a minor version numer for a Windows
+ operating system, only present on Windows
+ slots, e.g. 2 for OpSys=WINNT52"/>
+
+<!--
+ <property name="AdditionalAttributes" type="map"/>
+-->
+ </class>
+
+<!--
+Exec Host, Order(Rank?), StartTime, TotalTime (Sys, User), Project, AccountingGroup
+-->
+<!--
+ <class name="Job">
+ <property name="schedulerRef" type="objId"
parentRef="y" index="y"
references="mrg.grid:Scheduler"/>
+ <property name="submitterRef" type="objId"
references="mrg.grid.Submitter"/>
+
+ <property name="AccountingGroup" type="sstr"
optional="y" desc=""/>
+ <property name="Args" type="lstr" optional="y"
desc=""/>
+ <property name="ClusterId"
+ type="uint32" index="y"
+ desc="The id of the cluster the job belongs
+ to. ClusterIds are unique within a SchedD."/>
+ <property name="Cmd" type="lstr" desc=""/>
+ <property name="ConcurrencyLimits" type="lstr"
optional="y" desc=""/>
+ <property name="CustomGroup" type="sstr"
optional="y" desc=""/>
+ <property name="CustomId" type="sstr" optional="y"
desc=""/>
+ <property name="CustomPriority" type="uint32"
optional="y" desc=""/>
+ <property name="GlobalJobId" type="sstr"
desc=""/>
+ <property name="In"
+ type="lstr"
+ desc="The file where the job's standard input is read
+ from."/>
+ <property name="Iwd" type="lstr" desc=""/>
+ <property name="JobStatus"
+ type="uint32"
+ desc="One of: 0, unexpanded; 1, idle; 2, running; 3,
+ removed; 4, completed; 5, held; or, 6, submission
+ error"/>
+ <property name="Note"
+ type="lstr" optional="y"
+ desc="An arbitrary note attached to the job."/>
+ <property name="Out"
+ type="lstr"
+ desc="The file where the job's standard output is
+ written."/>
+ <property name="Owner"
+ type="sstr"
+ desc="The submitter of the job."/>
+ <property name="User"
+ type="sstr"
+ desc="The Owner '@' the configured UidDomain namespace"/>
+ <property name="ProcId"
+ type="uint32" index="y"
+ desc="The id of the job within its cluster. ProcIds re
+ unique within a cluster."/>
+ <property name="QDate"
+ type="absTime" unit="nanoseconds"
+ desc="The number of nanoseconds since epoch when the
+ job was submitted."/>
+
+
+// <property name="Requirements" type="lstr"
desc=""/>
+// <property name="Scheduler" type="sstr"
desc=""/>
+
+
+ <property name="JobUniverse"
+ type="uint32"
+ desc=""/>
+
+ <property name="Title" type="sstr" optional="y"
desc=""/>
+ <property name="UserLog" type="lstr" optional="y"
desc=""/>
+
+ <property name="HoldReason" type="lstr" optional="y"
desc=""/>
+
+ <property name="DAGNodeName"
+ type="sstr" optional="y" desc=""/>
+ <property name="DAGParentNodeNames"
+ type="lstr" optional="y"
+ desc="Comma separated list of the job's parent's node
+ names"/>
+ <property name="DAGManJobId"
+ type="uint32" optional="y"
+ desc="The ClusterId of the DAGMan job who spawned the
+ job"/>
+
+ <property name="Ad" type="map" optional="y"
desc=""/>
+
+ <method name="GetAd">
+ <arg name="JobAd" dir="O" type="map"
+ desc="(name,value,type) tuples; Values are INTEGER, FLOAT,
+ STRING and EXPR. The EXPR value is not first class,
+ it is an unquoted, with double quotes, string"/>
+ </method>
+
+ <method name="SetAttribute">
+ <arg name="Name" dir="I" type="sstr"/>
+ <arg name="Value" dir="I" type="lstr"/>
+ </method>
+
+ <method name="Hold">
+ <arg name="Reason" dir="I" type="sstr"/>
+ </method>
+
+ <method name="Release">
+ <arg name="Reason" dir="I" type="sstr"/>
+ </method>
+
+ <method name="Remove">
+ <arg name="Reason" dir="I" type="sstr"/>
+ </method>
+
+ <method name="Fetch">
+ <arg name="File" dir="I" type="sstr"/>
+ <arg name="Start" dir="I" type="int32"/>
+ <arg name="End" dir="I" type="int32"/>
+ <arg name="Data" dir="O" type="lstr"/>
+ </method>
+ </class>
+-->
+
+ <class name="Scheduler">
+ <group name="daemon-stats"/>
+
+ <property name="Pool" type="sstr" index="y"/>
+ <property name="System" type="sstr" index="y"/>
+
+ <property name="JobQueueBirthdate" type="absTime"/>
+ <property name="MaxJobsRunning" type="uint32"
desc=""/>
+ <property name="Machine" type="sstr" desc=""/>
+ <property name="MyAddress" type="sstr" desc=""/>
+ <statistic name="NumUsers" type="uint32"/>
+ <property name="Name" type="sstr" index="y"
desc=""/>
+ <statistic name="TotalHeldJobs" type="uint32"/>
+ <statistic name="TotalIdleJobs" type="uint32"/>
+ <statistic name="TotalJobAds" type="uint32"/>
+ <statistic name="TotalRemovedJobs" type="uint32"/>
+ <statistic name="TotalRunningJobs" type="uint32"/>
+
+ <method name="Submit">
+ <arg name="Ad" dir="I" type="map"/>
+ <arg name="Id" dir="O" type="sstr"/>
+ </method>
+
+ <method name="GetAd">
+ <arg name="Id" dir="I" type="sstr"
+ desc="Job's Id, the string ClusterId.ProcId"/>
+ <arg name="JobAd" dir="O" type="map"
+ desc="(name,value,type) tuples; Values are INTEGER, FLOAT,
+ STRING and EXPR. The EXPR value is not first class,
+ it is an unquoted, with double quotes, string"/>
+ </method>
+
+ <method name="SetAttribute">
+ <arg name="Id" dir="I" type="sstr"
+ desc="Job's Id, the string ClusterId.ProcId"/>
+ <arg name="Name" dir="I" type="sstr"/>
+ <arg name="Value" dir="I" type="lstr"/>
+ </method>
+
+ <method name="Hold">
+ <arg name="Id" dir="I" type="sstr"
+ desc="Job's Id, the string ClusterId.ProcId"/>
+ <arg name="Reason" dir="I" type="sstr"/>
+ </method>
+
+ <method name="Release">
+ <arg name="Id" dir="I" type="sstr"
+ desc="Job's Id, the string ClusterId.ProcId"/>
+ <arg name="Reason" dir="I" type="sstr"/>
+ </method>
+
+ <method name="Remove">
+ <arg name="Id" dir="I" type="sstr"
+ desc="Job's Id, the string ClusterId.ProcId"/>
+ <arg name="Reason" dir="I" type="sstr"/>
+ </method>
+
+ <method name="Fetch">
+ <arg name="Id" dir="I" type="sstr"
+ desc="Job's Id, the string ClusterId.ProcId"/>
+ <arg name="File" dir="I" type="sstr"/>
+ <arg name="Start" dir="I" type="int32"/>
+ <arg name="End" dir="I" type="int32"/>
+ <arg name="Data" dir="O" type="lstr"/>
+ </method>
+
+ <method name="GetStates">
+ <arg name="Submission" dir="I" type="sstr"/>
+ <arg name="State" dir="I" type="uint32"/>
+ <arg name="Count" dir="O" type="uint32"/>
+ </method>
+
+ <method name="GetJobs">
+ <arg name="Submission" dir="I" type="sstr"/>
+ <arg name="Jobs" dir="O" type="map"/>
+ </method>
+
+ <method name="echo">
+ <arg name="sequence" dir="IO" type="uint32"/>
+ <arg name="body" dir="IO" type="lstr"/>
+ </method>
+ </class>
+
+ <class name="Submitter">
+ <property name="schedulerRef" type="objId"
parentRef="y" index="y"
references="mrg.grid:Scheduler"/>
+
+ <statistic name="HeldJobs" type="uint32"/>
+ <statistic name="IdleJobs" type="uint32"/>
+ <property name="JobQueueBirthdate" type="absTime"/>
+ <property name="Machine" type="sstr"/>
+ <property name="Name" type="sstr" index="y"/>
+ <statistic name="RunningJobs" type="uint32"/>
+ <property name="ScheddName" type="sstr"/>
+ </class>
+
+ <class name="Negotiator">
+ <property name="Pool" type="sstr" index="y"/>
+ <property name="System" type="sstr" index="y"/>
+
+ <property name="Name" type="sstr" index="y"/>
+ <property name="Machine" type="sstr"/>
+ <property name="MyAddress" type="sstr" desc=""/>
+
+ <!-- NOTE: MonitorSelf* statistics are currently missing in 7.0.0 -->
+ <group name="daemon-stats"/>
+
+ <method name="GetLimits">
+ <arg name="Limits" dir="O" type="map"/>
+ </method>
+
+ <method name="SetLimit">
+ <arg name="Name" dir="I" type="sstr"/>
+ <arg name="Max" dir="I" type="double"/>
+ </method>
+
+ <method name="GetStats">
+ <arg name="Name" dir="I" type="sstr"
desc="User or group name"/>
+ <arg name="Ad" dir="O" type="map"/>
+<!--
+ <arg name="Effective" dir="O" type="double"/>
+ <arg name="Real" dir="O" type="double"/>
+ <arg name="Factor" dir="O" type="double"/>
+ <arg name="Resources" dir="O" type="unit32"/>
+ <arg name="Usage" dir="O" type="double"
units="hours"/>
+-->
+ </method>
+
+ <method name="SetPriority">
+ <arg name="Name" dir="I" type="sstr"
desc="User or group name"/>
+ <arg name="Priority" dir="I" type="double"/>
+ </method>
+
+ <method name="SetPriorityFactor">
+ <arg name="Name" dir="I" type="sstr"
desc="User or group name"/>
+ <arg name="PriorityFactor" dir="I"
type="double"/>
+ </method>
+
+ <method name="SetUsage">
+ <arg name="Name" dir="I" type="sstr"
desc="User or group name"/>
+ <arg name="Usage" dir="I" type="double"/>
+ </method>
+
+ <!--
+ <method name="GetStaticQuota">
+ <arg name="Name" dir="I" type="sstr"
desc="Group name"/>
+ <arg name="Quota" dir="O" type="uint32"/>
+ </method>
+
+ <method name="GetDynamicQuota">
+ <arg name="Name" dir="I" type="sstr"
desc="Group name"/>
+ <arg name="Quota" dir="O" type="double"/>
+ </method>
+
+ <method name="SetStaticQuota">
+ <arg name="Name" dir="I" type="sstr"
desc="Group name"/>
+ <arg name="Quota" dir="I" type="uint32"/>
+ </method>
+
+ <method name="SetDynamicQuota">
+ <arg name="Name" dir="I" type="sstr"
desc="Group name"/>
+ <arg name="Quota" dir="I" type="double"/>
+ </method>
+-->
+
+ <method name="GetRawConfig">
+ <arg name="Name" dir="I" type="sstr"
desc="Config param name"/>
+ <arg name="Value" dir="O" type="lstr"/>
+ </method>
+
+ <method name="SetRawConfig">
+ <arg name="Name" dir="I" type="sstr"
desc="Config param name"/>
+ <arg name="Value" dir="I" type="lstr"/>
+ </method>
+
+ <method name="Reconfig"/>
+ </class>
+
+ <class name="Collector">
+ <property name="Pool" type="sstr" index="y"/>
+ <property name="System" type="sstr" index="y"/>
+
+ <property name="CondorPlatform" type="sstr"/>
+ <property name="CondorVersion" type="sstr"/>
+ <property name="Name" type="sstr" index="y"/>
+ <property name="MyAddress" type="sstr"/>
+
+ <statistic name="RunningJobs" type="uint32"/>
+ <statistic name="IdleJobs" type="uint32"/>
+ <statistic name="HostsTotal" type="uint32"/>
+ <statistic name="HostsClaimed" type="uint32"/>
+ <statistic name="HostsUnclaimed" type="uint32"/>
+ <statistic name="HostsOwner" type="uint32"/>
+ </class>
+
+ <class name="Master">
+
+ <group name="daemon-stats"/>
+
+ <property name="Pool" type="sstr" index="y"/>
+ <property name="System" type="sstr" index="y"/>
+
+ <property name="Name" type="sstr" index="y"/>
+ <property name="Machine" type="sstr"/>
+ <property name="MyAddress" type="sstr"/>
+ <property name="RealUid" type="int32"/>
+
+ <method name="Start">
+ <arg name="Subsystem"
+ dir="I" type="sstr"
+ desc="The component/subsystem to start: one of STARTD,
+ SCHEDD, COLLECTOR, NEGOTIATOR, KBDD or QUILL"/>
+ </method>
+
+ <method name="Stop">
+ <arg name="Subsystem"
+ dir="I" type="sstr"
+ desc="The component/subsystem to stop: one of STARTD,
+ SCHEDD, COLLECTOR, NEGOTIATOR, KBDD or QUILL"/>
+ </method>
+ </class>
+
+ <class name="Grid">
+ <property name="Pool" type="sstr" index="y"/>
+
+ <property name="Name" type="sstr"/>
+ <property name="ScheddName" type="sstr"/>
+ <property name="Owner" type="sstr"/>
+
+ <statistic name="NumJobs" type="uint32"/>
+ <property name="JobLimit"
+ type="uint32"
+ desc="Maximum number of jobs that can be in the process
+ of being submitted at any time."/>
+ <property name="SubmitLimit"
+ type="uint32"
+ desc="Limit on the number of jobs that will be submitted
+ to the grid resource at once."/>
+
+ <statistic name="SubmitsInProgress" type="uint32"/>
+ <statistic name="SubmitsQueued" type="uint32"/>
+ <statistic name="SubmitsAllowed" type="uint32"/>
+ <statistic name="SubmitsWanted" type="uint32"/>
+
+ <property name="GridResourceUnavailableTime"
+ type="absTime" unit="nanosecond"
+ optional="y"
+ desc="If present, the Grid is down for the specified
+ amount of time."/>
+
+ <statistic name="RunningJobs" type="uint32"/>
+ <statistic name="IdleJobs" type="uint32"/>
+ </class>
+
+ <class name="Submission">
+ <property name="schedulerRef" type="objId"
parentRef="y" index="y"
references="mrg.grid:Scheduler"/>
+
+ <property name="Name" type="sstr" index="y"/>
+ <property name="Owner" type="sstr" index="y"/>
+
+ <statistic name="Idle" type="count32"/>
+ <statistic name="Running" type="count32"/>
+ <statistic name="Removed" type="count32"/>
+ <statistic name="Completed" type="count32"/>
+ <statistic name="Held" type="count32"/>
+ </class>
+
+</schema>
Added: mgmt/newdata/cumin/model/cumin.xml
===================================================================
--- mgmt/newdata/cumin/model/cumin.xml (rev 0)
+++ mgmt/newdata/cumin/model/cumin.xml 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,25 @@
+<schema package="com.redhat.cumin">
+ <class name="BrokerGroup">
+ <property name="name" type="sstr"/>
+ <property name="description" type="lstr"
optional="y"/>
+ </class>
+
+ <class name="BrokerGroupMapping">
+ <property name="broker" type="objId"
references="org.apache.qpid.broker:Broker"/>
+ <property name="group" type="objId"
references="BrokerGroup"/>
+ </class>
+
+ <class name="User">
+ <property name="name" type="sstr" index="y"/>
+ <property name="password" type="sstr"
index="y"/>
+ </class>
+
+ <class name="Role">
+ <property name="name" type="sstr" index="y"/>
+ </class>
+
+ <class name="UserRoleMapping">
+ <property name="user" references="User"
index="y"/>
+ <property name="role" references="Role"
index="y"/>
+ </class>
+</schema>
Added: mgmt/newdata/cumin/model/qpid-acl.xml
===================================================================
--- mgmt/newdata/cumin/model/qpid-acl.xml (rev 0)
+++ mgmt/newdata/cumin/model/qpid-acl.xml 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,44 @@
+<schema package="org.apache.qpid.acl">
+
+<!--
+ * Copyright (c) 2008 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+-->
+
+ <class name="Acl">
+ <property name="brokerRef" type="objId"
references="org.apache.qpid.broker:Broker" access="RO"
index="y" parentRef="y"/>
+ <property name="policyFile" type="sstr"
access="RO" desc="Name of the policy file"/>
+ <property name="enforcingAcl" type="bool"
access="RO" desc="Currently Enforcing ACL"/>
+ <property name="transferAcl" type="bool"
access="RO" desc="Any transfer ACL rules in force"/>
+ <property name="lastAclLoad" type="absTime"
access="RO" desc="Timestamp of last successful load of ACL"/>
+ <statistic name="aclDenyCount" type="count64"
unit="request" desc="Number of ACL requests denied"/>
+
+ <method name="reloadACLFile" desc="Reload the ACL file"/>
+ </class>
+
+ <eventArguments>
+ <arg name="action" type="sstr"/>
+ <arg name="arguments" type="map"/>
+ <arg name="objectName" type="sstr"/>
+ <arg name="objectType" type="sstr"/>
+ <arg name="reason" type="sstr"/>
+ <arg name="userId" type="sstr"/>
+ </eventArguments>
+
+ <event name="allow" sev="inform" args="userId,
action, objectType, objectName, arguments"/>
+ <event name="deny" sev="notice" args="userId,
action, objectType, objectName, arguments"/>
+ <event name="fileLoaded" sev="inform"
args="userId"/>
+ <event name="fileLoadFailed" sev="error" args="userId,
reason"/>
+
+</schema>
Added: mgmt/newdata/cumin/model/qpid-cluster.xml
===================================================================
--- mgmt/newdata/cumin/model/qpid-cluster.xml (rev 0)
+++ mgmt/newdata/cumin/model/qpid-cluster.xml 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,61 @@
+<schema package="org.apache.qpid.cluster">
+
+ <!--
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements. See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership. The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License. You may obtain a copy of the License at
+
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing,
+ software distributed under the License is distributed on an
+ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ KIND, either express or implied. See the License for the
+ specific language governing permissions and limitations
+ under the License.
+ -->
+
+ <!-- Type information:
+
+Numeric types with "_wm" suffix are watermarked numbers. These are compound
+values containing a current value, and a low and high water mark for the reporting
+interval. The low and high water marks are set to the current value at the
+beginning of each interval and track the minimum and maximum values of the statistic
+over the interval respectively.
+
+Access rights for configuration elements:
+
+RO => Read Only
+RC => Read/Create, can be set at create time only, read-only thereafter
+RW => Read/Write
+
+If access rights are omitted for a property, they are assumed to be RO.
+
+ -->
+
+ <class name="Cluster">
+    <property name="brokerRef" type="objId" references="org.apache.qpid.broker:Broker" access="RC" index="y" parentRef="y"/>
+    <property name="clusterName" type="sstr" access="RC" desc="Name of cluster this server is a member of"/>
+    <property name="clusterID" type="sstr" access="RO" desc="Globally unique ID (UUID) for this cluster instance"/>
+    <property name="memberID" type="sstr" access="RO" desc="ID of this member of the cluster"/>
+    <property name="publishedURL" type="sstr" access="RC" desc="URL this node advertizes itself as"/>
+    <property name="clusterSize" type="uint16" access="RO" desc="Number of brokers currently in the cluster"/>
+    <property name="status" type="sstr" access="RO" desc="Cluster node status (STALLED,ACTIVE,JOINING)"/>
+    <property name="members" type="lstr" access="RO" desc="List of member URLs delimited by ';'"/>
+    <property name="memberIDs" type="lstr" access="RO" desc="List of member IDs delimited by ';'"/>
+
+ <method name="stopClusterNode">
+ <arg name="brokerId" type="sstr" dir="I"/>
+ </method>
+ <method name="stopFullCluster"/>
+
+ </class>
+
+
+
+</schema>
+
Added: mgmt/newdata/cumin/model/qpid-store.xml
===================================================================
--- mgmt/newdata/cumin/model/qpid-store.xml (rev 0)
+++ mgmt/newdata/cumin/model/qpid-store.xml 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,103 @@
+<schema package="com.redhat.rhm.store">
+
+<!--
+ Copyright (c) 2007, 2008 Red Hat, Inc.
+
+ This file is part of the Qpid async store library msgstore.so.
+
+ This library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ This library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with this library; if not, write to the Free Software
+ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
+ USA
+
+ The GNU Lesser General Public License is available in the file COPYING.
+ -->
+
+ <class name="Store">
+    <property name="brokerRef" type="objId" access="RO" references="org.apache.qpid.broker:Broker" index="y" parentRef="y"/>
+    <property name="location" type="sstr" access="RO" desc="Logical directory on disk"/>
+    <property name="defaultInitialFileCount" type="uint16" access="RO" unit="file" desc="Default number of files initially allocated to each journal"/>
+    <property name="defaultDataFileSize" type="uint32" access="RO" unit="RdPg" desc="Default size of each journal data file"/>
+    <property name="tplIsInitialized" type="bool" access="RO" desc="Transaction prepared list has been initialized by a transactional prepare"/>
+    <property name="tplDirectory" type="sstr" access="RO" desc="Transaction prepared list directory"/>
+    <property name="tplWritePageSize" type="uint32" access="RO" unit="byte" desc="Page size in transaction prepared list write-page-cache"/>
+    <property name="tplWritePages" type="uint32" access="RO" unit="wpage" desc="Number of pages in transaction prepared list write-page-cache"/>
+    <property name="tplInitialFileCount" type="uint16" access="RO" unit="file" desc="Number of files initially allocated to transaction prepared list journal"/>
+    <property name="tplDataFileSize" type="uint32" access="RO" unit="byte" desc="Size of each journal data file in transaction prepared list journal"/>
+    <property name="tplCurrentFileCount" type="uint32" access="RO" unit="file" desc="Number of files currently allocated to transaction prepared list journal"/>
+
+    <statistic name="tplTransactionDepth" type="hilo32" unit="txn" desc="Number of currently enqueued prepared transactions"/>
+    <statistic name="tplTxnPrepares" type="count64" unit="record" desc="Total transaction prepares on transaction prepared list"/>
+    <statistic name="tplTxnCommits" type="count64" unit="record" desc="Total transaction commits on transaction prepared list"/>
+    <statistic name="tplTxnAborts" type="count64" unit="record" desc="Total transaction aborts on transaction prepared list"/>
+    <statistic name="tplOutstandingAIOs" type="hilo32" unit="aio_op" desc="Number of currently outstanding AIO requests in Async IO system"/>
+ </class>
+
+ <class name="Journal">
+    <property name="queueRef" type="objId" access="RO" references="org.apache.qpid.broker:Queue" isGeneralReference="y"/>
+    <property name="name" type="sstr" access="RO" index="y"/>
+    <property name="directory" type="sstr" access="RO" desc="Directory containing journal files"/>
+    <property name="baseFileName" type="sstr" access="RO" desc="Base filename prefix for journal"/>
+    <property name="writePageSize" type="uint32" access="RO" unit="byte" desc="Page size in write-page-cache"/>
+    <property name="writePages" type="uint32" access="RO" unit="wpage" desc="Number of pages in write-page-cache"/>
+    <property name="readPageSize" type="uint32" access="RO" unit="byte" desc="Page size in read-page-cache"/>
+    <property name="readPages" type="uint32" access="RO" unit="rpage" desc="Number of pages in read-page-cache"/>
+    <property name="initialFileCount" type="uint16" access="RO" unit="file" desc="Number of files initially allocated to this journal"/>
+    <property name="autoExpand" type="bool" access="RO" desc="Auto-expand enabled"/>
+    <property name="currentFileCount" type="uint16" access="RO" unit="file" desc="Number of files currently allocated to this journal"/>
+    <property name="maxFileCount" type="uint16" access="RO" unit="file" desc="Max number of files allowed for this journal"/>
+    <property name="dataFileSize" type="uint32" access="RO" unit="byte" desc="Size of each journal data file"/>
+
+    <statistic name="recordDepth" type="hilo32" unit="record" desc="Number of currently enqueued records (durable messages)"/>
+    <statistic name="enqueues" type="count64" unit="record" desc="Total enqueued records on journal"/>
+    <statistic name="dequeues" type="count64" unit="record" desc="Total dequeued records on journal"/>
+    <statistic name="txn" type="count32" unit="record" desc="Total open transactions (xids) on journal"/>
+    <statistic name="txnEnqueues" type="count64" unit="record" desc="Total transactional enqueued records on journal"/>
+    <statistic name="txnDequeues" type="count64" unit="record" desc="Total transactional dequeued records on journal"/>
+    <statistic name="txnCommits" type="count64" unit="record" desc="Total transactional commit records on journal"/>
+    <statistic name="txnAborts" type="count64" unit="record" desc="Total transactional abort records on journal"/>
+    <statistic name="outstandingAIOs" type="hilo32" unit="aio_op" desc="Number of currently outstanding AIO requests in Async IO system"/>
+
+<!--
+    The following are not yet "wired up" in JournalImpl.cpp
+-->
+    <statistic name="freeFileCount" type="hilo32" unit="file" desc="Number of files free on this journal. Includes free files trapped in holes."/>
+    <statistic name="availableFileCount" type="hilo32" unit="file" desc="Number of files available to be written. Excluding holes"/>
+    <statistic name="writeWaitFailures" type="count64" unit="record" desc="AIO Wait failures on write"/>
+    <statistic name="writeBusyFailures" type="count64" unit="record" desc="AIO Busy failures on write"/>
+    <statistic name="readRecordCount" type="count64" unit="record" desc="Records read from the journal"/>
+    <statistic name="readBusyFailures" type="count64" unit="record" desc="AIO Busy failures on read"/>
+    <statistic name="writePageCacheDepth" type="hilo32" unit="wpage" desc="Current depth of write-page-cache"/>
+    <statistic name="readPageCacheDepth" type="hilo32" unit="rpage" desc="Current depth of read-page-cache"/>
+
+    <method name="expand" desc="Increase number of files allocated for this journal">
+      <arg name="by" type="uint32" dir="I" desc="Number of files to increase journal size by"/>
+    </method>
+ </class>
+
+ <eventArguments>
+    <arg name="autoExpand" type="bool" desc="Journal auto-expand enabled"/>
+    <arg name="fileSize" type="uint32" desc="Journal file size in bytes"/>
+    <arg name="jrnlId" type="sstr" desc="Journal Id"/>
+    <arg name="numEnq" type="uint32" desc="Number of recovered enqueues"/>
+    <arg name="numFiles" type="uint16" desc="Number of journal files"/>
+    <arg name="numTxn" type="uint32" desc="Number of recovered transactions"/>
+    <arg name="numTxnDeq" type="uint32" desc="Number of recovered transactional dequeues"/>
+    <arg name="numTxnEnq" type="uint32" desc="Number of recovered transactional enqueues"/>
+    <arg name="what" type="sstr" desc="Description of event"/>
+  </eventArguments>
+  <event name="enqThresholdExceeded" sev="warn" args="jrnlId, what"/>
+  <event name="created" sev="notice" args="jrnlId, fileSize, numFiles"/>
+  <event name="full" sev="error" args="jrnlId, what"/>
+  <event name="recovered" sev="notice" args="jrnlId, fileSize, numFiles, numEnq, numTxn, numTxnEnq, numTxnDeq"/>
+</schema>
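
Several statistics above (those of type hilo32) are the watermarked values described in the type-information comments of these schemas: each sample carries a current value plus the low and high water marks seen since the start of the reporting interval. The small sketch below only illustrates that bookkeeping; it is not code from this commit and the names are invented.

    # Illustration only: reset-per-interval bookkeeping for a watermarked
    # ("hilo") statistic as described in the schema comments.
    class HiLoStat(object):
        def __init__(self, value=0):
            self.value = self.low = self.high = value

        def update(self, value):
            self.value = value
            self.low = min(self.low, value)
            self.high = max(self.high, value)

        def begin_interval(self):
            # At the start of each reporting interval the marks collapse
            # back to the current value.
            self.low = self.high = self.value

    depth = HiLoStat()
    for sample in (3, 7, 2, 5):
        depth.update(sample)
    print depth.value, depth.low, depth.high   # 5 0 7 (low includes the initial 0)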
Added: mgmt/newdata/cumin/model/qpid.xml
===================================================================
--- mgmt/newdata/cumin/model/qpid.xml (rev 0)
+++ mgmt/newdata/cumin/model/qpid.xml 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,388 @@
+<schema package="org.apache.qpid.broker">
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements. See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership. The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License. You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing,
+ software distributed under the License is distributed on an
+ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ KIND, either express or implied. See the License for the
+ specific language governing permissions and limitations
+ under the License.
+-->
+
+ <!-- Type information:
+
+  Numeric types with "_wm" suffix are watermarked numbers. These are compound
+  values containing a current value, and a low and high water mark for the
+  reporting interval. The low and high water marks are set to the current
+  value at the beginning of each interval and track the minimum and maximum
+  values of the statistic over the interval respectively.
+
+ Access rights for configuration elements:
+
+ RO => Read Only
+ RC => Read/Create, can be set at create time only, read-only thereafter
+ RW => Read/Write
+
+ If access rights are omitted for a property, they are assumed to be RO.
+
+ -->
+
+ <!-- Questions: Does C++ broker round-robin dests on queues? -->
+
+ <!--
+ ===============================================================
+ System
+ ===============================================================
+ -->
+ <class name="System">
+    <property name="systemId" index="y" type="uuid" access="RC"/>
+
+    <property name="osName" type="sstr" access="RO" desc="Operating System Name"/>
+    <property name="nodeName" type="sstr" access="RO" desc="Node Name"/>
+    <property name="release" type="sstr" access="RO"/>
+    <property name="version" type="sstr" access="RO"/>
+    <property name="machine" type="sstr" access="RO"/>
+
+ </class>
+
+ <!--
+ ===============================================================
+ Broker
+ ===============================================================
+ -->
+ <class name="Broker">
+    <property name="systemRef" type="objId" references="System" access="RC" index="y" desc="System ID" parentRef="y"/>
+    <property name="port" type="uint16" access="RC" index="y" desc="TCP Port for AMQP Service"/>
+    <property name="workerThreads" type="uint16" access="RO" desc="Thread pool size"/>
+    <property name="maxConns" type="uint16" access="RO" desc="Maximum allowed connections"/>
+    <property name="connBacklog" type="uint16" access="RO" desc="Connection backlog limit for listening socket"/>
+    <property name="stagingThreshold" type="uint32" access="RO" desc="Broker stages messages over this size to disk"/>
+    <property name="mgmtPubInterval" type="uint16" access="RW" unit="second" min="1" desc="Interval for management broadcasts"/>
+    <property name="version" type="sstr" access="RO" desc="Running software version"/>
+    <property name="dataDir" type="sstr" access="RO" optional="y" desc="Persistent configuration storage location"/>
+    <statistic name="uptime" type="deltaTime"/>
+
+    <method name="echo" desc="Request a response to test the path to the management broker">
+      <arg name="sequence" dir="IO" type="uint32" default="0"/>
+      <arg name="body" dir="IO" type="lstr" default=""/>
+    </method>
+
+    <method name="connect" desc="Establish a connection to another broker">
+      <arg name="host" dir="I" type="sstr"/>
+      <arg name="port" dir="I" type="uint32"/>
+      <arg name="durable" dir="I" type="bool"/>
+      <arg name="authMechanism" dir="I" type="sstr"/>
+      <arg name="username" dir="I" type="sstr"/>
+      <arg name="password" dir="I" type="sstr"/>
+      <arg name="transport" dir="I" type="sstr"/>
+    </method>
+
+    <method name="queueMoveMessages" desc="Move messages from one queue to another">
+      <arg name="srcQueue" dir="I" type="sstr" desc="Source queue"/>
+      <arg name="destQueue" dir="I" type="sstr" desc="Destination queue"/>
+      <arg name="qty" dir="I" type="uint32" desc="# of messages to move. 0 means all messages"/>
+    </method>
+
+ </class>
+
+ <!--
+ ===============================================================
+ Management Agent
+ ===============================================================
+ -->
+ <class name="Agent">
+    <property name="connectionRef" type="objId" references="Connection" access="RO" index="y"/>
+    <property name="label" type="sstr" access="RO" desc="Label for agent"/>
+    <property name="registeredTo" type="objId" references="Broker" access="RO" desc="Broker agent is registered to"/>
+    <property name="systemId" type="uuid" access="RO" desc="Identifier of system where agent resides"/>
+    <property name="brokerBank" type="uint32" access="RO" desc="Assigned object-id broker bank"/>
+    <property name="agentBank" type="uint32" access="RO" desc="Assigned object-id agent bank"/>
+ </class>
+
+ <!--
+ ===============================================================
+ Virtual Host
+ ===============================================================
+ -->
+ <class name="Vhost">
+    <property name="brokerRef" type="objId" references="Broker" access="RC" index="y" parentRef="y"/>
+    <property name="name" type="sstr" access="RC" index="y"/>
+    <property name="federationTag" type="sstr" access="RO"/>
+ </class>
+
+ <!--
+ ===============================================================
+ Queue
+ ===============================================================
+ -->
+ <class name="Queue">
+    <property name="vhostRef" type="objId" references="Vhost" access="RC" index="y" parentRef="y"/>
+    <property name="name" type="sstr" access="RC" index="y"/>
+
+    <property name="durable" type="bool" access="RC"/>
+    <property name="autoDelete" type="bool" access="RC"/>
+    <property name="exclusive" type="bool" access="RC"/>
+    <property name="arguments" type="map" access="RO" desc="Arguments supplied in queue.declare"/>
+    <property name="altExchange" type="objId" references="Exchange" access="RO" optional="y"/>
+
+    <statistic name="msgTotalEnqueues" type="count64" unit="message" desc="Total messages enqueued"/>
+    <statistic name="msgTotalDequeues" type="count64" unit="message" desc="Total messages dequeued"/>
+    <statistic name="msgTxnEnqueues" type="count64" unit="message" desc="Transactional messages enqueued"/>
+    <statistic name="msgTxnDequeues" type="count64" unit="message" desc="Transactional messages dequeued"/>
+    <statistic name="msgPersistEnqueues" type="count64" unit="message" desc="Persistent messages enqueued"/>
+    <statistic name="msgPersistDequeues" type="count64" unit="message" desc="Persistent messages dequeued"/>
+    <statistic name="msgDepth" type="count32" unit="message" desc="Current size of queue in messages" assign="msgTotalEnqueues - msgTotalDequeues"/>
+    <statistic name="byteDepth" type="count32" unit="octet" desc="Current size of queue in bytes" assign="byteTotalEnqueues - byteTotalDequeues"/>
+    <statistic name="byteTotalEnqueues" type="count64" unit="octet" desc="Total messages enqueued"/>
+    <statistic name="byteTotalDequeues" type="count64" unit="octet" desc="Total messages dequeued"/>
+    <statistic name="byteTxnEnqueues" type="count64" unit="octet" desc="Transactional messages enqueued"/>
+    <statistic name="byteTxnDequeues" type="count64" unit="octet" desc="Transactional messages dequeued"/>
+    <statistic name="bytePersistEnqueues" type="count64" unit="octet" desc="Persistent messages enqueued"/>
+    <statistic name="bytePersistDequeues" type="count64" unit="octet" desc="Persistent messages dequeued"/>
+    <statistic name="consumerCount" type="hilo32" unit="consumer" desc="Current consumers on queue"/>
+    <statistic name="bindingCount" type="hilo32" unit="binding" desc="Current bindings"/>
+    <statistic name="unackedMessages" type="hilo32" unit="message" desc="Messages consumed but not yet acked"/>
+    <statistic name="messageLatency" type="mmaTime" unit="nanosecond" desc="Broker latency through this queue" optional="y"/>
+
+    <method name="purge" desc="Discard all or some messages on a queue">
+      <arg name="request" dir="I" type="uint32" desc="0 for all messages or n>0 for n messages"/>
+    </method>
+
+    <method name="reroute" desc="Remove all or some messages on this queue and route them to an exchange">
+      <arg name="request" dir="I" type="uint32" desc="0 for all messages or n>0 for n messages"/>
+      <arg name="useAltExchange" dir="I" type="bool" desc="Iff true, use the queue's configured alternate exchange; iff false, use exchange named in the 'exchange' argument"/>
+      <arg name="exchange" dir="I" type="sstr" desc="Name of the exchange to route the messages through"/>
+    </method>
+ </class>
+
+ <!--
+ ===============================================================
+ Exchange
+ ===============================================================
+ -->
+ <class name="Exchange">
+    <property name="vhostRef" type="objId" references="Vhost" access="RC" index="y" parentRef="y"/>
+    <property name="name" type="sstr" access="RC" index="y"/>
+    <property name="type" type="sstr" access="RO"/>
+    <property name="durable" type="bool" access="RO"/>
+    <property name="autoDelete" type="bool" access="RO"/>
+    <property name="altExchange" type="objId" references="Exchange" access="RO" optional="y"/>
+    <property name="arguments" type="map" access="RO" desc="Arguments supplied in exchange.declare"/>
+
+    <statistic name="producerCount" type="hilo32" desc="Current producers on exchange"/>
+    <statistic name="bindingCount" type="hilo32" desc="Current bindings"/>
+    <statistic name="msgReceives" type="count64" desc="Total messages received"/>
+    <statistic name="msgDrops" type="count64" desc="Total messages dropped (no matching key)"/>
+    <statistic name="msgRoutes" type="count64" desc="Total routed messages"/>
+    <statistic name="byteReceives" type="count64" desc="Total bytes received"/>
+    <statistic name="byteDrops" type="count64" desc="Total bytes dropped (no matching key)"/>
+    <statistic name="byteRoutes" type="count64" desc="Total routed bytes"/>
+ </class>
+
+ <!--
+ ===============================================================
+ Binding
+ ===============================================================
+ -->
+ <class name="Binding">
+    <property name="exchangeRef" type="objId" references="Exchange" access="RC" index="y" parentRef="y"/>
+    <property name="queueRef" type="objId" references="Queue" access="RC" index="y"/>
+    <property name="bindingKey" type="sstr" access="RC" index="y"/>
+    <property name="arguments" type="map" access="RC"/>
+    <property name="origin" type="sstr" access="RO" optional="y"/>
+
+ <statistic name="msgMatched" type="count64"/>
+ </class>
+
+ <!--
+ ===============================================================
+ Subscription
+ ===============================================================
+ -->
+ <class name="Subscription">
+    <property name="sessionRef" type="objId" references="Session" access="RC" index="y" parentRef="y"/>
+    <property name="queueRef" type="objId" references="Queue" access="RC" index="y"/>
+    <property name="name" type="sstr" access="RC" index="y"/>
+    <property name="browsing" type="bool" access="RC"/>
+    <property name="acknowledged" type="bool" access="RC"/>
+    <property name="exclusive" type="bool" access="RC"/>
+    <property name="creditMode" type="sstr" access="RO" desc="WINDOW or CREDIT"/>
+    <property name="arguments" type="map" access="RC"/>
+    <statistic name="delivered" type="count64" unit="message" desc="Messages delivered"/>
+ </class>
+
+ <!--
+ ===============================================================
+ Connection
+ ===============================================================
+ -->
+ <class name="Connection">
+    <property name="vhostRef" type="objId" references="Vhost" access="RC" index="y" parentRef="y"/>
+    <property name="address" type="sstr" access="RC" index="y"/>
+    <property name="incoming" type="bool" access="RC"/>
+    <property name="SystemConnection" type="bool" access="RC" desc="Infrastructure / inter-system connection (Cluster, Federation, ...)"/>
+    <property name="federationLink" type="bool" access="RO" desc="Is this a federation link"/>
+    <property name="authIdentity" type="sstr" access="RO" desc="authId of connection if authentication enabled"/>
+    <property name="remoteProcessName" type="sstr" access="RO" optional="y" desc="Name of executable running as remote client"/>
+    <property name="remotePid" type="uint32" access="RO" optional="y" desc="Process ID of remote client"/>
+    <property name="remoteParentPid" type="uint32" access="RO" optional="y" desc="Parent Process ID of remote client"/>
+    <property name="shadow" type="bool" access="RO" desc="True for shadow connections"/>
+    <statistic name="closing" type="bool" desc="This client is closing by management request"/>
+ <statistic name="framesFromClient" type="count64"/>
+ <statistic name="framesToClient" type="count64"/>
+ <statistic name="bytesFromClient" type="count64"/>
+ <statistic name="bytesToClient" type="count64"/>
+
+ <method name="close"/>
+ </class>
+
+ <!--
+ ===============================================================
+ Link
+ ===============================================================
+ -->
+ <class name="Link">
+
+ This class represents an inter-broker connection.
+
+    <property name="vhostRef" type="objId" references="Vhost" access="RC" index="y" parentRef="y"/>
+    <property name="host" type="sstr" access="RC" index="y"/>
+    <property name="port" type="uint16" access="RC" index="y"/>
+    <property name="transport" type="sstr" access="RC"/>
+    <property name="durable" type="bool" access="RC"/>
+
+    <statistic name="state" type="sstr" desc="Operational state of the link"/>
+    <statistic name="lastError" type="sstr" desc="Reason link is not operational"/>
+
+ <method name="close"/>
+
+    <method name="bridge" desc="Bridge messages over the link">
+      <arg name="durable" dir="I" type="bool"/>
+      <arg name="src" dir="I" type="sstr"/>
+      <arg name="dest" dir="I" type="sstr"/>
+      <arg name="key" dir="I" type="sstr"/>
+      <arg name="tag" dir="I" type="sstr"/>
+      <arg name="excludes" dir="I" type="sstr"/>
+      <arg name="srcIsQueue" dir="I" type="bool"/>
+      <arg name="srcIsLocal" dir="I" type="bool"/>
+      <arg name="dynamic" dir="I" type="bool"/>
+      <arg name="sync" dir="I" type="uint16"/>
+ </method>
+ </class>
+
+
+ <!--
+ ===============================================================
+ Bridge
+ ===============================================================
+ -->
+ <class name="Bridge">
+    <property name="linkRef" type="objId" references="Link" access="RC" index="y" parentRef="y"/>
+    <property name="channelId" type="uint16" access="RC" index="y"/>
+    <property name="durable" type="bool" access="RC"/>
+    <property name="src" type="sstr" access="RC"/>
+    <property name="dest" type="sstr" access="RC"/>
+    <property name="key" type="sstr" access="RC"/>
+    <property name="srcIsQueue" type="bool" access="RC"/>
+    <property name="srcIsLocal" type="bool" access="RC"/>
+    <property name="tag" type="sstr" access="RC"/>
+    <property name="excludes" type="sstr" access="RC"/>
+    <property name="dynamic" type="bool" access="RC"/>
+    <property name="sync" type="uint16" access="RC"/>
+ <method name="close"/>
+ </class>
+
+
+ <!--
+ ===============================================================
+ Session
+ ===============================================================
+ -->
+ <class name="Session">
+    <property name="vhostRef" type="objId" references="Vhost" access="RC" index="y" parentRef="y"/>
+    <property name="name" type="sstr" access="RC" index="y"/>
+    <property name="channelId" type="uint16" access="RO"/>
+    <property name="connectionRef" type="objId" references="Connection" access="RO"/>
+    <property name="detachedLifespan" type="uint32" access="RO" unit="second"/>
+    <property name="attached" type="bool" access="RO"/>
+    <property name="expireTime" type="absTime" access="RO" optional="y"/>
+    <property name="maxClientRate" type="uint32" access="RO" unit="msgs/sec" optional="y"/>
+
+    <statistic name="framesOutstanding" type="count32"/>
+
+    <statistic name="TxnStarts" type="count64" unit="transaction" desc="Total transactions started"/>
+    <statistic name="TxnCommits" type="count64" unit="transaction" desc="Total transactions committed"/>
+    <statistic name="TxnRejects" type="count64" unit="transaction" desc="Total transactions rejected"/>
+    <statistic name="TxnCount" type="count32" unit="transaction" desc="Current pending transactions"/>
+
+    <statistic name="clientCredit" type="count32" unit="message" desc="Client message credit"/>
+
+ <method name="solicitAck"/>
+ <method name="detach"/>
+ <method name="resetLifespan"/>
+ <method name="close"/>
+ </class>
+
+ <!--
+ ===============================================================
+ ManagementSetupState
+ ===============================================================
+
+ This thing is used during cluster recovery operations (and maybe
+ eventually elsewhere) to transmit assorted state from one broker to
+ another. At present, the two data propagated are the object number
+ counter and boot sequence, both of which are used for creating
+ object ids for newly-created objects.
+
+ -->
+ <class name="ManagementSetupState">
+ <!-- for reasons that aren't clear (to me, anyhow) you have to say
+ access="RO" to get accessor methods defined. RC or RW don't do
+ it. Probably this is documented someplace, but I couldn't find
+ it. -jrd -->
+    <property name="objectNum" type="uint64" access="RO"/>
+    <property name="bootSequence" type="uint16" access="RO"/>
+ </class>
+
+ <eventArguments>
+    <arg name="altEx" type="sstr" desc="Name of the alternate exchange"/>
+    <arg name="args" type="map" desc="Supplemental arguments or parameters supplied"/>
+    <arg name="autoDel" type="bool" desc="Created object is automatically deleted when no longer in use"/>
+    <arg name="dest" type="sstr" desc="Destination tag for a subscription"/>
+    <arg name="disp" type="sstr" desc="Disposition of a declaration: 'created' if object was created, 'existing' if object already existed"/>
+    <arg name="durable" type="bool" desc="Created object is durable"/>
+    <arg name="exName" type="sstr" desc="Name of an exchange"/>
+    <arg name="exType" type="sstr" desc="Type of an exchange"/>
+    <arg name="excl" type="bool" desc="Created object is exclusive for the use of the owner only"/>
+    <arg name="key" type="lstr" desc="Key text used for routing or binding"/>
+    <arg name="qName" type="sstr" desc="Name of a queue"/>
+    <arg name="reason" type="lstr" desc="Reason for a failure"/>
+    <arg name="rhost" type="sstr" desc="Address (i.e. DNS name, IP address, etc.) of a remotely connected host"/>
+    <arg name="user" type="sstr" desc="Authentication identity"/>
+ </eventArguments>
+
+  <event name="clientConnect" sev="inform" args="rhost, user"/>
+  <event name="clientConnectFail" sev="warn" args="rhost, user, reason"/>
+  <event name="clientDisconnect" sev="inform" args="rhost, user"/>
+  <event name="brokerLinkUp" sev="inform" args="rhost"/>
+  <event name="brokerLinkDown" sev="warn" args="rhost"/>
+  <event name="queueDeclare" sev="inform" args="rhost, user, qName, durable, excl, autoDel, args, disp"/>
+  <event name="queueDelete" sev="inform" args="rhost, user, qName"/>
+  <event name="exchangeDeclare" sev="inform" args="rhost, user, exName, exType, altEx, durable, autoDel, args, disp"/>
+  <event name="exchangeDelete" sev="inform" args="rhost, user, exName"/>
+  <event name="bind" sev="inform" args="rhost, user, exName, qName, key, args"/>
+  <event name="unbind" sev="inform" args="rhost, user, exName, qName, key"/>
+  <event name="subscribe" sev="inform" args="rhost, user, qName, dest, excl, args"/>
+  <event name="unsubscribe" sev="inform" args="rhost, user, dest"/>
+</schema>
+
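
The Queue class in the schema above declares two derived statistics: msgDepth and byteDepth carry assign expressions and are computed from the corresponding enqueue/dequeue totals rather than sampled directly. A minimal sketch of that relationship, purely illustrative and with invented sample numbers, not code from this commit:

    # Illustration of the assign= expressions on the Queue statistics above.
    sample = {
        "msgTotalEnqueues": 1500, "msgTotalDequeues": 1380,
        "byteTotalEnqueues": 310000, "byteTotalDequeues": 289000,
    }

    msg_depth = sample["msgTotalEnqueues"] - sample["msgTotalDequeues"]     # 120
    byte_depth = sample["byteTotalEnqueues"] - sample["byteTotalDequeues"]  # 21000

    print "queue depth: %d messages, %d octets" % (msg_depth, byte_depth)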
Added: mgmt/newdata/cumin/model/rosemary.xml
===================================================================
--- mgmt/newdata/cumin/model/rosemary.xml (rev 0)
+++ mgmt/newdata/cumin/model/rosemary.xml 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,165 @@
+<model>
+ <package name="org.apache.qpid.broker">
+ <class name="Binding">
+ <property name="bindingKey">
+ <title>Binding Key</title>
+ </property>
+
+ <property name="arguments">
+ <title>Arguments</title>
+ </property>
+
+ <property name="origin">
+ <title>Origin</title>
+ </property>
+
+ <statistic name="msgMatched">
+ <title>Messages Matched</title>
+ </statistic>
+ </class>
+
+ <class name="Broker">
+ <object>
+ <title>host:%(port)s</title>
+ </object>
+
+ <property name="port">
+ <title>Port</title>
+ </property>
+ </class>
+
+ <class name="Connection">
+ <property name="remotePid">
+ <title>Process ID</title>
+ <!-- value -->
+ </property>
+
+ <property name="remoteParentPid">
+ <title>Parent PID</title>
+ </property>
+ </class>
+
+ <class name="Exchange">
+ <property name="name">
+ <title>Name</title>
+ </property>
+
+ <statistic name="producerCount">
+ <title>Producers</title>
+ </statistic>
+
+ <statistic name="bindingCount">
+ <title>Bindings</title>
+ </statistic>
+
+ <statistic name="msgRoutes">
+ <title>Messages Routed</title>
+ </statistic>
+
+ <statistic name="byteRoutes">
+ <title>Bytes Routed</title>
+ </statistic>
+ </class>
+
+ <class name="Queue">
+ <property name="name">
+ <title>Name</title>
+ </property>
+
+ <statistic name="consumerCount">
+ <title>Consumers</title>
+ </statistic>
+
+ <statistic name="bindingCount">
+ <title>Bindings</title>
+ </statistic>
+
+ <statistic name="msgDepth">
+ <title>Queue Messages</title>
+ </statistic>
+
+ <statistic name="byteDepth">
+ <title>Bytes</title>
+ </statistic>
+
+ <statistic name="msgPersistEnqueues">
+ <title>Msgs. Enqueued</title>
+ </statistic>
+
+ <statistic name="msgPersistDequeues">
+ <title>Msgs. Dequeued</title>
+ </statistic>
+
+ <statistic name="bytePersistEnqueues">
+ <title>Bytes Enqueued</title>
+ </statistic>
+
+ <statistic name="bytePersistDequeues">
+ <title>Bytes Dequeued</title>
+ </statistic>
+
+ <statistic name="msgTotalEnqueues">
+ <title>Msgs. Enqueued</title>
+ </statistic>
+
+ <statistic name="msgTotalDequeues">
+ <title>Msgs. Dequeued</title>
+ </statistic>
+
+ <statistic name="byteTotalEnqueues">
+ <title>Bytes Enqueued</title>
+ </statistic>
+
+ <statistic name="byteTotalDequeues">
+ <title>Bytes Dequeued</title>
+ </statistic>
+
+ <statistic name="unackedMessages">
+ <title>Msgs. Unacked</title>
+ </statistic>
+
+ <statistic name="messageLatency">
+ <title>Msg. Latency</title>
+ </statistic>
+ </class>
+
+ <class name="System">
+ <property name="nodeName">
+ <title>Host</title>
+ </property>
+ </class>
+ </package>
+
+ <package name="org.apache.qpid.cluster">
+ <class name="Cluster">
+ <property name="clusterName">
+ <title>Cluster</title>
+ </property>
+ </class>
+ </package>
+
+ <package name="com.redhat.cumin">
+ <class name="BrokerGroup">
+ <title>Broker Group</title>
+
+ <property name="name">
+ <title>Name</title>
+ </property>
+
+ <property name="description">
+ <title>Description</title>
+ </property>
+ </class>
+ </package>
+
+ <package name="com.redhat.sesame">
+ <class name="Sysimage">
+ <property name="nodeName">
+ <title>Host</title>
+ </property>
+ <statistic name="loadAverage1Min">
+ <title>Load Average 1 Minute</title>
+ </statistic>
+ </class>
+ </package>
+</model>
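
Unlike the QMF schemas above, rosemary.xml is a presentation overlay: it attaches human-readable titles to a subset of the classes, properties, and statistics exposed by the data model. The fragment below only sketches the kind of lookup such an overlay enables; the helper function and the parsing approach are illustrative assumptions, not the Rosemary API.

    # Hypothetical helper, for illustration only (not the Rosemary API):
    # fall back to the raw attribute name when no <title> is defined.
    from xml.dom.minidom import parse

    def display_title(model_path, package_name, class_name, attr_name):
        doc = parse(model_path)
        for package in doc.getElementsByTagName("package"):
            if package.getAttribute("name") != package_name:
                continue
            for cls in package.getElementsByTagName("class"):
                if cls.getAttribute("name") != class_name:
                    continue
                for node in cls.childNodes:
                    if node.nodeType != node.ELEMENT_NODE:
                        continue
                    if node.getAttribute("name") != attr_name:
                        continue
                    titles = node.getElementsByTagName("title")
                    if titles:
                        return titles[0].firstChild.data
        return attr_name

    print display_title("rosemary.xml", "org.apache.qpid.broker",
                        "Queue", "msgDepth")        # -> "Queue Messages"
    print display_title("rosemary.xml", "org.apache.qpid.broker",
                        "Queue", "msgTxnEnqueues")  # falls back to the raw name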
Added: mgmt/newdata/cumin/model/sesame.xml
===================================================================
--- mgmt/newdata/cumin/model/sesame.xml (rev 0)
+++ mgmt/newdata/cumin/model/sesame.xml 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,31 @@
+<schema package="com.redhat.sesame">
+
+ <class name="Sysimage">
+    <property name="uuid" index="y" type="uuid" access="RC" desc="UUID of System Image"/>
+
+    <property name="osName" type="sstr" access="RO" desc="Operating System Name"/>
+    <property name="nodeName" type="sstr" access="RO" desc="Node Name"/>
+    <property name="release" type="sstr" access="RO"/>
+    <property name="version" type="sstr" access="RO"/>
+    <property name="machine" type="sstr" access="RO"/>
+    <property name="distro" type="sstr" access="RO" optional="y"/>
+
+    <property name="memTotal" type="uint32" access="RO" unit="kByte"/>
+    <property name="swapTotal" type="uint32" access="RO" unit="kByte"/>
+
+ The following statistics are gathered from /proc/meminfo
+
+    <statistic name="memFree" type="uint32" unit="kByte"/>
+    <statistic name="swapFree" type="uint32" unit="kByte"/>
+
+ The following statistics are gathered from /proc/loadavg
+
+ <statistic name="loadAverage1Min" type="float"/>
+ <statistic name="loadAverage5Min" type="float"/>
+ <statistic name="loadAverage10Min" type="float"/>
+ <statistic name="procTotal" type="uint32"/>
+ <statistic name="procRunning" type="uint32"/>
+ </class>
+
+</schema>
+
Modified: mgmt/newdata/cumin/python/cumin/account/main.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/account/main.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/account/main.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,17 +1,12 @@
from cumin import *
from cumin.util import *
-from model import *
from widgets import *
class Module(CuminModule):
def __init__(self, app, name):
super(Module, self).__init__(app, name)
- #cls = app.rosemary.com_redhat_cumin.User
-
- #ChangePassword(self, cls)
-
self.app.login_page = LoginPage(self.app, "login.html")
self.app.add_page(self.app.login_page)
Deleted: mgmt/newdata/cumin/python/cumin/account/model.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/account/model.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/account/model.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,22 +0,0 @@
-from cumin.objecttask import *
-from cumin.model import *
-from cumin.util import *
-
-from widgets import *
-
-class ChangePassword(ObjectTask):
- def __init__(self, module, cls):
- super(ChangePassword, self).__init__(module, cls)
-
- self.form = ChangePasswordForm(module.app, self.name, self)
-
- def get_title(self, session, user):
- return "Change password"
-
- def do_enter(self, session, user):
- pass
-
- def do_invoke(self, invoc, user, password):
- # XXX
- user.password = crypt_password(password)
- user.syncUpdate()
Modified: mgmt/newdata/cumin/python/cumin/account/widgets.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/account/widgets.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/account/widgets.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -8,7 +8,6 @@
from cumin.util import *
import main
-import model
from wooly import Session
@@ -34,25 +33,32 @@
self.settings = SettingsFrame(app, "main")
self.add_tab(self.settings)
-class SettingsFrame(CuminFrame):
+class SettingsFrame(Frame):
def __init__(self, app, name):
super(SettingsFrame, self).__init__(app, name)
- self.view = SettingsView(app, "view")
- self.add_mode(self.view)
+ self.change_password_form = ChangePasswordForm(app, "change_password")
+ self.app.form_page.modes.add_mode(self.change_password_form)
+ link = self.ChangePasswordLink(app, "change_password")
+ self.add_child(link)
+
def render_title(self, session):
return "Settings"
-class SettingsView(Widget):
- def init(self):
- # XXX deferring this, but I don't like it
- #task = self.app.account.ChangePassword
- #link = ObjectTaskLink(self.app, "change_password", task, None)
- #self.add_child(link)
+ class ChangePasswordLink(Link):
+ def render_href(self, session):
+ nsession = wooly.Session(self.app.form_page)
+ form = self.frame.change_password_form
+
+ form.return_url.set(nsession, session.marshal())
+ form.show(nsession)
- super(SettingsView, self).init()
+ return nsession.marshal()
+        def render_content(self, session):
+ return "Change Password"
+
class LoginPage(HtmlPage):
def __init__(self, app, name):
super(LoginPage, self).__init__(app, name)
@@ -110,13 +116,12 @@
self.validate(session)
if not self.errors.get(session):
- cls = self.app.rosemary.com_redhat_cumin.User
- user = None
-
- for obj in cls.get_selection(session.cursor, name=name):
- user = obj
- break
+ conn = self.app.database.get_connection()
+ cursor = conn.cursor()
+ cls = self.app.model.com_redhat_cumin.User
+ user = cls.get_object(cursor, name=name)
+
if not user:
self.login_invalid.set(session, True)
return
@@ -126,7 +131,7 @@
if crypted and crypt(password, crypted) == crypted:
# You're in!
- login = model.LoginSession(self.app, user)
+ login = LoginSession(self.app, user)
session.client_session.attributes["login_session"] = login
url = self.page.origin.get(session)
@@ -136,7 +141,7 @@
self.login_invalid.set(session, True)
def render_operator_link(self, session):
- email = self.app.config.operator_email
+ email = self.app.operator_email
if email:
return "<a href=\"mailto:%s\">site
operator</a>" % email
@@ -151,9 +156,9 @@
def render_content(self, session):
return "Submit"
-class ChangePasswordForm(ObjectTaskForm):
- def __init__(self, app, name, task):
- super(ChangePasswordForm, self).__init__(app, name, task)
+class ChangePasswordForm(FoldingFieldSubmitForm):
+ def __init__(self, app, name):
+ super(ChangePasswordForm, self).__init__(app, name)
self.current = self.Current(app, "current")
self.current.required = True
@@ -192,12 +197,20 @@
self.validate(session)
if not self.errors.get(session):
- user = session.client_session.attributes["login_session"].user
+ conn = self.app.database.get_connection()
+ cursor = conn.cursor()
+
password = self.new0.get(session)
- self.task.invoke(session, user, password)
- self.task.exit_with_redirect(session)
+ user = session.client_session.attributes["login_session"].user
+ user.password = crypt_password(password)
+ user.save(cursor)
+ conn.commit()
+
+ url = self.return_url.get(session)
+ self.page.redirect.set(session, url)
+
class Current(PasswordField):
def render_title(self, session):
return "Current password"
Added: mgmt/newdata/cumin/python/cumin/admin.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/admin.py (rev 0)
+++ mgmt/newdata/cumin/python/cumin/admin.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,71 @@
+from StringIO import StringIO
+
+from util import *
+
+log = logging.getLogger("cumin.admin")
+
+class CuminAdmin(object):
+ def __init__(self, app):
+ self.app = app
+
+ def get_schema(self):
+ writer = StringIO()
+ self.app.model.sql_model.write_create_ddl(writer)
+ return writer.getvalue()
+
+ def create_schema(self, cursor):
+ cursor.execute(self.get_schema())
+
+ def drop_schema(self, cursor):
+ writer = StringIO()
+ self.app.model.sql_model.write_drop_ddl(writer)
+ sql = writer.getvalue()
+
+ cursor.execute(sql)
+
+ def get_role(self, cursor, name):
+ cls = self.app.model.com_redhat_cumin.Role
+ return cls.get_object(cursor, name=name)
+
+ def add_role(self, cursor, name):
+ cls = self.app.model.com_redhat_cumin.Role
+
+ role = cls.create_object(cursor)
+ role.name = name
+ role.fake_qmf_values()
+ role.save(cursor)
+
+ return role
+
+ def get_user(self, cursor, name):
+ cls = self.app.model.com_redhat_cumin.User
+ return cls.get_object(cursor, name=name)
+
+ def add_user(self, cursor, name, crypted_password):
+ cls = self.app.model.com_redhat_cumin.User
+
+ user = cls.create_object(cursor)
+ user.name = name
+ user.password = crypted_password
+ user.fake_qmf_values()
+ user.save(cursor)
+
+ return user
+
+ def get_assignment(self, cursor, user, role):
+ cls = self.app.model.com_redhat_cumin.UserRoleMapping
+
+ mapping = cls.get_object(cursor, _user_id=user._id, _role_id=role._id)
+
+ return mapping
+
+ def add_assignment(self, cursor, user, role):
+ cls = self.app.model.com_redhat_cumin.UserRoleMapping
+
+ mapping = cls.create_object(cursor)
+ mapping._user_id = user._id
+ mapping._role_id = role._id
+ mapping.fake_qmf_values()
+ mapping.save(cursor)
+
+ return mapping
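
The CuminAdmin helper added above wraps the common bootstrap operations (schema creation, roles, users, role assignments) around a plain database cursor. A rough usage sketch follows; it is not part of this commit, it assumes `app` is an initialized Cumin instance as constructed in the cumin/main.py changes below, and it assumes crypt_password() is the cumin.util helper used elsewhere in this changeset.

    # Illustrative sketch, not part of this commit.
    from cumin.util import crypt_password

    conn = app.database.get_connection()
    cursor = conn.cursor()

    app.admin.create_schema(cursor)          # emits and executes the create DDL

    role = app.admin.get_role(cursor, "admin")
    if role is None:
        role = app.admin.add_role(cursor, "admin")

    user = app.admin.add_user(cursor, "fred", crypt_password("secret"))
    app.admin.add_assignment(cursor, user, role)

    conn.commit()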
Modified: mgmt/newdata/cumin/python/cumin/config.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/config.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/config.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,10 +1,10 @@
-import os
-import sys
-import logging
+from optparse import OptionParser
-from parsley.config import Config, ConfigParameter
+from parsley.config import *
from parsley.loggingex import *
+from util import *
+
log = logging.getLogger("cumin.config")
class CuminConfig(Config):
@@ -17,17 +17,48 @@
if not os.path.isdir(self.home):
raise Exception("Home path '%s' is not a directory")
- sdef = os.path.normpath("/usr/share/amqp/amqp.0-10-qpid-errata.xml")
- self.spec = os.environ.get("AMQP_SPEC", sdef)
+ web = CuminConfigSection(self, "web")
- param = ConfigParameter(self, "data", str)
- param.default = "postgresql://cumin@localhost/cumin"
+ param = ConfigParameter(web, "host", str)
+ param.default = "localhost"
- param = ConfigParameter(self, "qmf", str)
+ param = ConfigParameter(web, "port", int)
+ param.default = 45672
+
+ param = ConfigParameter(web, "operator-email", str)
+
+ param = ConfigParameter(web, "user", str)
+
+ data = CuminConfigSection(self, "data")
+
+ param = ConfigParameter(data, "expire-frequency", int)
+ param.default = 600 # 10 minutes
+
+ param = ConfigParameter(data, "expire-threshold", int)
+ param.default = 24 * 3600 # 1 day
+
+ def parse(self):
+ paths = list()
+ paths.append(os.path.join(self.home, "etc", "cumin.conf"))
+        paths.append(os.path.join(os.path.expanduser("~"), ".cumin.conf"))
+
+ return self.parse_files(paths)
+
+class CuminConfigSection(ConfigSection):
+ def __init__(self, config, name):
+ super(CuminConfigSection, self).__init__(config, name)
+
+ param = ConfigParameter(self, "database", str)
+ param.default = "dbname=cumin user=cumin host=localhost"
+
+ param = ConfigParameter(self, "broker", str)
param.default = "amqp://localhost"
+ param = ConfigParameter(self, "model", str)
+ param.default = os.path.join(self.config.home, "xml")
+
param = ConfigParameter(self, "log-file", str)
- param.default = os.path.join(self.home, "log", "cumin.log")
+        param.default = os.path.join(self.config.home, "log", "cumin.log")
param = ConfigParameter(self, "log-level", str)
param.default = "warn"
@@ -35,27 +66,24 @@
param = ConfigParameter(self, "debug", bool)
param.default = False
- param = ConfigParameter(self, "user", str)
+class CuminOptionParser(OptionParser):
+ def __init__(self, section):
+ OptionParser.__init__(self)
- param = ConfigParameter(self, "operator-email", str)
+ self.add_option("--database", default=section.database)
+ self.add_option("--broker", default=section.broker)
+ self.add_option("--model", default=section.model)
+ self.add_option("--log-file", default=section.log_file)
+ self.add_option("--log-level", default=section.log_level)
+ self.add_option("--debug", default=section.debug)
+ self.add_option("--init-only", action="store_true")
- self.expire_frequency = 600
- self.expire_threshold = 24 * 3600
+def setup_logging(values):
+    modules = ("cumin", "mint", "parsley", "rosemary", "wooly")
- def init(self, opts=None):
- super(CuminConfig, self).init()
+ for name in modules:
+ enable_logging(name, values.log_level, values.log_file)
- self.load_file(os.path.join(self.home, "etc", "cumin.conf"))
-        self.load_file(os.path.join(os.path.expanduser("~"), ".cumin.conf"))
-
- if opts:
- self.load_dict(opts)
-
-        modules = ("cumin", "mint", "parsley", "rosemary", "wooly")
-
+ if values.debug:
for name in modules:
- enable_logging(name, self.log_level, self.log_file)
-
- if self.debug:
- for name in modules:
- enable_logging(name, "debug", sys.stderr)
+ enable_logging(name, "debug", sys.stderr)
Added: mgmt/newdata/cumin/python/cumin/database.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/database.py (rev 0)
+++ mgmt/newdata/cumin/python/cumin/database.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,67 @@
+import psycopg2
+import re
+
+from util import *
+
+log = logging.getLogger("cumin.database")
+
+class CuminDatabase(object):
+ def __init__(self, app, dsn):
+ self.app = app
+ self.dsn = dsn
+
+ self.connection_args = dict()
+
+ def init(self):
+ log.info("Initializing %s", self)
+
+ #m = re.match(r"^([^:]+)://([^@]+)@([^/]+)/(.+)$", self.uri)
+
+ def get_connection(self):
+ return psycopg2.connect(self.dsn)
+
+ def __repr__(self):
+ return self.__class__.__name__
+
+def modify_pghba_conf(path, database_name, user_name):
+ comment_or_empty_line_pattern = re.compile('^\w*#|^\w*$')
+ record_pattern = re.compile('^\w*(local|host|hostssl|hostnossl)')
+
+ file = open(path, "r")
+
+ lines = list()
+ first_record_index = None
+
+ for i, line in enumerate(file):
+ lines.append(line)
+
+ if record_pattern.match(line):
+ if first_record_index is None:
+ first_record_index = i
+
+ tokens = line.split()
+
+ if tokens[1] == database_name:
+ raise Exception("This file already contains a " + \
+ "%s record" % database_name)
+ elif comment_or_empty_line_pattern.match(line):
+ pass
+ else:
+ raise Exception("This doesn't look like a pg_hba.conf file")
+
+ file.close()
+
+ if first_record_index is None:
+ first_record_index = len(lines)
+
+ line = "host %s %s ::1/128 trust\n" % (database_name, user_name)
+ lines.insert(first_record_index, line)
+ line = "host %s %s 127.0.0.1/32 trust\n" % (database_name, user_name)
+ lines.insert(first_record_index, line)
+
+ file = open(path, "w")
+
+ for line in lines:
+ file.write(line)
+
+ file.close()
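
modify_pghba_conf() above inserts two "trust" records (IPv6 and IPv4 loopback) for the given database and user ahead of the first existing record in pg_hba.conf, and refuses to touch a file that already mentions the database or that does not look like pg_hba.conf at all. A hedged usage sketch, not part of this commit; the pg_hba.conf path is an assumption and varies by installation.

    # Illustration only; the path below is an assumption.
    from cumin.database import CuminDatabase, modify_pghba_conf

    modify_pghba_conf("/var/lib/pgsql/data/pg_hba.conf", "cumin", "cumin")
    # (PostgreSQL must reload its configuration before the new records apply.)

    database = CuminDatabase(None, "dbname=cumin user=cumin host=localhost")
    conn = database.get_connection()
    cursor = conn.cursor()
    cursor.execute("select 1")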
Modified: mgmt/newdata/cumin/python/cumin/grid/collector.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/grid/collector.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/grid/collector.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -21,7 +21,7 @@
class CollectorFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.mrg_grid.Collector
+ cls = app.model.mrg_grid.Collector
super(CollectorFrame, self).__init__(app, name, cls)
Modified: mgmt/newdata/cumin/python/cumin/grid/main.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/grid/main.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/grid/main.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -16,14 +16,14 @@
def __init__(self, app, name):
super(Module, self).__init__(app, name)
- # cls = app.rosemary.mrg_grid.Job
+ # cls = app.model.mrg_grid.Job
# JobHold(self, cls)
# JobRelease(self, cls)
# JobRemove(self, cls)
# JobSetAttribute(self, cls)
- # cls = app.rosemary.mrg_grid.Negotiator
+ # cls = app.model.mrg_grid.Negotiator
# NegotiatorEditDynamicQuota(self, cls)
# NegotiatorEditStaticQuota(self, cls)
Modified: mgmt/newdata/cumin/python/cumin/grid/negotiator.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/grid/negotiator.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/grid/negotiator.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -20,7 +20,7 @@
class NegotiatorFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.mrg_grid.Negotiator
+ cls = app.model.mrg_grid.Negotiator
super(NegotiatorFrame, self).__init__(app, name, cls)
@@ -41,7 +41,7 @@
class NegotiatorSelector(ObjectSelector):
def __init__(self, app, name, pool):
- cls = app.rosemary.mrg_grid.Negotiator
+ cls = app.model.mrg_grid.Negotiator
super(NegotiatorSelector, self).__init__(app, name, cls)
Modified: mgmt/newdata/cumin/python/cumin/grid/pool.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/grid/pool.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/grid/pool.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -34,7 +34,7 @@
class PoolSelector(ObjectSelector):
def __init__(self, app, name):
- cls = app.rosemary.mrg_grid.Collector
+ cls = app.model.mrg_grid.Collector
super(PoolSelector, self).__init__(app, name, cls)
@@ -48,7 +48,7 @@
class PoolFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.mrg_grid.Collector
+ cls = app.model.mrg_grid.Collector
super(PoolFrame, self).__init__(app, name, cls)
Modified: mgmt/newdata/cumin/python/cumin/grid/scheduler.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/grid/scheduler.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/grid/scheduler.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -25,7 +25,7 @@
class SchedulerFrame(ObjectFrame):
def __init__(self, app, name, pool):
- cls = app.rosemary.mrg_grid.Scheduler
+ cls = app.model.mrg_grid.Scheduler
super(SchedulerFrame, self).__init__(app, name, cls)
@@ -49,7 +49,7 @@
class SchedulerSelector(ObjectSelector):
def __init__(self, app, name, pool):
- cls = app.rosemary.mrg_grid.Scheduler
+ cls = app.model.mrg_grid.Scheduler
super(SchedulerSelector, self).__init__(app, name, cls)
Modified: mgmt/newdata/cumin/python/cumin/grid/slot.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/grid/slot.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/grid/slot.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -16,7 +16,7 @@
class SlotSelector(ObjectSelector):
def __init__(self, app, name, pool):
- cls = app.rosemary.mrg_grid.Slot
+ cls = app.model.mrg_grid.Slot
super(SlotSelector, self).__init__(app, name, cls)
@@ -34,7 +34,7 @@
class SlotFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.mrg_grid.Slot
+ cls = app.model.mrg_grid.Slot
super(SlotFrame, self).__init__(app, name, cls)
Modified: mgmt/newdata/cumin/python/cumin/grid/submission.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/grid/submission.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/grid/submission.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -13,13 +13,13 @@
class SubmissionFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.mrg_grid.Submission
+ cls = app.model.mrg_grid.Submission
super(SubmissionFrame, self).__init__(app, name, cls)
class SubmissionSelector(ObjectSelector):
def __init__(self, app, name):
- cls = app.rosemary.mrg_grid.Submission
+ cls = app.model.mrg_grid.Submission
super(SubmissionSelector, self).__init__(app, name, cls)
Modified: mgmt/newdata/cumin/python/cumin/grid/submitter.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/grid/submitter.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/grid/submitter.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -21,7 +21,7 @@
class SubmitterFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.mrg_grid.Submitter
+ cls = app.model.mrg_grid.Submitter
super(SubmitterFrame, self).__init__(app, name, cls)
@@ -32,7 +32,7 @@
class SubmitterSelector(ObjectSelector):
def __init__(self, app, name, scheduler):
- cls = app.rosemary.mrg_grid.Submitter
+ cls = app.model.mrg_grid.Submitter
super(SubmitterSelector, self).__init__(app, name, cls)
Modified: mgmt/newdata/cumin/python/cumin/inventory/main.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/inventory/main.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/inventory/main.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -18,8 +18,8 @@
app.main_page.main.inventory = self.frame
app.main_page.main.add_tab(self.frame)
- self.system_slots_page = SystemSlotMapPage(app, "systemslots.png")
- app.add_page(self.system_slots_page)
+ self.app.system_slots_page = SystemSlotMapPage(app, "systemslots.png")
+ self.app.add_page(self.app.system_slots_page)
class InventoryFrame(CuminFrame):
def __init__(self, app, name):
Modified: mgmt/newdata/cumin/python/cumin/inventory/system.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/inventory/system.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/inventory/system.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -17,7 +17,7 @@
class SystemSelector(ObjectSelector):
def __init__(self, app, name):
- cls = app.rosemary.com_redhat_sesame.Sysimage
+ cls = app.model.com_redhat_sesame.Sysimage
super(SystemSelector, self).__init__(app, name, cls)
@@ -32,7 +32,7 @@
class SystemFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.com_redhat_sesame.Sysimage
+ cls = app.model.com_redhat_sesame.Sysimage
super(SystemFrame, self).__init__(app, name, cls)
@@ -95,12 +95,10 @@
def render_image_href(self, session):
system = self.system.get(session)
- import main # XXX
-
- page = main.module.system_slots_page
+ page = self.app.system_slots_page
sess = Session(page)
- page.system.set(sess, system)
+ page.id.set(sess, system._id)
return sess.marshal()
@@ -112,10 +110,10 @@
def render_slots_href(self, session):
system = self.system.get(session)
- page = main.module.system_slots_page
+ page = self.app.system_slots_page
sess = Session(page)
- page.system.set(sess, system)
+ page.id.set(sess, system._id)
page.json.set(sess, "slots")
page.groups.set(sess, [])
@@ -225,14 +223,15 @@
class SystemSlotMapPage(SlotMapPage):
def __init__(self, app, name):
- self.system = SystemParameter(app, "id")
-        super(SystemSlotMapPage, self).__init__(app, name, self.system, "System")
+ super(SystemSlotMapPage, self).__init__(app, name, None, "System")
- self.add_parameter(self.system)
+ self.id = IntegerParameter(app, "id")
+ self.add_parameter(self.id)
def do_process(self, session):
super(SystemSlotMapPage, self).do_process(session)
- system = self.system.get(session)
+        cls = self.app.model.com_redhat_sesame.Sysimage
+ system = cls.get_object_by_id(session.cursor, self.id.get(session))
         self.slots.add_where_expr(session, "s.system = '%s'", system.nodeName)
Modified: mgmt/newdata/cumin/python/cumin/main.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/main.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/main.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -3,19 +3,20 @@
import sys
from mint import *
-from parsley.config import Config, ConfigParameter
from parsley.loggingex import *
-from rosemary.model import RosemaryModel
from stat import StatChartPage, StatStackedPage, \
StatFlashPage, FlashFullPage
from wooly import Application, Session, Page
from wooly.pages import ResourcePage
from wooly.parameters import IntegerParameter
-from config import *
+from admin import *
+from database import *
from model import *
from objectselector import *
from objecttask import *
+from server import *
+from session import *
from sqladapter import *
from user import *
from widgets import *
@@ -26,30 +27,66 @@
log = logging.getLogger("cumin")
class Cumin(Application):
- def __init__(self, config):
+ def __init__(self, home, broker_uri, database_dsn,
+ host="localhost", port=45672):
super(Cumin, self).__init__()
- self.log = log
+ self.home = home
- self.config = config
+ model_dir = os.path.join(self.home, "model")
- self.devel_enabled = self.config.debug
+ self.model = CuminModel(self, model_dir)
+ self.session = CuminSession(self, broker_uri)
+ self.database = CuminDatabase(self, database_dsn)
+ self.server = CuminServer(self, host, port)
+ self.admin = CuminAdmin(self)
+ self.add_resource_dir(os.path.join(self.home, "resources-wooly"))
+ self.add_resource_dir(os.path.join(self.home, "resources"))
+
self.modules = list()
self.modules_by_name = dict()
- self.home = self.config.home
+ self.user = None
+ self.operator_email = None
- self.add_resource_dir(os.path.join(self.home, "resources-wooly"))
- self.add_resource_dir(os.path.join(self.home, "resources"))
+ def check(self):
+ log.info("Checking %s", self)
- self.model = CuminModel(self, self.config.data)
+ if not os.path.isdir(self.home):
+ msg = "Cumin home '%s' not found or not a directory"
+ raise Exception(msg % self.home)
- self.rosemary = RosemaryModel()
- self.rosemary.sql_logging_enabled = True
- self.rosemary.load_xml_dir(os.path.join(self.home, "xml"))
- self.rosemary.init()
+ self.model.check()
+ def init(self):
+ log.info("Initializing %s", self)
+
+ self.model.init()
+ self.session.init()
+ self.database.init()
+ self.server.init()
+
+ self.add_pages()
+
+ import account
+ import messaging
+ import grid
+ import inventory
+ import usergrid
+
+ account.Module(self, "account")
+ messaging.Module(self, "messaging")
+ grid.Module(self, "grid")
+ inventory.Module(self, "inventory")
+ usergrid.Module(self, "usergrid")
+
+ for module in self.modules:
+ module.init()
+
+ super(Cumin, self).init()
+
+ def add_pages(self):
self.main_page = MainPage(self, "index.html")
self.main_page.page_html_class = "Cumin"
@@ -67,40 +104,18 @@
self.resource_page.protected = False
- def check(self):
- if not os.path.isdir(self.home):
- raise Exception \
- ("Error: cumin home '%s' not found or not a directory"
\
- % self.home)
+ def start(self):
+ log.info("Starting %s", self)
- self.model.check()
+ self.session.start()
+ self.server.start()
- def do_init(self):
- import account
- import messaging
- import grid
- import inventory
- #import usergrid
+ def stop(self):
+ log.info("Stopping %s", self)
- account.Module(self, "account")
- messaging.Module(self, "messaging")
- grid.Module(self, "grid")
- inventory.Module(self, "inventory")
- #usergrid.Module(self, "usergrid")
+ self.server.stop()
+ self.session.stop()
- for module in self.modules:
- module.init()
-
- self.model.init()
-
- super(Cumin, self).do_init()
-
- def do_start(self):
- self.model.start()
-
- def do_stop(self):
- self.model.stop()
-
class CuminModule(object):
def __init__(self, app, name):
self.app = app
@@ -156,7 +171,7 @@
def do_process(self, session):
super(OverviewFrame, self).do_process(session)
- count = len(self.app.model.mint.model.qmf_brokers)
+ count = len(self.app.session.qmf_brokers)
if count == 0:
self.mode.set(session, self.notice)
@@ -211,7 +226,7 @@
sum(s."msgTotalEnqueues")) / (count(1)-1)) / 30 as
avg_60"""
queue_id_col = self.table._columns_by_name["_id"]
vhostRef_col = self.table._columns_by_name["_vhostRef_id"]
- vhost_table = self.app.rosemary.org_apache_qpid_broker.Vhost.sql_table
+ vhost_table = self.app.model.org_apache_qpid_broker.Vhost.sql_table
vhost_id_col = vhost_table._columns_by_name["_id"]
vhost_brokerRef_col = vhost_table._columns_by_name["_brokerRef_id"]
@@ -262,7 +277,7 @@
class TopQueueTable(TopTable):
def __init__(self, app, name):
- cls = app.rosemary.org_apache_qpid_broker.Queue
+ cls = app.model.org_apache_qpid_broker.Queue
adapter = TopQueueAdapter(app, cls)
super(TopQueueTable, self).__init__(app, name, adapter)
@@ -302,7 +317,7 @@
class TopSystemTable(TopObjectTable):
def __init__(self, app, name):
- cls = app.rosemary.com_redhat_sesame.Sysimage
+ cls = app.model.com_redhat_sesame.Sysimage
super(TopSystemTable, self).__init__(app, name, cls)
@@ -318,14 +333,15 @@
class TopSubmissionTable(TopObjectTable):
def __init__(self, app, name):
- cls = app.rosemary.mrg_grid.Submission
+ cls = app.model.mrg_grid.Submission
super(TopSubmissionTable, self).__init__(app, name, cls)
col = self.NameColumn(app, cls.Name.name, cls.Name, cls._id, None)
self.add_column(col)
- col = self.DurationColumn(app, cls._qmf_create_time.name, cls._qmf_create_time)
+ col = self.DurationColumn(app, cls._qmf_create_time.name,
+ cls._qmf_create_time)
self.add_column(col)
self.sort_col = cls._qmf_create_time.name
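
Note: Cumin no longer reads a Config object; its collaborators (model, session, database, server, admin) are built from explicit constructor arguments and driven through a check/init/start/stop lifecycle. A sketch of how a driver script might use it, assuming the signature shown above; the home path, broker URI, and DSN are placeholder values, not taken from this commit:

    from cumin.main import Cumin

    app = Cumin("/usr/share/cumin",
                broker_uri="amqp://localhost",
                database_dsn="dbname=cumin user=cumin host=localhost")

    app.check()    # home dir and model files present?
    app.init()     # load the model, wire up pages and modules
    app.start()    # open the QMF session and start the web server
    try:
        pass       # serve until interrupted
    finally:
        app.stop()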
Modified: mgmt/newdata/cumin/python/cumin/messaging/binding.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/messaging/binding.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/messaging/binding.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -32,7 +32,7 @@
class BindingFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.org_apache_qpid_broker.Binding
+ cls = app.model.org_apache_qpid_broker.Binding
super(BindingFrame, self).__init__(app, name, cls)
@@ -72,9 +72,9 @@
class BindingData(ObjectSqlAdapter):
def __init__(self, app):
- binding = app.rosemary.org_apache_qpid_broker.Binding
- exchange = app.rosemary.org_apache_qpid_broker.Exchange
- queue = app.rosemary.org_apache_qpid_broker.Queue
+ binding = app.model.org_apache_qpid_broker.Binding
+ exchange = app.model.org_apache_qpid_broker.Exchange
+ queue = app.model.org_apache_qpid_broker.Queue
super(BindingData, self).__init__(app, binding)
@@ -83,9 +83,9 @@
class BindingSelector(ObjectSelector):
def __init__(self, app, name):
- binding = app.rosemary.org_apache_qpid_broker.Binding
- exchange = app.rosemary.org_apache_qpid_broker.Exchange
- queue = app.rosemary.org_apache_qpid_broker.Queue
+ binding = app.model.org_apache_qpid_broker.Binding
+ exchange = app.model.org_apache_qpid_broker.Exchange
+ queue = app.model.org_apache_qpid_broker.Queue
data = BindingData(app)
@@ -106,7 +106,6 @@
(app, "queue", queue.name, queue._id, frame)
self.add_column(self.queue_column)
- self.add_attribute_column(binding.arguments)
self.add_attribute_column(binding.origin)
self.add_attribute_column(binding.msgMatched)
Modified: mgmt/newdata/cumin/python/cumin/messaging/broker.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/messaging/broker.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/messaging/broker.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -23,10 +23,10 @@
class BrokerData(ObjectSqlAdapter):
def __init__(self, app):
- broker = app.rosemary.org_apache_qpid_broker.Broker
- system = app.rosemary.org_apache_qpid_broker.System
- cluster = app.rosemary.org_apache_qpid_cluster.Cluster
- mapping = app.rosemary.com_redhat_cumin.BrokerGroupMapping
+ broker = app.model.org_apache_qpid_broker.Broker
+ system = app.model.org_apache_qpid_broker.System
+ cluster = app.model.org_apache_qpid_cluster.Cluster
+ mapping = app.model.com_redhat_cumin.BrokerGroupMapping
super(BrokerData, self).__init__(app, broker)
@@ -53,9 +53,9 @@
class BrokerSelector(ObjectSelector):
def __init__(self, app, name, data):
- broker = app.rosemary.org_apache_qpid_broker.Broker
- system = app.rosemary.org_apache_qpid_broker.System
- cluster = app.rosemary.org_apache_qpid_cluster.Cluster
+ broker = app.model.org_apache_qpid_broker.Broker
+ system = app.model.org_apache_qpid_broker.System
+ cluster = app.model.org_apache_qpid_cluster.Cluster
data = BrokerData(app)
@@ -92,7 +92,7 @@
class BrokerFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.org_apache_qpid_broker.Broker
+ cls = app.model.org_apache_qpid_broker.Broker
super(BrokerFrame, self).__init__(app, name, cls)
@@ -138,7 +138,7 @@
self.broker.set(session, broker)
- cls = self.app.rosemary.org_apache_qpid_broker.Vhost
+ cls = self.app.model.org_apache_qpid_broker.Vhost
args = {"_brokerRef_id": id, "name": "/"}
for obj in cls.get_selection(session.cursor, **args):
@@ -257,7 +257,7 @@
self.add_child(self.brokers)
def render_title(self, session, *args):
- return "Brokers %s" % fmt_count(Broker.select().count())
+ return "Brokers"
def render_clear_filters_href(self, session):
branch = session.branch()
@@ -265,7 +265,8 @@
return branch.marshal()
def render_group_filters(self, session):
- groups = BrokerGroup.select()
+ cls = self.app.model.com_redhat_cumin.BrokerGroup
+ groups = cls.get_selection(session.cursor)
return self._render_filters(session, groups, self.group_tmpl)
def render_group_link(self, session, group):
@@ -362,15 +363,15 @@
vhost = self.object.get(session)
- cls = self.app.rosemary.com_redhat_cumin.BrokerGroupMapping
+ cls = self.app.model.com_redhat_cumin.BrokerGroupMapping
mappings = cls.get_selection(session.cursor, _broker_id=vhost._brokerRef_id)
checked_groups = [x._group_id for x in mappings]
self.groups.inputs.set(session, checked_groups)
def process_submit(self, session):
vhost = self.object.get(session)
- cls = self.app.rosemary.org_apache_qpid_broker.Broker
- broker = cls.get_object(session.cursor, vhost._brokerRef_id)
+ cls = self.app.model.org_apache_qpid_broker.Broker
+ broker = cls.get_object_by_id(session.cursor, vhost._brokerRef_id)
groups = self.groups.get(session)
self.task.invoke(session, broker, groups)
@@ -381,7 +382,7 @@
return "Groups"
def do_get_items(self, session):
- cls = self.app.rosemary.com_redhat_cumin.BrokerGroup
+ cls = self.app.model.com_redhat_cumin.BrokerGroup
groups = cls.get_selection(session.cursor)
return (FormInputItem(x._id, title=x.name) for x in groups)
@@ -395,14 +396,14 @@
return "Add to groups"
def do_invoke(self, invoc, broker, groups):
- conn = self.app.model.get_sql_connection()
+ conn = self.app.database.get_connection()
cursor = conn.cursor()
- cls = self.app.rosemary.com_redhat_cumin.BrokerGroup
+ cls = self.app.model.com_redhat_cumin.BrokerGroup
all_groups = cls.get_selection(cursor)
selected_ids = [x._id for x in groups]
- cls = self.app.rosemary.com_redhat_cumin.BrokerGroupMapping
+ cls = self.app.model.com_redhat_cumin.BrokerGroupMapping
try:
for group in all_groups:
existing_mapping = cls.get_selection(cursor, _broker_id=broker._id,
_group_id=group._id)
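
Note: these hunks converge on one data-access pattern: borrow a raw DB-API connection from app.database.get_connection(), open a cursor, and query through the Rosemary class helpers (get_selection, get_object_by_id) instead of SQLObject. A sketch of that pattern, with names taken from the hunks above and the cursor handling assumed:

    def broker_group_ids(app, broker):
        # Which groups is this broker mapped into?
        cls = app.model.com_redhat_cumin.BrokerGroupMapping

        conn = app.database.get_connection()
        cursor = conn.cursor()

        try:
            mappings = cls.get_selection(cursor, _broker_id=broker._id)
            return [x._group_id for x in mappings]
        finally:
            cursor.close()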
Modified: mgmt/newdata/cumin/python/cumin/messaging/brokergroup.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/messaging/brokergroup.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/messaging/brokergroup.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -18,7 +18,7 @@
class BrokerGroupSelector(ObjectSelector):
def __init__(self, app, name):
- cls = app.rosemary.com_redhat_cumin.BrokerGroup
+ cls = app.model.com_redhat_cumin.BrokerGroup
super(BrokerGroupSelector, self).__init__(app, name, cls)
@@ -35,7 +35,7 @@
return "Remove"
def do_invoke(self, invoc, group):
- conn = self.app.model.get_sql_connection()
+ conn = self.app.database.get_connection()
cursor = conn.cursor()
try:
@@ -70,7 +70,7 @@
class BrokerGroupFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.com_redhat_cumin.BrokerGroup
+ cls = app.model.com_redhat_cumin.BrokerGroup
super(BrokerGroupFrame, self).__init__(app, name, cls)
@@ -86,7 +86,7 @@
def __init__(self, app, name, task):
super(BrokerGroupForm, self).__init__(app, name, task)
- self.name_ = UniqueNameField(app, "name", BrokerGroup) # XXX
+ self.name_ = StringField(app, "name")
self.add_field(self.name_)
self.description = self.Description(app, "description")
@@ -106,7 +106,7 @@
return "Add broker group"
def do_invoke(self, invoc, obj, name, description):
- conn = self.app.model.get_sql_connection()
+ conn = self.app.database.get_connection()
cursor = conn.cursor()
group = self.cls.create_object(cursor)
@@ -155,7 +155,7 @@
group.name = name
group.description = description
- conn = self.app.model.get_sql_connection()
+ conn = self.app.database.get_connection()
cursor = conn.cursor()
try:
@@ -197,7 +197,7 @@
self.app.main_page.main.messaging.view.show(session)
def do_invoke(self, invoc, group):
- conn = self.app.model.get_sql_connection()
+ conn = self.app.database.get_connection()
cursor = conn.cursor()
try:
Modified: mgmt/newdata/cumin/python/cumin/messaging/brokerlink.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/messaging/brokerlink.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/messaging/brokerlink.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -20,7 +20,7 @@
class BrokerLinkFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.org_apache_qpid_broker.Link
+ cls = app.model.org_apache_qpid_broker.Link
super(BrokerLinkFrame, self).__init__(app, name, cls)
@@ -41,7 +41,7 @@
class BrokerLinkSelector(ObjectSelector):
def __init__(self, app, name, vhost):
- cls = app.rosemary.org_apache_qpid_broker.Link
+ cls = app.model.org_apache_qpid_broker.Link
super(BrokerLinkSelector, self).__init__(app, name, cls)
@@ -71,7 +71,7 @@
class RouteSelector(ObjectSelector):
def __init__(self, app, name, link):
- cls = app.rosemary.org_apache_qpid_broker.Bridge
+ cls = app.model.org_apache_qpid_broker.Bridge
super(RouteSelector, self).__init__(app, name, cls)
Modified: mgmt/newdata/cumin/python/cumin/messaging/connection.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/messaging/connection.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/messaging/connection.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -17,7 +17,7 @@
class ConnectionFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.org_apache_qpid_broker.Connection
+ cls = app.model.org_apache_qpid_broker.Connection
super(ConnectionFrame, self).__init__(app, name, cls)
@@ -54,7 +54,7 @@
class ConnectionSelector(ObjectSelector):
def __init__(self, app, name, vhost):
- cls = app.rosemary.org_apache_qpid_broker.Connection
+ cls = app.model.org_apache_qpid_broker.Connection
super(ConnectionSelector, self).__init__(app, name, cls)
@@ -150,7 +150,7 @@
class SessionFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.org_apache_qpid_broker.Session
+ cls = app.model.org_apache_qpid_broker.Session
super(SessionFrame, self).__init__(app, name, cls)
@@ -173,7 +173,7 @@
class SessionSelector(ObjectSelector):
def __init__(self, app, name, conn):
- cls = app.rosemary.org_apache_qpid_broker.Session
+ cls = app.model.org_apache_qpid_broker.Session
super(SessionSelector, self).__init__(app, name, cls)
Modified: mgmt/newdata/cumin/python/cumin/messaging/exchange.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/messaging/exchange.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/messaging/exchange.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -20,7 +20,7 @@
class ExchangeFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.org_apache_qpid_broker.Exchange
+ cls = app.model.org_apache_qpid_broker.Exchange
super(ExchangeFrame, self).__init__(app, name, cls)
@@ -71,7 +71,7 @@
class ExchangeSelector(ObjectSelector):
def __init__(self, app, name, vhost):
- cls = app.rosemary.org_apache_qpid_broker.Exchange
+ cls = app.model.org_apache_qpid_broker.Exchange
super(ExchangeSelector, self).__init__(app, name, cls)
Modified: mgmt/newdata/cumin/python/cumin/messaging/queue.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/messaging/queue.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/messaging/queue.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -23,7 +23,7 @@
class QueueFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.org_apache_qpid_broker.Queue
+ cls = app.model.org_apache_qpid_broker.Queue
super(QueueFrame, self).__init__(app, name, cls)
@@ -63,7 +63,7 @@
class QueueSelector(ObjectSelector):
def __init__(self, app, name, vhost):
- cls = app.rosemary.org_apache_qpid_broker.Queue
+ cls = app.model.org_apache_qpid_broker.Queue
super(QueueSelector, self).__init__(app, name, cls)
@@ -122,9 +122,7 @@
def do_invoke(self, invoc, queue, name, durable, args):
session = self.app.model.get_session_by_object(queue)
- session.queue_declare(queue=name,
- durable=durable,
- arguments=args)
+ session.queue_declare(queue=name, durable=durable, arguments=args)
session.sync()
invoc.end()
@@ -500,7 +498,7 @@
class JournalAttribute(Attribute):
def get(self, session):
queue = self.widget.object.get(session)
- cls = self.app.rosemary.com_redhat_rhm_store.Journal
+ cls = self.app.model.com_redhat_rhm_store.Journal
journals = cls.get_selection(session.cursor,
_queueRef_id=queue._id)
@@ -576,7 +574,7 @@
class QueueSearchInputSet(IncrementalSearchInput):
def do_get_items(self, session):
- cls = self.app.rosemary.org_apache_qpid_broker.Queue
+ cls = self.app.model.org_apache_qpid_broker.Queue
vhost = self.form.get_object(session)
vhostid = vhost._id
queues = cls.get_selection(session.cursor, _vhostRef_id=vhostid)
@@ -599,11 +597,11 @@
return "Move messages"
def do_invoke(self, invoc, vhost, src, dst, count):
- cls = self.app.rosemary.org_apache_qpid_broker.Broker
- conn = self.app.model.get_sql_connection()
+ cls = self.app.model.org_apache_qpid_broker.Broker
+ conn = self.app.database.get_connection()
cursor = conn.cursor()
try:
- broker = cls.get_object(cursor, vhost._brokerRef_id)
+ broker = cls.get_object_by_id(cursor, vhost._brokerRef_id)
finally:
cursor.close()
self.qmf_call(invoc, broker, "queueMoveMessages", src, dst, count)
@@ -682,8 +680,8 @@
def get_object(self, session):
# task expects a vhost object
queue = self.object.get(session)
- cls = self.app.rosemary.org_apache_qpid_broker.Vhost
- vhost = cls.get_object(session.cursor, queue._vhostRef_id)
+ cls = self.app.model.org_apache_qpid_broker.Vhost
+ vhost = cls.get_object_by_id(session.cursor, queue._vhostRef_id)
return vhost
class QueueSrcField(FormField):
Modified: mgmt/newdata/cumin/python/cumin/messaging/subscription.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/messaging/subscription.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/messaging/subscription.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -7,7 +7,7 @@
class SubscriptionSelector(ObjectSelector):
def __init__(self, app, name, queue):
- cls = app.rosemary.org_apache_qpid_broker.Subscription
+ cls = app.model.org_apache_qpid_broker.Subscription
super(SubscriptionSelector, self).__init__(app, name, cls)
@@ -23,7 +23,7 @@
class SubscriptionFrame(ObjectFrame):
def __init__(self, app, name):
- cls = app.rosemary.org_apache_qpid_broker.Subscription
+ cls = app.model.org_apache_qpid_broker.Subscription
super(SubscriptionFrame, self).__init__(app, name, cls)
Modified: mgmt/newdata/cumin/python/cumin/messaging/test.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/messaging/test.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/messaging/test.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,6 +1,3 @@
-from mint import *
-from mint.schema import *
-
from cumin.test import *
from cumin.util import *
Modified: mgmt/newdata/cumin/python/cumin/model.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/model.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/model.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,11 +1,7 @@
-import logging
-
-from datetime import datetime, timedelta
from decimal import *
-from mint import Mint, MintConfig
+from mint import *
+from rosemary.model import *
from struct import unpack, calcsize
-from sqladapter import *
-from sqlobject import sqlhub
from threading import Thread, Lock
from types import *
from wooly import *
@@ -15,97 +11,82 @@
from formats import *
from parameters import *
+from sqladapter import *
from util import *
import wooly
log = logging.getLogger("cumin.model")
-class CuminModel(object):
- def __init__(self, app, data_uri):
+class CuminModel(RosemaryModel):
+ def __init__(self, app, model_dir):
+ super(CuminModel, self).__init__()
+
self.app = app
+ self.model_dir = model_dir
- config = app.config
-
- self.mint = Mint(config)
- self.mint.update_enabled = False
- self.mint.expire_enabled = False
- self.mint.vacuum_enabled = False
-
self.lock = Lock()
- self.classes = list()
- self.invocations = set()
+ # int seq => callable
+ self.outstanding_method_calls = dict()
- self.frame = None
-
self.tasks = list()
self.task_invocations = list()
self.limits_by_negotiator = dict()
self.jobs_by_submission = dict()
- # Messaging
+ # # Messaging
- CuminBroker(self)
- CuminQueue(self)
- CuminExchange(self)
- CuminBinding(self)
- CuminConnection(self)
- CuminSession(self)
- CuminLink(self)
- CuminRoute(self)
- CuminBrokerStoreModule(self)
- CuminJournal(self)
- CuminBrokerAclModule(self)
- CuminBrokerClusterModule(self)
+ # CuminBroker(self)
+ # CuminQueue(self)
+ # CuminExchange(self)
+ # CuminBinding(self)
+ # CuminConnection(self)
+ # CuminSession(self)
+ # CuminLink(self)
+ # CuminRoute(self)
+ # CuminBrokerStoreModule(self)
+ # CuminJournal(self)
+ # CuminBrokerAclModule(self)
+ # CuminBrokerClusterModule(self)
- # Grid
+ # # Grid
- CuminScheduler(self)
- CuminSubmission(self)
- CuminSubmitter(self)
- CuminJob(self)
- CuminJobGroup(self)
- CuminLimit(self)
+ # CuminScheduler(self)
+ # CuminSubmission(self)
+ # CuminSubmitter(self)
+ # CuminJob(self)
+ # CuminJobGroup(self)
+ # CuminLimit(self)
- CuminCollector(self)
- CuminNegotiator(self)
+ # CuminCollector(self)
+ # CuminNegotiator(self)
- # Systems
+ # # Systems
- CuminSystem(self)
- CuminSlot(self)
- CuminGrid(self)
+ # CuminSystem(self)
+ # CuminSlot(self)
+ # CuminGrid(self)
- # Other
+ # # Other
- CuminSubject(self)
+ # CuminSubject(self)
def check(self):
- self.mint.check()
+ log.info("Checking %s", self)
- def init(self):
- self.mint.init()
+ assert os.path.isdir(self.model_dir)
- self.frame = self.app.main_page.main
+ log.debug("Model dir exists at '%s'", self.model_dir)
- for cls in self.classes:
- cls.init()
+ def init(self):
+ log.info("Initializing %s", self)
- def start(self):
- self.mint.start()
+ self.load_model_dir(self.model_dir)
- def stop(self):
- self.mint.stop()
+ super(CuminModel, self).init()
- def get_sql_connection(self):
- return sqlhub.getConnection().getConnection()
-
- def add_class(self, cls):
- self.classes.append(cls)
- setattr(self, cls.cumin_name, cls)
-
def get_ad_groups(self):
return AdProperty.get_ad_groups()
@@ -122,10 +103,6 @@
def show_main(self, session):
return self.app.main_page.main.show(session)
- def get_main_pool(self):
- for coll in Collector.select():
- return Pool(coll)
-
def get_session_by_object(self, object):
assert object
@@ -145,7 +122,7 @@
store = NegotiatorLimitStore(self, negotiator)
store.start_updates()
- self.app.model.limits_by_negotiator[negotiator] = store
+ self.limits_by_negotiator[negotiator] = store
sleep(1)
@@ -165,7 +142,7 @@
store = SubmissionJobStore(self, submission)
store.start_updates()
- self.app.model.jobs_by_submission[submission] = store
+ self.jobs_by_submission[submission] = store
sleep(1)
@@ -396,7 +373,7 @@
pass
def get_connection(self):
- return self.model.get_sql_connection()
+ return self.app.database.get_connection()
def get_db_name(self):
name = self.name
@@ -1364,35 +1341,35 @@
def get_object_name(self, conn):
return conn.address
-class CuminSession(RemoteClass):
- def __init__(self, model):
- super(CuminSession, self).__init__(model, "session",
- Session, SessionStats)
+# class CuminSession(RemoteClass):
+# def __init__(self, model):
+# super(CuminSession, self).__init__(model, "session",
+# Session, SessionStats)
- prop = CuminProperty(self, "name")
- prop.title = "Name"
+# prop = CuminProperty(self, "name")
+# prop.title = "Name"
- prop = CuminProperty(self, "channelId")
- prop.title = "Channel ID"
+# prop = CuminProperty(self, "channelId")
+# prop.title = "Channel ID"
- prop = CuminProperty(self, "detachedLifespan")
- prop.title = "Detached Lifespan"
+# prop = CuminProperty(self, "detachedLifespan")
+# prop.title = "Detached Lifespan"
- stat = CuminStat(self, "expireTime")
- stat.title = "Expiration"
- stat.category = "general"
+# stat = CuminStat(self, "expireTime")
+# stat.title = "Expiration"
+# stat.category = "general"
- stat = CuminStat(self, "framesOutstanding")
- stat.title = "Frames Outstanding"
- stat.unit = "frame"
- stat.category = "general"
+# stat = CuminStat(self, "framesOutstanding")
+# stat.title = "Frames Outstanding"
+# stat.unit = "frame"
+# stat.category = "general"
- stat = CuminStat(self, "attached")
- stat.title = "Attached"
- stat.category = "general"
+# stat = CuminStat(self, "attached")
+# stat.title = "Attached"
+# stat.category = "general"
- def get_title(self, session):
- return "Session"
+# def get_title(self, session):
+# return "Session"
class CuminLink(RemoteClass):
def __init__(self, model):
Modified: mgmt/newdata/cumin/python/cumin/objectframe.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/objectframe.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/objectframe.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -74,7 +74,7 @@
super(ObjectFrame, self).do_process(session)
def get_object(self, session, id):
- return self.cls.get_object(session.cursor, id)
+ return self.cls.get_object_by_id(session.cursor, id)
class ObjectAttributes(Widget):
def __init__(self, app, name, object):
Modified: mgmt/newdata/cumin/python/cumin/objecttask.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/objecttask.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/objecttask.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -78,7 +78,7 @@
invoc.end()
- agent = self.app.model.mint.model.agents[obj._qmf_agent_id]
+ agent = self.app.model.agents[obj._qmf_agent_id]
agent.call_method(completion, obj, meth, *args)
def exception(self, invoc, e):
Modified: mgmt/newdata/cumin/python/cumin/parameters.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/parameters.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/parameters.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -34,11 +34,11 @@
self.cls = cls
def do_unmarshal(self, string):
- conn = self.app.model.get_sql_connection()
+ conn = self.app.database.get_connection()
cursor = conn.cursor()
try:
- return self.cls.get_object(cursor, int(string))
+ return self.cls.get_object_by_id(cursor, int(string))
finally:
cursor.close()
@@ -55,17 +55,17 @@
def do_get(self, session):
id = self.id_parameter.get(session)
- conn = self.widget.app.model.get_sql_connection()
+ conn = self.widget.app.database.get_connection()
cursor = conn.cursor()
try:
- return self.cls.get_object(cursor, id)
+ return self.cls.get_object_by_id(cursor, id)
finally:
cursor.close()
class VhostParameter(RosemaryObjectParameter):
def __init__(self, app, name):
- cls = app.rosemary.org_apache_qpid_broker.Vhost
+ cls = app.model.org_apache_qpid_broker.Vhost
super(VhostParameter, self).__init__(app, name, cls)
@@ -94,13 +94,13 @@
class NewBrokerGroupParameter(Parameter):
def do_unmarshal(self, string):
id = int(string)
- cls = self.app.rosemary.com_redhat_cumin.BrokerGroup
+ cls = self.app.model.com_redhat_cumin.BrokerGroup
- conn = self.app.model.get_sql_connection()
+ conn = self.app.database.get_connection()
cursor = conn.cursor()
try:
- return cls.get_object(cursor, id)
+ return cls.get_object_by_id(cursor, id)
finally:
cursor.close()
Added: mgmt/newdata/cumin/python/cumin/session.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/session.py (rev 0)
+++ mgmt/newdata/cumin/python/cumin/session.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,74 @@
+from model import *
+from util import *
+
+from qmf.console import Console, Session
+
+log = logging.getLogger("cumin.session")
+
+class CuminSession(object):
+ def __init__(self, app, broker_uri):
+ self.app = app
+ self.broker_uri = broker_uri
+
+ self.qmf_session = None
+ self.qmf_brokers = list()
+
+ def add_broker(self, uri):
+ log.info("Adding QMF broker at %s", uri)
+
+ assert self.qmf_session
+
+ qmf_broker = self.qmf_session.addBroker(uri)
+
+ name = qmf_broker.thread.__class__.__name__
+ qmf_broker.thread.name = "%s(%s)" % (name, uri)
+
+ self.qmf_brokers.append(qmf_broker)
+
+ def check(self):
+ log.info("Checking %s", self)
+
+ def init(self):
+ log.info("Initializing %s", self)
+
+ def start(self):
+ log.info("Starting %s", self)
+
+ assert self.qmf_session is None
+
+ self.qmf_session = Session(CuminConsole(self.app.model),
+ manageConnections=True,
+ rcvObjects=False)
+
+ self.add_broker(self.broker_uri)
+
+ def stop(self):
+ log.info("Stopping %s", self)
+
+ for qmf_broker in self.qmf_brokers:
+ self.qmf_session.delBroker(qmf_broker)
+
+ def __repr__(self):
+ return "%s(%s)" % (self.__class__.__name__, self.broker_uri)
+
+class CuminConsole(Console):
+ def __init__(self, model):
+ self.model = model
+
+ def newAgent(self, qmf_agent):
+ log.info("New agent %s", qmf_agent)
+
+ def delAgent(self, qmf_agent):
+ log.info("Deleting agent %s", qmf_agent)
+
+ def methodResponse(self, broker, seq, response):
+ log.info("Method response for request %i received from %s",
+ seq, broker)
+ log.debug("Response: %s", response)
+
+ self.model.lock.acquire()
+ try:
+ callback = self.model.outstanding_method_calls.pop(seq)
+ callback(response.text, response.outArgs)
+ finally:
+ self.model.lock.release()
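
Note: CuminConsole.methodResponse assumes some caller has registered a callback under the QMF sequence number in model.outstanding_method_calls. That registration is not part of this file; a hypothetical sketch of the handshake (the helper name and its location are assumptions, not code from this commit):

    def register_method_call(model, seq, callback):
        # Record the callback under the QMF sequence number so that
        # methodResponse() can find and invoke it when the reply arrives.
        model.lock.acquire()
        try:
            model.outstanding_method_calls[seq] = callback
        finally:
            model.lock.release()

    # When the response arrives, CuminConsole.methodResponse pops the entry
    # for that seq and calls it as callback(response.text, response.outArgs).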
Modified: mgmt/newdata/cumin/python/cumin/sqladapter.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/sqladapter.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/sqladapter.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -20,7 +20,7 @@
def get_count(self, values):
# XXX urgh. I want session in here
- conn = self.app.model.get_sql_connection()
+ conn = self.app.database.get_connection()
cursor = conn.cursor()
try:
@@ -45,7 +45,7 @@
def get_data(self, values, options):
sql_options = self.get_sql_options(options)
- conn = self.app.model.get_sql_connection()
+ conn = self.app.database.get_connection()
cursor = conn.cursor()
try:
@@ -133,9 +133,9 @@
class TestData(ObjectSqlAdapter):
def __init__(self, app):
- broker = app.rosemary.org_apache_qpid_broker.Broker
- system = app.rosemary.org_apache_qpid_broker.System
- cluster = app.rosemary.org_apache_qpid_cluster.Cluster
+ broker = app.model.org_apache_qpid_broker.Broker
+ system = app.model.org_apache_qpid_broker.System
+ cluster = app.model.org_apache_qpid_cluster.Cluster
super(TestData, self).__init__(app, broker)
Modified: mgmt/newdata/cumin/python/cumin/stat.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/stat.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/stat.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -277,7 +277,7 @@
def get_adapter_stats(self, session):
rpackage = self.rosemary_package.get(session)
rclass = self.rosemary_class.get(session)
- rosemary_package = self.app.rosemary._packages_by_name[rpackage]
+ rosemary_package = self.app.model._packages_by_name[rpackage]
rosemary_class = rosemary_package._classes_by_name[rclass]
id = str(self.id.get(session))
@@ -500,15 +500,15 @@
if not obj:
rpackage = self.widget.rosemary_package.get(session)
rclass = self.widget.rosemary_class.get(session)
- rosemary_package = self.app.rosemary._packages_by_name[rpackage]
+ rosemary_package = self.app.model._packages_by_name[rpackage]
rosemary_class = rosemary_package._classes_by_name[rclass]
id = self.widget.id.get(session)
- conn = self.app.model.get_sql_connection()
+ conn = self.app.database.get_connection()
cursor = conn.cursor()
try:
- obj = rosemary_class.get_object(cursor, id)
+ obj = rosemary_class.get_object_by_id(cursor, id)
finally:
cursor.close()
@@ -1164,17 +1164,3 @@
writer = Writer()
chart.write(writer)
return writer.to_string()
-
-if __name__ == "__main__":
- import sys
-
- try:
- connuri = sys.argv[1]
- conn = connectionForURI(connuri)
- sqlhub.processConnection = conn
- except IndexError:
- print "Usage: stat.py DATABASE-URI"
- sys.exit(1)
-
- #data = DemoData()
- #data.load()
Modified: mgmt/newdata/cumin/python/cumin/test.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/test.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/test.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,4 +1,3 @@
-from mint import Subject
from parsley.test import *
from wooly import *
Modified: mgmt/newdata/cumin/python/cumin/tools.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/tools.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/tools.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -52,12 +52,6 @@
if os.getuid() == 0:
os.setuid(os.stat(sys.argv[0]).st_uid)
- try:
- import psyco
- psyco.full()
- except ImportError:
- pass
-
self.config.init()
def run(self):
Modified: mgmt/newdata/cumin/python/cumin/usergrid/model.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/usergrid/model.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/usergrid/model.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -15,7 +15,7 @@
return self.user.get(session).name
def get_connection(self, session):
- return self.app.model.get_sql_connection()
+ return self.app.database.get_connection()
def get_object(self, session):
cursor = self.execute(session)
Modified: mgmt/newdata/cumin/python/cumin/util.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/util.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/util.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -5,12 +5,15 @@
from crypt import crypt
from datetime import datetime, timedelta
from qpid.datatypes import uuid4
+from pprint import *
from random import randint
from random import sample
from threading import Thread, Event
from time import mktime, time, sleep
from xml.sax.saxutils import escape as do_xml_escape
+from parsley.threadingex import print_threads
+
def xml_escape(string):
if string:
return do_xml_escape(string)
Modified: mgmt/newdata/cumin/python/cumin/widgets.py
===================================================================
--- mgmt/newdata/cumin/python/cumin/widgets.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/cumin/python/cumin/widgets.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,4 +1,3 @@
-from datetime import datetime, timedelta
from wooly import *
from wooly.pages import *
from wooly.datatable import *
@@ -6,7 +5,6 @@
from wooly.forms import *
from wooly.sql import *
from wooly.tables import *
-from mint.schema import *
from objecttask import *
from objectselector import *
@@ -27,7 +25,7 @@
class CuminSqlDataSet(SqlDataSet):
def get_connection(self, session):
- return self.app.model.get_sql_connection()
+ return self.app.database.get_connection()
class CuminHeartBeat(Widget):
""" the intent is to add stuff here """
@@ -118,7 +116,7 @@
# pages.append(self.app.main_page)
pages.append(self.app.main_page)
- #pages.append(self.app.user_grid_page) XXX
+ pages.append(self.app.user_grid_page)
pages.append(self.app.account_page)
return pages
@@ -766,7 +764,7 @@
return super(CuminTable.Links, self).do_render(session)
def get_connection(self, session):
- return self.app.model.get_sql_connection()
+ return self.app.database.get_connection()
def do_process(self, session, *args):
super(CuminTable, self).do_process(session, *args)
@@ -1304,6 +1302,7 @@
class Wait(Widget):
pass
+# XXX this should move somewhere else
class LoginSession(object):
def __init__(self, app, user):
self.app = app
@@ -1323,7 +1322,7 @@
self.add_attribute(self.user)
def do_process(self, session):
- conn = self.app.model.get_sql_connection()
+ conn = self.app.database.get_connection()
setattr(session, "cursor", conn.cursor())
if self.authorized(session):
@@ -1340,21 +1339,14 @@
if login.created > when:
return True
- elif self.app.config.user:
- user = Subject.getByName(self.app.config.user)
+ elif self.app.user:
+ cls = self.app.model.com_redhat_cumin.User
+ users = cls.get_selection(session.cursor, name=self.app.user)
- # cls = self.app.rosemary.com_redhat_cumin.User
- # name_literal = "'%s'" % self.app.config.user
- # user = None
+ if not users:
+ raise Exception("User '%s' not found" % self.app.user)
- # for obj in cls.get_selection(session.cursor, name=name_literal):
- # user = obj
- # break
-
- if user is None:
- raise Exception("User '%s' not found" %
self.app.config.user)
-
- login = LoginSession(self.app, user)
+ login = LoginSession(self.app, users[0])
session.client_session.attributes["login_session"] = login
return True
Deleted: mgmt/newdata/mint/bin/mint-admin
===================================================================
--- mgmt/newdata/mint/bin/mint-admin 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/bin/mint-admin 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,10 +0,0 @@
-#!/usr/bin/python
-
-from mint.tools import MintAdminTool
-
-if __name__ == "__main__":
- try:
- tool = MintAdminTool("mint-admin")
- tool.main()
- except KeyboardInterrupt:
- pass
Deleted: mgmt/newdata/mint/bin/mint-admin-test
===================================================================
--- mgmt/newdata/mint/bin/mint-admin-test 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/bin/mint-admin-test 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,32 +0,0 @@
-#!/bin/bash
-
-id="$RANDOM"
-code=0
-tmpdir=$(mktemp -d)
-trap "rm -rf ${tmpdir}" EXIT
-
-while read command; do
- echo -n "Testing command '$command'..."
-
- $command &> "${tmpdir}/output"
-
- if [[ $? == 0 ]]; then
- echo " OK"
- else
- echo
- echo "Command failed with exit code $?"
- echo "Output:"
- cat "${tmpdir}/output"
- code=1
- fi
-done <<EOF
-mint-admin --help
-mint-admin add-user "$id" changeme
-mint-admin assign "$id" admin
-mint-admin unassign "$id" admin
-mint-admin list-users
-mint-admin remove-user "$id" --force
-mint-admin list-roles
-EOF
-
-exit "$code"
Deleted: mgmt/newdata/mint/bin/mint-bench
===================================================================
--- mgmt/newdata/mint/bin/mint-bench 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/bin/mint-bench 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,56 +0,0 @@
-#!/usr/bin/python
-
-import sys, os, logging, mint.sql
-
-from mint.tools import MintBenchTool
-
-def do_main():
- MintBenchTool("mint-bench").main()
-
-def main():
- if "--profile" in sys.argv:
- sys.argv.remove("--profile")
-
- from profile import Profile
- from pstats import Stats
-
- prof = Profile()
-
- print "Calibrating"
-
- biases = list()
-
- for i in range(4):
- bias = prof.calibrate(20000)
- biases.append(bias)
- print i, bias
-
- prof.bias = sum(biases) / float(5)
-
- print "Using bias %f" % prof.bias
-
- try:
- prof.run("do_main()")
- except KeyboardInterrupt:
- pass
-
- file = "/tmp/cumin-test-stats"
-
- prof.dump_stats(file)
-
- stats = Stats(file)
-
- stats.sort_stats("cumulative").print_stats(15)
- stats.sort_stats("time").print_stats(15)
-
- stats.strip_dirs()
- else:
- do_main()
-
-if __name__ == "__main__":
- mint.sql.profile = mint.sql.SqlProfile()
-
- try:
- main()
- except KeyboardInterrupt:
- mint.sql.profile.report()
Deleted: mgmt/newdata/mint/bin/mint-database
===================================================================
--- mgmt/newdata/mint/bin/mint-database 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/bin/mint-database 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,205 +0,0 @@
-#!/bin/bash -e
-
-if [[ "$EUID" != "0" ]]; then
- echo "This script must be run as root"
- exit 2
-fi
-
-pgdata="/var/lib/pgsql/data"
-pglog="${pgdata}/pg_log"
-pghbaconf="${pgdata}/pg_hba.conf"
-dbname="cumin"
-
-function check-environment {
- which rpm > /dev/null
- run "rpm -q postgresql-server"
-}
-
-function check-postgresql {
- # Is it installed?
- # Is it initialized?
- # Is it running?
-
- test -d "$pgdata" || {
- echo "The database is not initialized. Run 'mint-database
configure'."
- exit 1
- }
-
- run "/sbin/service postgresql status" || {
- echo "The database is not running. Run '/sbin/service postgresql
start'."
- exit 1
- }
-}
-
-function confirmed {
- while [[ "$confirm" != "yes" ]]; do
- echo -n "Type 'yes' to proceed or [Ctrl-c] to exit: "
- read confirm
- done
-
- return 0
-}
-
-function format-output {
- while read line; do
- echo " | $line"
- done
-}
-
-function run {
- echo " | \$ $1"
-
- if [[ "$2" ]]; then
- su - postgres -c "$1" | format-output 2>&1
- else
- $1 | format-output 2>&1
- fi
-
- return ${PIPESTATUS[0]}
-}
-
-function initdb {
- run "initdb --pgdata='$pgdata' --auth='ident sameuser'"
postgres
- run "mkdir '$pglog'" postgres
- run "chmod 700 '$pglog'" postgres
-
- /sbin/restorecon -R "$pgdata"
-}
-
-function modify-postgresql-config {
- python <<EOF
-import re
-
-comment_or_empty_line_pattern = re.compile('^\w*#|^\w*$')
-record_pattern = re.compile('^\w*(local|host|hostssl|hostnossl)')
-
-database_name = "cumin"
-path = "$pghbaconf"
-file = open(path, "r")
-
-lines = list()
-first_record_index = None
-
-for i, line in enumerate(file):
- lines.append(line)
-
- if record_pattern.match(line):
- if first_record_index is None:
- first_record_index = i
-
- tokens = line.split()
-
- if tokens[1] == database_name:
- raise Exception("This file already contains a " + \
- "%s record" % database_name)
- elif comment_or_empty_line_pattern.match(line):
- pass
- else:
- raise Exception("This doesn't look like a pg_hba.conf file")
-
-file.close()
-
-if first_record_index is None:
- first_record_index = len(lines)
-
-line = "host %s %s ::1/128 trust\n" % (database_name, database_name)
-lines.insert(first_record_index, line)
-line = "host %s %s 127.0.0.1/32 trust\n" % (database_name, database_name)
-lines.insert(first_record_index, line)
-
-file = open(path, "w")
-
-for line in lines:
- file.write(line)
-
-file.close()
-EOF
-
- return $?
-}
-
-case "$1" in
- status)
- check-environment
- check-postgresql
-
- # Is it configured to be accessible?
- # Is it accessible?
- # Does it have a schema loaded?
-
- run "psql -d cumin -U cumin -h localhost -c '\q'" postgres ||
{
- echo "The database is not accessible."
- exit 1
- }
-
- echo "The database is ready."
- ;;
- configure)
- check-environment
-
- if test -f $pghbaconf && run "grep ${dbname} ${pghbaconf}"; then
- echo "The database server appears to have been configured already."
- exit 1
- fi
-
- i_stopped_postgres=""
-
- if run "/sbin/service postgresql status"; then
- echo "The database server is running. To proceed with
configuration,"
- echo "I need to stop it (I'll start it again after I'm
done)."
-
- if confirmed; then
- run "/sbin/service postgresql stop"
- i_stopped_postgres="yes"
- fi
- fi
-
- test -d "$pgdata" || {
- echo "The database server is not initialized. To proceed, I need
to"
- echo "initialize it."
-
- if confirmed; then
- initdb
- fi
- }
-
- modify-postgresql-config
-
- if [[ "$i_stopped_postgres" == "yes" ]]; then
- run "/sbin/service postgresql start"
- fi
-
- echo "The database server is configured. Make sure postgresql is
running"
- echo "and run 'mint-database create'."
-
- # chkconfig stuff ?
- ;;
- create)
- check-environment
- check-postgresql
-
- run "createuser --superuser ${dbname}" postgres
- run "createdb --owner=${dbname} ${dbname}" postgres
-
- echo "The database is created. You can now run 'mint-admin
load-schema'."
- ;;
- destroy)
- check-environment
- check-postgresql
-
- run "dropdb ${dbname}" postgres
- run "dropuser ${dbname}" postgres
-
- echo "The database is destroyed."
- ;;
- *)
- echo "Configure and check the mint database"
- echo "Usage: mint-database COMMAND"
- echo "Commands:"
- echo " status Check the database"
- echo " configure Modify the database server configuration"
- echo " create Create the mint user and database"
- echo " destroy Discard the mint user, database, and all data"
- exit 1
- ;;
-esac
Deleted: mgmt/newdata/mint/bin/mint-demo
===================================================================
--- mgmt/newdata/mint/bin/mint-demo 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/bin/mint-demo 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,35 +0,0 @@
-#!/bin/bash -e
-
-function check {
- which psql > /dev/null
- psql -d cumin -U cumin -h localhost -c '\q' || {
- echo "The database is not ready; use mint-database to prepare it"
- exit 1
- }
-}
-
-function load-data {
- mint-admin add-user guest guest
- mint-admin assign guest admin
- python -c "from mint.demo import main; main()"
-}
-
-case "$1" in
- load)
- check
- load-data
- ;;
- reload)
- check
- mint-admin reload-schema --force || :
- load-data
- ;;
- *)
- echo "Utilities for mint demos"
- echo "Usage: mint-demo COMMAND"
- echo "Commands:"
- echo " load Load a guest user and other demo data"
- echo " reload First drop the schema and then load"
- exit 1
- ;;
-esac
Deleted: mgmt/newdata/mint/bin/mint-server
===================================================================
--- mgmt/newdata/mint/bin/mint-server 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/bin/mint-server 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,14 +0,0 @@
-#!/usr/bin/python
-
-import sys, os, logging
-
-from mint.tools import MintServerTool
-
-def main():
- MintServerTool("mint-server").main()
-
-if __name__ == "__main__":
- try:
- main()
- except KeyboardInterrupt:
- pass
Deleted: mgmt/newdata/mint/bin/mint-vacuumdb
===================================================================
--- mgmt/newdata/mint/bin/mint-vacuumdb 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/bin/mint-vacuumdb 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,25 +0,0 @@
-#!/bin/bash
-
-TABLES="slot slot_stats job job_stats sysimage sysimage_stats"
-
-function check_pid() {
- PID=`pidof postmaster`
- if [[ $? == 1 ]]; then
- echo "Postgresql is not running, can't perform vacuum"
- exit
- fi
-}
-
-if [ "$1" == "all" ]; then
- check_pid
- /usr/bin/vacuumdb --dbname=cumin --analyze --echo --verbose --host=localhost --username=cumin
-elif [ "$1" == "tables" ]; then
- check_pid
- for t in $TABLES ; do
- /usr/bin/vacuumdb --dbname=cumin --table="$t" --analyze --echo --verbose --host=localhost --username=cumin;
- done
-else
- echo "Usage: mint-vacuumdb [all | tables]"
- echo " all = performs a database-wide vacuum/analyze on all tables"
- echo " tables = performs a vacuum/analyze on a few pre-selected tables"
-fi
Modified: mgmt/newdata/mint/python/mint/database.py
===================================================================
--- mgmt/newdata/mint/python/mint/database.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/python/mint/database.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,23 +1,24 @@
-from psycopg2 import ProgrammingError
-from sqlobject import connectionForURI, sqlhub
+import psycopg2
-from model import MintInfo, Role
from util import *
log = logging.getLogger("mint.database")
class MintDatabase(object):
- def __init__(self, app):
+ def __init__(self, app, dsn):
self.app = app
+ self.dsn = dsn
def get_connection(self):
- return connectionForURI(self.app.config.data).getConnection()
+ return psycopg2.connect(self.dsn)
def check(self):
+ log.info("Checking %s", self)
+
self.check_connection()
def init(self):
- sqlhub.processConnection = connectionForURI(self.app.config.data)
+ log.info("Initializing %s", self)
def check_connection(self):
conn = self.get_connection()
@@ -25,6 +26,8 @@
try:
cursor = conn.cursor()
cursor.execute("select now()")
+
+ log.debug("Database is talking at '%s'", self.dsn)
finally:
conn.close()
@@ -39,7 +42,7 @@
try:
cursor.execute("drop schema public cascade")
- except ProgrammingError:
+ except psycopg2.ProgrammingError:
log.warn("The schema is already dropped")
conn.commit()
@@ -75,70 +78,6 @@
result.append(tmpStmt.lstrip())
return result
- def load_schema(self):
- paths = list()
-
- paths.append(os.path.join(self.app.config.home, "sql", "schema.sql"))
- paths.append(os.path.join(self.app.config.home, "sql", "indexes.sql"))
- paths.append(os.path.join(self.app.config.home, "sql", "triggers.sql"))
- paths.append(os.path.join(self.app.config.home, "sql", "rosemary.sql"))
-
- scripts = list()
-
- for path in paths:
- file = open(path, "r")
-
- try:
- scripts.append((path, file.read()))
- finally:
- file.close()
-
- conn = self.get_connection()
-
- try:
- cursor = conn.cursor()
-
- try:
- cursor.execute("create schema public");
- except:
- conn.rollback()
- pass
-
- for path, text in scripts:
- stmts = self.__splitSQLStatements(text)
- count = 0
-
- for stmt in stmts:
- stmt = stmt.strip()
-
- if stmt:
- try:
- cursor.execute(stmt)
- except Exception, e:
- print "Failed executing statement:"
- print stmt
-
- raise e
-
- count += 1
-
- print "Executed %i statements from file '%s'" % (count,
path)
-
- conn.commit()
-
- info = MintInfo(version="0.1")
- info.sync()
-
- # Standard roles
-
- user = Role(name="user")
- user.sync()
-
- admin = Role(name="admin")
- admin.sync()
- finally:
- conn.close()
-
def check_schema(self):
conn = self.get_connection()
@@ -158,3 +97,6 @@
print "No schema present"
finally:
conn.close()
+
+ def __repr__(self):
+ return self.__class__.__name__
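
Note: MintDatabase now holds a psycopg2 DSN and hands out raw connections instead of going through SQLObject's connectionForURI. A small usage sketch of the same check_connection pattern shown above; the DSN string is an example value:

    import psycopg2

    dsn = "dbname=cumin user=cumin host=localhost"

    conn = psycopg2.connect(dsn)
    try:
        cursor = conn.cursor()
        cursor.execute("select now()")
        print cursor.fetchone()[0]
    finally:
        conn.close()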
Modified: mgmt/newdata/mint/python/mint/demo.py
===================================================================
--- mgmt/newdata/mint/python/mint/demo.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/python/mint/demo.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -5,7 +5,7 @@
self.app = app
def load(self):
- cls = self.app.model.rosemary.com_redhat_cumin.BrokerGroup
+ cls = self.app.model.com_redhat_cumin.BrokerGroup
conn = self.app.database.get_connection()
cursor = conn.cursor()
Modified: mgmt/newdata/mint/python/mint/expire.py
===================================================================
--- mgmt/newdata/mint/python/mint/expire.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/python/mint/expire.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,6 +1,4 @@
from newupdate import *
-from schema import *
-from sql import *
from util import *
import mint
@@ -8,28 +6,15 @@
log = logging.getLogger("mint.expire")
class ExpireThread(MintDaemonThread):
- def __init__(self, app):
- super(ExpireThread, self).__init__(app)
-
- self.keep_curr_stats = False
-
- self.ops = []
- self.attrs = dict()
-
def init(self):
+ log.debug("Initializing %s", self)
+
frequency = self.app.expire_frequency
threshold = self.app.expire_threshold
- for cls in mint.schema.statsClasses:
- self.ops.append(SqlExpire(eval(cls), self.keep_curr_stats))
- for cls in mint.schema.entityClasses:
- self.ops.append(SqlExpire(eval(cls), self.keep_curr_stats))
+ frequency_out, frequency_unit = convert_time_units(frequency)
+ threshold_out, threshold_unit = convert_time_units(threshold)
- self.attrs["threshold"] = threshold
-
- frequency_out, frequency_unit = self.__convertTimeUnits(frequency)
- threshold_out, threshold_unit = self.__convertTimeUnits(threshold)
-
args = (threshold_out, threshold_unit, frequency_out, frequency_unit)
log.debug("Expiring database records older than %d %s, every %d %s" \
@@ -47,21 +32,6 @@
sleep(frequency)
- def __convertTimeUnits(self, t):
- if t / (24 * 3600) >= 1:
- t_out = t / (24 * 3600)
- t_unit = "days"
- elif t / 3600 >= 1:
- t_out = t / 3600
- t_unit = "hours"
- elif t / 60 >= 1:
- t_out = t / 60
- t_unit = "minutes"
- else:
- t_out = t
- t_unit = "seconds"
- return (t_out, t_unit)
-
class ExpireUpdate(Update):
def do_process(self, conn, stats):
seconds = self.model.app.expire_threshold
@@ -70,7 +40,7 @@
count = 0
- for pkg in self.model.rosemary._packages:
+ for pkg in self.model._packages:
for cls in pkg._classes:
count += self.delete_samples(conn, cls, seconds)
@@ -89,3 +59,18 @@
return cursor.rowcount
finally:
cursor.close()
+
+def convert_time_units(t):
+ if t / (24 * 3600) >= 1:
+ t_out = t / (24 * 3600)
+ t_unit = "days"
+ elif t / 3600 >= 1:
+ t_out = t / 3600
+ t_unit = "hours"
+ elif t / 60 >= 1:
+ t_out = t / 60
+ t_unit = "minutes"
+ else:
+ t_out = t
+ t_unit = "seconds"
+ return (t_out, t_unit)
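
Note: the module-level convert_time_units helper added above picks the largest unit that yields a whole count of at least one (truncating with integer division). For example:

    >>> convert_time_units(600)
    (10, 'minutes')
    >>> convert_time_units(24 * 3600)
    (1, 'days')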
Modified: mgmt/newdata/mint/python/mint/main.py
===================================================================
--- mgmt/newdata/mint/python/mint/main.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/python/mint/main.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -2,52 +2,62 @@
from expire import ExpireThread
from model import MintModel
from newupdate import UpdateThread
+from session import MintSession
from vacuum import VacuumThread
from util import *
log = logging.getLogger("mint.main")
-class Mint(Lifecycle):
- def __init__(self, config):
- self.log = log
+class Mint(object):
+ def __init__(self, model_dir, broker_uri, database_dsn):
+ self.model = MintModel(self, model_dir)
+ self.model.sql_logging_enabled = False
- self.config = config
- self.database = MintDatabase(self)
- self.model = MintModel(self)
+ self.session = MintSession(self, broker_uri)
+ self.database = MintDatabase(self, database_dsn)
self.update_enabled = True
self.update_thread = UpdateThread(self)
self.expire_enabled = True
- self.expire_frequency = self.config.expire_frequency
- self.expire_threshold = self.config.expire_threshold
+ self.expire_frequency = 600
+ self.expire_threshold = 24 * 3600
self.expire_thread = ExpireThread(self)
self.vacuum_enabled = True
self.vacuum_thread = VacuumThread(self)
def check(self):
- self.database.check()
+ log.info("Checking %s", self)
+
self.model.check()
+ self.session.check()
+ self.database.check()
- def do_init(self):
- self.database.init()
- self.model.init()
+ def init(self):
+ log.info("Initializing %s", self)
def state(cond):
return cond and "enabled" or "disabled"
log.info("Updates are %s", state(self.update_enabled))
log.info("Expiration is %s", state(self.expire_enabled))
+ log.info("Vacuum is %s", state(self.vacuum_enabled))
+ self.model.init()
+ self.session.init()
+ self.database.init()
+
self.update_thread.init()
self.expire_thread.init()
self.vacuum_thread.init()
- def do_start(self):
- self.model.start()
+ def start(self):
+ log.info("Starting %s", self)
+ self.session.start()
+
if self.update_enabled:
self.update_thread.start()
@@ -57,9 +67,11 @@
if self.vacuum_enabled:
self.vacuum_thread.start()
- def do_stop(self):
- self.model.stop()
+ def stop(self):
+ log.info("Stopping %s", self)
+ self.session.stop()
+
if self.update_enabled:
self.update_thread.stop()
@@ -69,49 +81,9 @@
if self.vacuum_enabled:
self.vacuum_thread.stop()
-class MintConfig(Config):
- def __init__(self):
- super(MintConfig, self).__init__()
+ def __repr__(self):
+ return self.__class__.__name__
- hdef = os.path.normpath("/var/lib/cumin")
- hdef = os.environ.get("CUMIN_HOME", hdef)
- self.home = os.environ.get("MINT_HOME", hdef)
-
- if not os.path.isdir(self.home):
- raise Exception("Home path '%s' is not a directory")
-
- param = ConfigParameter(self, "data", str)
- param.default = "postgresql://mint@localhost/mint"
-
- param = ConfigParameter(self, "qmf", str)
- param.default = "amqp://localhost"
-
- param = ConfigParameter(self, "log-file", str)
- param.default = os.path.join(self.home, "log", "mint.log")
-
- param = ConfigParameter(self, "log-level", str)
- param.default = "warn"
-
- param = ConfigParameter(self, "debug", bool)
- param.default = False
-
- param = ConfigParameter(self, "expire-frequency", int)
- param.default = 600 # 10 minutes
-
- param = ConfigParameter(self, "expire-threshold", int)
- param.default = 24 * 3600 # 1 day
-
- def init(self):
- super(MintConfig, self).init()
-
- self.load_file(os.path.join(self.home, "etc", "cumin.conf"))
- self.load_file(os.path.join(self.home, "etc", "mint.conf"))
-
- self.load_file(os.path.join(os.path.expanduser("~"), ".cumin.conf"))
- self.load_file(os.path.join(os.path.expanduser("~"), ".mint.conf"))
-
- enable_logging("mint", self.log_level, self.log_file)
-
def get_addr_for_vhost(vhost):
broker = vhost.broker
host = broker.system.nodeName
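
Note: like Cumin, the Mint daemon now takes its model directory, broker URI, and database DSN directly and is driven through check/init/start/stop; the MintConfig class removed above no longer supplies these values. A sketch under that assumption (argument values and the model path are examples only):

    from mint.main import Mint

    mint = Mint("/usr/share/mint/model",
                broker_uri="amqp://localhost",
                database_dsn="dbname=cumin user=cumin host=localhost")

    mint.check()
    mint.init()
    mint.start()    # starts the update, expire, and vacuum threads as enabled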
Modified: mgmt/newdata/mint/python/mint/model.py
===================================================================
--- mgmt/newdata/mint/python/mint/model.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/python/mint/model.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,90 +1,50 @@
from rosemary.model import *
-from sqlobject import *
-from cache import MintCache
from newupdate import *
-from schema import *
-from schemalocal import *
from util import *
-import mint.schema
-
-from qmf.console import ClassKey, Console, Session
-
log = logging.getLogger("mint.model")
-class MintModel(Lifecycle):
- def __init__(self, app):
- self.log = log
+class MintModel(RosemaryModel):
+ def __init__(self, app, model_dir):
+ super(MintModel, self).__init__()
+
self.app = app
+ self.model_dir = model_dir
- assert mint.schema.model is None
- mint.schema.model = self
+ self.agents_by_id = dict()
- self.rosemary = RosemaryModel()
- self.rosemary.sql_logging_enabled = False
-
- self.qmf_session = None
- self.qmf_brokers = list()
-
- # qmfAgentId => MintAgent
- self.agents = dict()
-
# int seq => callable
self.outstanding_method_calls = dict()
self.lock = RLock()
def check(self):
- pass
+ log.info("Checking %s", self)
- def do_init(self):
- assert self.qmf_session is None
+ assert os.path.isdir(self.model_dir)
- self.qmf_session = Session(MintConsole(self),
- manageConnections=True,
- rcvObjects=self.app.update_enabled)
+ log.debug("Model dir exists at '%s'", self.model_dir)
- self.rosemary.load_xml_dir(os.path.join(self.app.config.home, "xml"))
- self.rosemary.init()
+ def init(self):
+ log.info("Initializing %s", self)
- def do_start(self):
- # Clean up any transient objects that a previous instance may
- # have left behind in the DB; it's basically an unconstrained
- # agent disconnect update, for any agent
+ self.load_model_dir(self.model_dir)
- # XXX
- #up = AgentDisconnectUpdate(None)
- #self.app.update_thread.enqueue(up)
+ super(MintModel, self).init()
- uris = [x.strip() for x in self.app.config.qmf.split(",")]
-
- for uri in uris:
- self.add_broker(uri)
-
- def do_stop(self):
- for qmf_broker in self.qmf_brokers:
- self.qmf_session.delBroker(qmf_broker)
-
- def add_broker(self, url):
- log.info("Adding qmf broker at %s", url)
-
- self.lock.acquire()
- try:
- qmf_broker = self.qmf_session.addBroker(url)
- self.qmf_brokers.append(qmf_broker)
- finally:
- self.lock.release()
-
def get_agent(self, qmf_agent):
id = qmf_agent.getAgentBank()
self.lock.acquire()
try:
- return self.agents[id]
+ return self.agents_by_id[id]
finally:
self.lock.release()
+ def __repr__(self):
+ return "%s(%s)" % (self.__class__.__name__, self.model_dir)
+
class MintAgent(object):
def __init__(self, model, qmf_agent):
self.model = model
@@ -94,19 +54,12 @@
self.last_heartbeat = None
- # qmfObjectId => int database id
- self.database_ids = MintCache()
-
self.objects_by_id = dict()
- # qmfObjectId => list of ModelUpdate objects
- # XXX we're no longer using this; remove it
- self.deferred_updates = defaultdict(list)
-
self.model.lock.acquire()
try:
- assert self.id not in self.model.agents
- self.model.agents[self.id] = self
+ assert self.id not in self.model.agents_by_id
+ self.model.agents_by_id[self.id] = self
finally:
self.model.lock.release()
@@ -137,7 +90,7 @@
def delete(self):
self.model.lock.acquire()
try:
- del self.model.agents[self.id]
+ del self.model.agents_by_id[self.id]
finally:
self.model.lock.release()
@@ -145,92 +98,3 @@
def __repr__(self):
return "%s(%s)" % (self.__class__.__name__, self.id)
-
-class MintConsole(Console):
- def __init__(self, model):
- self.model = model
-
- def brokerConnected(self, qmf_broker):
- log.info("Broker at %s:%i is connected",
- qmf_broker.host, qmf_broker.port)
-
- def brokerInfo(self, qmf_broker):
- log.info("Broker info from %s", qmf_broker)
-
- def brokerDisconnected(self, qmf_broker):
- log.info("Broker at %s:%i is disconnected",
- qmf_broker.host, qmf_broker.port)
-
- def newAgent(self, qmf_agent):
- log.info("Creating %s", qmf_agent)
-
- MintAgent(self.model, qmf_agent)
-
- def delAgent(self, qmf_agent):
- log.info("Deleting %s", qmf_agent)
-
- try:
- agent = self.model.get_agent(qmf_agent)
- except KeyError:
- return
-
- agent.delete()
-
- if self.model.app.update_thread.isAlive():
- up = AgentDelete(self.model, agent)
- self.model.app.update_thread.enqueue(up)
-
- def heartbeat(self, qmf_agent, timestamp):
- timestamp = timestamp / 1000000000
-
- try:
- agent = self.model.get_agent(qmf_agent)
- except KeyError:
- return
-
- agent.last_heartbeat = datetime.fromtimestamp(timestamp)
-
- def newPackage(self, name):
- log.info("New package %s", name)
-
- def newClass(self, kind, classKey):
- log.info("New class %s", classKey)
-
- # XXX I want to store class keys using this, but I can't,
- # because I don't get any agent info; instead
-
- def objectProps(self, broker, obj):
- agent = self.model.get_agent(obj._agent)
-
- if self.model.app.update_thread.isAlive():
- if obj.getTimestamps()[2]:
- up = ObjectDelete(self.model, agent, obj)
- else:
- up = ObjectUpdate(self.model, agent, obj)
-
- self.model.app.update_thread.enqueue(up)
-
- def objectStats(self, broker, obj):
- print "objectStats!", broker, obj
-
- agent = self.get_agent(obj._agent)
-
- if self.model.app.update_thread.isAlive():
- up = ObjectAddSample(self.model, agent, obj)
- self.model.app.update_thread.enqueue(up)
-
- def event(self, broker, event):
- """ Invoked when an event is raised. """
- pass
-
- def methodResponse(self, broker, seq, response):
- log.info("Method response for request %i received from %s",
- seq, broker)
- log.debug("Response: %s", response)
-
- self.model.lock.acquire()
- try:
- callback = self.model.outstanding_method_calls.pop(seq)
- callback(response.text, response.outArgs)
- finally:
- self.model.lock.release()
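
Note on the hunk above: MintModel now tracks agents in agents_by_id, keyed by the QMF agent bank, and every lookup or mutation takes the shared RLock. A rough sketch of that guarded-registry pattern, with hypothetical Registry and agent names standing in for MintModel/MintAgent:

    from threading import RLock

    class Registry(object):
        def __init__(self):
            # id => agent, guarded by a re-entrant lock so code that
            # already holds the lock can still call back into the registry
            self.agents_by_id = dict()
            self.lock = RLock()

        def add(self, id, agent):
            self.lock.acquire()
            try:
                assert id not in self.agents_by_id
                self.agents_by_id[id] = agent
            finally:
                self.lock.release()

        def get(self, id):
            self.lock.acquire()
            try:
                # Raises KeyError if the agent is unknown; callers decide
                # whether that is an error or just an agent already gone
                return self.agents_by_id[id]
            finally:
                self.lock.release()

        def delete(self, id):
            self.lock.acquire()
            try:
                del self.agents_by_id[id]
            finally:
                self.lock.release()
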
Modified: mgmt/newdata/mint/python/mint/newupdate.py
===================================================================
--- mgmt/newdata/mint/python/mint/newupdate.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/python/mint/newupdate.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -105,7 +105,7 @@
columns = list()
- self.process_qmf_attributes(obj, columns)
+ self.process_headers(obj, columns)
self.process_properties(obj, columns)
cursor = conn.cursor()
@@ -123,7 +123,7 @@
name = class_key.getPackageName()
try:
- pkg = self.model.rosemary._packages_by_name[name]
+ pkg = self.model._packages_by_name[name]
except KeyError:
raise PackageUnknown(name)
@@ -160,7 +160,7 @@
return obj
- def process_qmf_attributes(self, obj, columns):
+ def process_headers(self, obj, columns):
table = obj._class.sql_table
update_time, create_time, delete_time = self.object.getTimestamps()
@@ -375,10 +375,10 @@
id = self.agent.id
try:
- for pkg in self.model.rosemary._packages:
+ for pkg in self.model._packages:
for cls in pkg._classes:
for obj in cls.get_selection(cursor, _qmf_agent_id=id):
- obj.delete()
+ obj.delete(cursor)
print "Bam!", obj
finally:
cursor.close()
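
Note on the hunk above: with MintModel deriving from RosemaryModel, the update code resolves packages directly on the model (self.model._packages_by_name) and raises PackageUnknown when a class key names a package the model has not loaded. A small sketch of that lookup shape, under the assumption of a plain in-memory package map (the real rosemary API may differ):

    class PackageUnknown(Exception):
        pass

    class Model(object):
        def __init__(self):
            # package name => package object, filled in when the model loads
            self._packages_by_name = dict()

        def package_for(self, name):
            try:
                return self._packages_by_name[name]
            except KeyError:
                # Unknown packages are reported explicitly so the update
                # thread can skip or log the offending object
                raise PackageUnknown(name)
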
Deleted: mgmt/newdata/mint/python/mint/schema.py
===================================================================
--- mgmt/newdata/mint/python/mint/schema.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/python/mint/schema.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,1829 +0,0 @@
-from sqlobject import *
-
-from mint.util import *
-
-model = None
-
-class Slot(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('SlotStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('SlotStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- Pool = StringCol(default=None)
- System = StringCol(default=None)
- AccountingGroup = StringCol(default=None)
- Activity = StringCol(default=None)
- Arch = StringCol(default=None)
- CheckpointPlatform = StringCol(default=None)
- ClientMachine = StringCol(default=None)
- ConcurrencyLimits = StringCol(default=None)
- Cpus = BigIntCol(default=None)
- CurrentRank = FloatCol(default=None)
- Disk = BigIntCol(default=None)
- EnteredCurrentActivity = TimestampCol(default=None)
- EnteredCurrentState = TimestampCol(default=None)
- FileSystemDomain = StringCol(default=None)
- GlobalJobId = StringCol(default=None)
- IsValidCheckpointPlatform = StringCol(default=None)
- JobId = StringCol(default=None)
- JobStart = TimestampCol(default=None)
- KFlops = BigIntCol(default=None)
- LastBenchmark = TimestampCol(default=None)
- LastFetchWorkCompleted = TimestampCol(default=None)
- LastFetchWorkSpawned = TimestampCol(default=None)
- LastPeriodicCheckpoint = TimestampCol(default=None)
- Machine = StringCol(default=None)
- MaxJobRetirementTime = StringCol(default=None)
- Memory = BigIntCol(default=None)
- Mips = BigIntCol(default=None)
- MyAddress = StringCol(default=None)
- Name = StringCol(default=None)
- NextFetchWorkDelay = IntCol(default=None)
- OpSys = StringCol(default=None)
- PreemptingConcurrencyLimits = StringCol(default=None)
- PreemptingOwner = StringCol(default=None)
- PreemptingUser = StringCol(default=None)
- PreemptingRank = FloatCol(default=None)
- RemoteOwner = StringCol(default=None)
- RemoteUser = StringCol(default=None)
- Requirements = StringCol(default=None)
- Rank = StringCol(default=None)
- SlotID = BigIntCol(default=None)
- Start = StringCol(default=None)
- StarterAbilityList = StringCol(default=None)
- State = StringCol(default=None)
- TimeToLive = BigIntCol(default=None)
- TotalClaimRunTime = BigIntCol(default=None)
- TotalClaimSuspendTime = BigIntCol(default=None)
- TotalCpus = BigIntCol(default=None)
- TotalDisk = BigIntCol(default=None)
- TotalJobRunTime = BigIntCol(default=None)
- TotalJobSuspendTime = BigIntCol(default=None)
- TotalMemory = BigIntCol(default=None)
- TotalSlots = BigIntCol(default=None)
- TotalVirtualMemory = BigIntCol(default=None)
- UidDomain = StringCol(default=None)
- VirtualMemory = BigIntCol(default=None)
- WindowsBuildNumber = BigIntCol(default=None)
- WindowsMajorVersion = BigIntCol(default=None)
- WindowsMinorVersion = BigIntCol(default=None)
-
- CondorPlatform = StringCol(default=None)
- CondorVersion = StringCol(default=None)
- DaemonStartTime = TimestampCol(default=None)
-
-
-class SlotStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- slot = ForeignKey('Slot', cascade='null', default=None)
- ClockDay = BigIntCol(default=None)
- ClockMin = BigIntCol(default=None)
- CondorLoadAvg = FloatCol(default=None)
- ConsoleIdle = BigIntCol(default=None)
- ImageSize = BigIntCol(default=None)
- KeyboardIdle = BigIntCol(default=None)
- LoadAvg = FloatCol(default=None)
- MyCurrentTime = TimestampCol(default=None)
- TotalCondorLoadAvg = FloatCol(default=None)
- TotalLoadAvg = FloatCol(default=None)
- TotalTimeBackfillBusy = BigIntCol(default=None)
- TotalTimeBackfillIdle = BigIntCol(default=None)
- TotalTimeBackfillKilling = BigIntCol(default=None)
- TotalTimeClaimedBusy = BigIntCol(default=None)
- TotalTimeClaimedIdle = BigIntCol(default=None)
- TotalTimeClaimedRetiring = BigIntCol(default=None)
- TotalTimeClaimedSuspended = BigIntCol(default=None)
- TotalTimeMatchedIdle = BigIntCol(default=None)
- TotalTimeOwnerIdle = BigIntCol(default=None)
- TotalTimePreemptingKilling = BigIntCol(default=None)
- TotalTimePreemptingVacating = BigIntCol(default=None)
- TotalTimeUnclaimedBenchmarking = BigIntCol(default=None)
- TotalTimeUnclaimedIdle = BigIntCol(default=None)
-
- MonitorSelfAge = BigIntCol(default=None)
- MonitorSelfCPUUsage = FloatCol(default=None)
- MonitorSelfImageSize = FloatCol(default=None)
- MonitorSelfRegisteredSocketCount = BigIntCol(default=None)
- MonitorSelfResidentSetSize = BigIntCol(default=None)
- MonitorSelfTime = TimestampCol(default=None)
-
-
-
-
-class Scheduler(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('SchedulerStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('SchedulerStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- Pool = StringCol(default=None)
- System = StringCol(default=None)
- JobQueueBirthdate = TimestampCol(default=None)
- MaxJobsRunning = BigIntCol(default=None)
- Machine = StringCol(default=None)
- MyAddress = StringCol(default=None)
- Name = StringCol(default=None)
-
- CondorPlatform = StringCol(default=None)
- CondorVersion = StringCol(default=None)
- DaemonStartTime = TimestampCol(default=None)
-
-
- def Submit(self, callback, Ad, Id):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Ad is not None:
- args.append(Ad)
- if Id is not None:
- args.append(Id)
-
- agent.call_method(self, "Submit", callback, args)
-
- def GetAd(self, callback, Id, JobAd):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Id is not None:
- args.append(Id)
- if JobAd is not None:
- args.append(JobAd)
-
- agent.call_method(self, "GetAd", callback, args)
-
- def SetAttribute(self, callback, Id, Name, Value):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Id is not None:
- args.append(Id)
- if Name is not None:
- args.append(Name)
- if Value is not None:
- args.append(Value)
-
- agent.call_method(self, "SetAttribute", callback, args)
-
- def Hold(self, callback, Id, Reason):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Id is not None:
- args.append(Id)
- if Reason is not None:
- args.append(Reason)
-
- agent.call_method(self, "Hold", callback, args)
-
- def Release(self, callback, Id, Reason):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Id is not None:
- args.append(Id)
- if Reason is not None:
- args.append(Reason)
-
- agent.call_method(self, "Release", callback, args)
-
- def Remove(self, callback, Id, Reason):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Id is not None:
- args.append(Id)
- if Reason is not None:
- args.append(Reason)
-
- agent.call_method(self, "Remove", callback, args)
-
- def Fetch(self, callback, Id, File, Start, End, Data):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Id is not None:
- args.append(Id)
- if File is not None:
- args.append(File)
- if Start is not None:
- args.append(Start)
- if End is not None:
- args.append(End)
- if Data is not None:
- args.append(Data)
-
- agent.call_method(self, "Fetch", callback, args)
-
- def GetStates(self, callback, Submission, State, Count):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Submission is not None:
- args.append(Submission)
- if State is not None:
- args.append(State)
- if Count is not None:
- args.append(Count)
-
- agent.call_method(self, "GetStates", callback, args)
-
- def GetJobs(self, callback, Submission, Jobs):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Submission is not None:
- args.append(Submission)
- if Jobs is not None:
- args.append(Jobs)
-
- agent.call_method(self, "GetJobs", callback, args)
-
- def echo(self, callback, sequence, body):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if sequence is not None:
- args.append(sequence)
- if body is not None:
- args.append(body)
-
- agent.call_method(self, "echo", callback, args)
-
-class SchedulerStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- scheduler = ForeignKey('Scheduler', cascade='null', default=None)
- NumUsers = BigIntCol(default=None)
- TotalHeldJobs = BigIntCol(default=None)
- TotalIdleJobs = BigIntCol(default=None)
- TotalJobAds = BigIntCol(default=None)
- TotalRemovedJobs = BigIntCol(default=None)
- TotalRunningJobs = BigIntCol(default=None)
-
- MonitorSelfAge = BigIntCol(default=None)
- MonitorSelfCPUUsage = FloatCol(default=None)
- MonitorSelfImageSize = FloatCol(default=None)
- MonitorSelfRegisteredSocketCount = BigIntCol(default=None)
- MonitorSelfResidentSetSize = BigIntCol(default=None)
- MonitorSelfTime = TimestampCol(default=None)
-
-
-
-
-class Submitter(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('SubmitterStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('SubmitterStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- scheduler = ForeignKey('Scheduler', cascade='null', default=None)
- JobQueueBirthdate = TimestampCol(default=None)
- Machine = StringCol(default=None)
- Name = StringCol(default=None)
- ScheddName = StringCol(default=None)
-
-
-class SubmitterStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- submitter = ForeignKey('Submitter', cascade='null', default=None)
- HeldJobs = BigIntCol(default=None)
- IdleJobs = BigIntCol(default=None)
- RunningJobs = BigIntCol(default=None)
-
-
-
-
-class Negotiator(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('NegotiatorStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('NegotiatorStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- Pool = StringCol(default=None)
- System = StringCol(default=None)
- Name = StringCol(default=None)
- Machine = StringCol(default=None)
- MyAddress = StringCol(default=None)
-
- CondorPlatform = StringCol(default=None)
- CondorVersion = StringCol(default=None)
- DaemonStartTime = TimestampCol(default=None)
-
-
- def GetLimits(self, callback, Limits):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Limits is not None:
- args.append(Limits)
-
- agent.call_method(self, "GetLimits", callback, args)
-
- def SetLimit(self, callback, Name, Max):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Name is not None:
- args.append(Name)
- if Max is not None:
- args.append(Max)
-
- agent.call_method(self, "SetLimit", callback, args)
-
- def GetStats(self, callback, Name, Ad):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Name is not None:
- args.append(Name)
- if Ad is not None:
- args.append(Ad)
-
- agent.call_method(self, "GetStats", callback, args)
-
- def SetPriority(self, callback, Name, Priority):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Name is not None:
- args.append(Name)
- if Priority is not None:
- args.append(Priority)
-
- agent.call_method(self, "SetPriority", callback, args)
-
- def SetPriorityFactor(self, callback, Name, PriorityFactor):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Name is not None:
- args.append(Name)
- if PriorityFactor is not None:
- args.append(PriorityFactor)
-
- agent.call_method(self, "SetPriorityFactor", callback, args)
-
- def SetUsage(self, callback, Name, Usage):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Name is not None:
- args.append(Name)
- if Usage is not None:
- args.append(Usage)
-
- agent.call_method(self, "SetUsage", callback, args)
-
- def GetRawConfig(self, callback, Name, Value):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Name is not None:
- args.append(Name)
- if Value is not None:
- args.append(Value)
-
- agent.call_method(self, "GetRawConfig", callback, args)
-
- def SetRawConfig(self, callback, Name, Value):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Name is not None:
- args.append(Name)
- if Value is not None:
- args.append(Value)
-
- agent.call_method(self, "SetRawConfig", callback, args)
-
- def Reconfig(self, callback):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
-
- agent.call_method(self, "Reconfig", callback, args)
-
-class NegotiatorStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- negotiator = ForeignKey('Negotiator', cascade='null', default=None)
-
- MonitorSelfAge = BigIntCol(default=None)
- MonitorSelfCPUUsage = FloatCol(default=None)
- MonitorSelfImageSize = FloatCol(default=None)
- MonitorSelfRegisteredSocketCount = BigIntCol(default=None)
- MonitorSelfResidentSetSize = BigIntCol(default=None)
- MonitorSelfTime = TimestampCol(default=None)
-
-
-
-
-class Collector(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('CollectorStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('CollectorStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- Pool = StringCol(default=None)
- System = StringCol(default=None)
- CondorPlatform = StringCol(default=None)
- CondorVersion = StringCol(default=None)
- Name = StringCol(default=None)
- MyAddress = StringCol(default=None)
-
-
-class CollectorStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- collector = ForeignKey('Collector', cascade='null', default=None)
- RunningJobs = BigIntCol(default=None)
- IdleJobs = BigIntCol(default=None)
- HostsTotal = BigIntCol(default=None)
- HostsClaimed = BigIntCol(default=None)
- HostsUnclaimed = BigIntCol(default=None)
- HostsOwner = BigIntCol(default=None)
-
-
-
-
-class Master(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('MasterStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('MasterStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- Pool = StringCol(default=None)
- System = StringCol(default=None)
- Name = StringCol(default=None)
- Machine = StringCol(default=None)
- MyAddress = StringCol(default=None)
- RealUid = IntCol(default=None)
-
- CondorPlatform = StringCol(default=None)
- CondorVersion = StringCol(default=None)
- DaemonStartTime = TimestampCol(default=None)
-
-
- def Start(self, callback, Subsystem):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Subsystem is not None:
- args.append(Subsystem)
-
- agent.call_method(self, "Start", callback, args)
-
- def Stop(self, callback, Subsystem):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if Subsystem is not None:
- args.append(Subsystem)
-
- agent.call_method(self, "Stop", callback, args)
-
-class MasterStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- master = ForeignKey('Master', cascade='null', default=None)
-
- MonitorSelfAge = BigIntCol(default=None)
- MonitorSelfCPUUsage = FloatCol(default=None)
- MonitorSelfImageSize = FloatCol(default=None)
- MonitorSelfRegisteredSocketCount = BigIntCol(default=None)
- MonitorSelfResidentSetSize = BigIntCol(default=None)
- MonitorSelfTime = TimestampCol(default=None)
-
-
-
-
-class Grid(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('GridStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('GridStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- Pool = StringCol(default=None)
- Name = StringCol(default=None)
- ScheddName = StringCol(default=None)
- Owner = StringCol(default=None)
- JobLimit = BigIntCol(default=None)
- SubmitLimit = BigIntCol(default=None)
- GridResourceUnavailableTime = TimestampCol(default=None)
-
-
-class GridStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- grid = ForeignKey('Grid', cascade='null', default=None)
- NumJobs = BigIntCol(default=None)
- SubmitsInProgress = BigIntCol(default=None)
- SubmitsQueued = BigIntCol(default=None)
- SubmitsAllowed = BigIntCol(default=None)
- SubmitsWanted = BigIntCol(default=None)
- RunningJobs = BigIntCol(default=None)
- IdleJobs = BigIntCol(default=None)
-
-
-
-
-class Submission(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('SubmissionStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('SubmissionStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- scheduler = ForeignKey('Scheduler', cascade='null', default=None)
- Name = StringCol(default=None)
- Owner = StringCol(default=None)
-
-
-class SubmissionStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- submission = ForeignKey('Submission', cascade='null', default=None)
- Idle = BigIntCol(default=None)
- Running = BigIntCol(default=None)
- Removed = BigIntCol(default=None)
- Completed = BigIntCol(default=None)
- Held = BigIntCol(default=None)
-
-
-
-
-class Acl(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('AclStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('AclStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- broker = ForeignKey('Broker', cascade='null', default=None)
- policyFile = StringCol(default=None)
- enforcingAcl = BoolCol(default=None)
- transferAcl = BoolCol(default=None)
- lastAclLoad = TimestampCol(default=None)
-
-
- def reloadACLFile(self, callback):
- """Reload the ACL file"""
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
-
- agent.call_method(self, "reloadACLFile", callback, args)
-
-class AclStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- acl = ForeignKey('Acl', cascade='null', default=None)
- aclDenyCount = BigIntCol(default=None)
-
-
-
-
-class Cluster(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('ClusterStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('ClusterStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- broker = ForeignKey('Broker', cascade='null', default=None)
- clusterName = StringCol(default=None)
- clusterID = StringCol(default=None)
- memberID = StringCol(default=None)
- publishedURL = StringCol(default=None)
- clusterSize = IntCol(default=None)
- status = StringCol(default=None)
- members = StringCol(default=None)
- memberIDs = StringCol(default=None)
-
-
- def stopClusterNode(self, callback, brokerId):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if brokerId is not None:
- args.append(brokerId)
-
- agent.call_method(self, "stopClusterNode", callback, args)
-
- def stopFullCluster(self, callback):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
-
- agent.call_method(self, "stopFullCluster", callback, args)
-
-class ClusterStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- cluster = ForeignKey('Cluster', cascade='null', default=None)
-
-
-
-
-class Store(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('StoreStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('StoreStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- broker = ForeignKey('Broker', cascade='null', default=None)
- location = StringCol(default=None)
- defaultInitialFileCount = IntCol(default=None)
- defaultDataFileSize = BigIntCol(default=None)
- tplIsInitialized = BoolCol(default=None)
- tplDirectory = StringCol(default=None)
- tplWritePageSize = BigIntCol(default=None)
- tplWritePages = BigIntCol(default=None)
- tplInitialFileCount = IntCol(default=None)
- tplDataFileSize = BigIntCol(default=None)
- tplCurrentFileCount = BigIntCol(default=None)
-
-
-class StoreStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- store = ForeignKey('Store', cascade='null', default=None)
- tplTransactionDepth = BigIntCol(default=None)
- tplTransactionDepthLow = BigIntCol(default=None)
- tplTransactionDepthHigh = BigIntCol(default=None)
- tplTxnPrepares = BigIntCol(default=None)
- tplTxnCommits = BigIntCol(default=None)
- tplTxnAborts = BigIntCol(default=None)
- tplOutstandingAIOs = BigIntCol(default=None)
- tplOutstandingAIOsLow = BigIntCol(default=None)
- tplOutstandingAIOsHigh = BigIntCol(default=None)
-
-
-
-
-class Journal(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('JournalStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('JournalStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- queue = ForeignKey('Queue', cascade='null', default=None)
- name = StringCol(default=None)
- directory = StringCol(default=None)
- baseFileName = StringCol(default=None)
- writePageSize = BigIntCol(default=None)
- writePages = BigIntCol(default=None)
- readPageSize = BigIntCol(default=None)
- readPages = BigIntCol(default=None)
- initialFileCount = IntCol(default=None)
- autoExpand = BoolCol(default=None)
- currentFileCount = IntCol(default=None)
- maxFileCount = IntCol(default=None)
- dataFileSize = BigIntCol(default=None)
-
-
- def expand(self, callback, by):
- """Increase number of files allocated for this
journal"""
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if by is not None:
- args.append(by)
-
- agent.call_method(self, "expand", callback, args)
-
-class JournalStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- journal = ForeignKey('Journal', cascade='null', default=None)
- recordDepth = BigIntCol(default=None)
- recordDepthLow = BigIntCol(default=None)
- recordDepthHigh = BigIntCol(default=None)
- enqueues = BigIntCol(default=None)
- dequeues = BigIntCol(default=None)
- txn = BigIntCol(default=None)
- txnEnqueues = BigIntCol(default=None)
- txnDequeues = BigIntCol(default=None)
- txnCommits = BigIntCol(default=None)
- txnAborts = BigIntCol(default=None)
- outstandingAIOs = BigIntCol(default=None)
- outstandingAIOsLow = BigIntCol(default=None)
- outstandingAIOsHigh = BigIntCol(default=None)
- freeFileCount = BigIntCol(default=None)
- freeFileCountLow = BigIntCol(default=None)
- freeFileCountHigh = BigIntCol(default=None)
- availableFileCount = BigIntCol(default=None)
- availableFileCountLow = BigIntCol(default=None)
- availableFileCountHigh = BigIntCol(default=None)
- writeWaitFailures = BigIntCol(default=None)
- writeBusyFailures = BigIntCol(default=None)
- readRecordCount = BigIntCol(default=None)
- readBusyFailures = BigIntCol(default=None)
- writePageCacheDepth = BigIntCol(default=None)
- writePageCacheDepthLow = BigIntCol(default=None)
- writePageCacheDepthHigh = BigIntCol(default=None)
- readPageCacheDepth = BigIntCol(default=None)
- readPageCacheDepthLow = BigIntCol(default=None)
- readPageCacheDepthHigh = BigIntCol(default=None)
-
-
-
-
-class System(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('SystemStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('SystemStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- systemId = BLOBCol(default=None)
- osName = StringCol(default=None)
- nodeName = StringCol(default=None)
- release = StringCol(default=None)
- version = StringCol(default=None)
- machine = StringCol(default=None)
-
-
-class SystemStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- system = ForeignKey('System', cascade='null', default=None)
-
-
-
-
-class Broker(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('BrokerStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('BrokerStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- system = ForeignKey('System', cascade='null', default=None)
- port = BigIntCol(default=None)
- workerThreads = IntCol(default=None)
- maxConns = IntCol(default=None)
- connBacklog = IntCol(default=None)
- stagingThreshold = BigIntCol(default=None)
- mgmtPubInterval = IntCol(default=None)
- version = StringCol(default=None)
- dataDir = StringCol(default=None)
-
-
- def echo(self, callback, sequence, body):
- """Request a response to test the path to the management
broker"""
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if sequence is not None:
- args.append(sequence)
- if body is not None:
- args.append(body)
-
- agent.call_method(self, "echo", callback, args)
-
- def connect(self, callback, host, port, durable, authMechanism, username, password, transport):
- """Establish a connection to another broker"""
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if host is not None:
- args.append(host)
- if port is not None:
- args.append(port)
- if durable is not None:
- args.append(durable)
- if authMechanism is not None:
- args.append(authMechanism)
- if username is not None:
- args.append(username)
- if password is not None:
- args.append(password)
- if transport is not None:
- args.append(transport)
-
- agent.call_method(self, "connect", callback, args)
-
- def queueMoveMessages(self, callback, srcQueue, destQueue, qty):
- """Move messages from one queue to another"""
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if srcQueue is not None:
- args.append(srcQueue)
- if destQueue is not None:
- args.append(destQueue)
- if qty is not None:
- args.append(qty)
-
- agent.call_method(self, "queueMoveMessages", callback, args)
-
-class BrokerStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- broker = ForeignKey('Broker', cascade='null', default=None)
- uptime = BigIntCol(default=None)
-
-
-
-
-class Agent(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('AgentStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('AgentStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- clientConnection = ForeignKey('ClientConnection', cascade='null', default=None)
- label = StringCol(default=None)
- broker = ForeignKey('Broker', cascade='null', default=None)
- systemId = BLOBCol(default=None)
- brokerBank = BigIntCol(default=None)
- agentBank = BigIntCol(default=None)
-
-
-class AgentStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- agent = ForeignKey('Agent', cascade='null', default=None)
-
-
-
-
-class Vhost(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('VhostStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('VhostStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- broker = ForeignKey('Broker', cascade='null', default=None)
- name = StringCol(default=None)
- federationTag = StringCol(default=None)
-
-
-class VhostStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- vhost = ForeignKey('Vhost', cascade='null', default=None)
-
-
-
-
-class Queue(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('QueueStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('QueueStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- vhost = ForeignKey('Vhost', cascade='null', default=None)
- name = StringCol(default=None)
- durable = BoolCol(default=None)
- autoDelete = BoolCol(default=None)
- exclusive = BoolCol(default=None)
- arguments = PickleCol(default=None)
- exchange = ForeignKey('Exchange', cascade='null', default=None)
-
-
- def purge(self, callback, request):
- """Discard all or some messages on a queue"""
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if request is not None:
- args.append(request)
-
- agent.call_method(self, "purge", callback, args)
-
-class QueueStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- queue = ForeignKey('Queue', cascade='null', default=None)
- msgTotalEnqueues = BigIntCol(default=None)
- msgTotalDequeues = BigIntCol(default=None)
- msgTxnEnqueues = BigIntCol(default=None)
- msgTxnDequeues = BigIntCol(default=None)
- msgPersistEnqueues = BigIntCol(default=None)
- msgPersistDequeues = BigIntCol(default=None)
- msgDepth = BigIntCol(default=None)
- byteDepth = BigIntCol(default=None)
- byteTotalEnqueues = BigIntCol(default=None)
- byteTotalDequeues = BigIntCol(default=None)
- byteTxnEnqueues = BigIntCol(default=None)
- byteTxnDequeues = BigIntCol(default=None)
- bytePersistEnqueues = BigIntCol(default=None)
- bytePersistDequeues = BigIntCol(default=None)
- consumerCount = BigIntCol(default=None)
- consumerCountLow = BigIntCol(default=None)
- consumerCountHigh = BigIntCol(default=None)
- bindingCount = BigIntCol(default=None)
- bindingCountLow = BigIntCol(default=None)
- bindingCountHigh = BigIntCol(default=None)
- unackedMessages = BigIntCol(default=None)
- unackedMessagesLow = BigIntCol(default=None)
- unackedMessagesHigh = BigIntCol(default=None)
- messageLatencyMin = BigIntCol(default=None)
- messageLatencyMax = BigIntCol(default=None)
- messageLatencyAverage = BigIntCol(default=None)
- messageLatencySamples = BigIntCol(default=None)
-
-
-
-
-class Exchange(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('ExchangeStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('ExchangeStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- vhost = ForeignKey('Vhost', cascade='null', default=None)
- name = StringCol(default=None)
- type = StringCol(default=None)
- durable = BoolCol(default=None)
- autoDelete = BoolCol(default=None)
- exchange = ForeignKey('Exchange', cascade='null', default=None)
- arguments = PickleCol(default=None)
-
-
-class ExchangeStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- exchange = ForeignKey('Exchange', cascade='null', default=None)
- producerCount = BigIntCol(default=None)
- producerCountLow = BigIntCol(default=None)
- producerCountHigh = BigIntCol(default=None)
- bindingCount = BigIntCol(default=None)
- bindingCountLow = BigIntCol(default=None)
- bindingCountHigh = BigIntCol(default=None)
- msgReceives = BigIntCol(default=None)
- msgDrops = BigIntCol(default=None)
- msgRoutes = BigIntCol(default=None)
- byteReceives = BigIntCol(default=None)
- byteDrops = BigIntCol(default=None)
- byteRoutes = BigIntCol(default=None)
-
-
-
-
-class Binding(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('BindingStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('BindingStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- exchange = ForeignKey('Exchange', cascade='null', default=None)
- queue = ForeignKey('Queue', cascade='null', default=None)
- bindingKey = StringCol(default=None)
- arguments = PickleCol(default=None)
- origin = StringCol(default=None)
-
-
-class BindingStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- binding = ForeignKey('Binding', cascade='null', default=None)
- msgMatched = BigIntCol(default=None)
-
-
-
-
-class Subscription(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('SubscriptionStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('SubscriptionStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- session = ForeignKey('Session', cascade='null', default=None)
- queue = ForeignKey('Queue', cascade='null', default=None)
- name = StringCol(default=None)
- browsing = BoolCol(default=None)
- acknowledged = BoolCol(default=None)
- exclusive = BoolCol(default=None)
- creditMode = StringCol(default=None)
- arguments = PickleCol(default=None)
-
-
-class SubscriptionStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- subscription = ForeignKey('Subscription', cascade='null', default=None)
- delivered = BigIntCol(default=None)
-
-
-
-
-class ClientConnection(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('ClientConnectionStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('ClientConnectionStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- vhost = ForeignKey('Vhost', cascade='null', default=None)
- address = StringCol(default=None)
- incoming = BoolCol(default=None)
- SystemConnection = BoolCol(default=None)
- federationLink = BoolCol(default=None)
- authIdentity = StringCol(default=None)
- remoteProcessName = StringCol(default=None)
- remotePid = BigIntCol(default=None)
- remoteParentPid = BigIntCol(default=None)
-
-
- def close(self, callback):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
-
- agent.call_method(self, "close", callback, args)
-
-class ClientConnectionStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- clientConnection = ForeignKey('ClientConnection', cascade='null', default=None)
- closing = BoolCol(default=None)
- framesFromClient = BigIntCol(default=None)
- framesToClient = BigIntCol(default=None)
- bytesFromClient = BigIntCol(default=None)
- bytesToClient = BigIntCol(default=None)
-
-
-
-
-class Link(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('LinkStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('LinkStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- vhost = ForeignKey('Vhost', cascade='null', default=None)
- host = StringCol(default=None)
- port = BigIntCol(default=None)
- transport = StringCol(default=None)
- durable = BoolCol(default=None)
-
-
- def close(self, callback):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
-
- agent.call_method(self, "close", callback, args)
-
- def bridge(self, callback, durable, src, dest, key, tag, excludes, srcIsQueue, srcIsLocal, dynamic, sync):
- """Bridge messages over the link"""
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
- if durable is not None:
- args.append(durable)
- if src is not None:
- args.append(src)
- if dest is not None:
- args.append(dest)
- if key is not None:
- args.append(key)
- if tag is not None:
- args.append(tag)
- if excludes is not None:
- args.append(excludes)
- if srcIsQueue is not None:
- args.append(srcIsQueue)
- if srcIsLocal is not None:
- args.append(srcIsLocal)
- if dynamic is not None:
- args.append(dynamic)
- if sync is not None:
- args.append(sync)
-
- agent.call_method(self, "bridge", callback, args)
-
-class LinkStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- link = ForeignKey('Link', cascade='null', default=None)
- state = StringCol(default=None)
- lastError = StringCol(default=None)
-
-
-
-
-class Bridge(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('BridgeStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('BridgeStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- link = ForeignKey('Link', cascade='null', default=None)
- channelId = IntCol(default=None)
- durable = BoolCol(default=None)
- src = StringCol(default=None)
- dest = StringCol(default=None)
- key = StringCol(default=None)
- srcIsQueue = BoolCol(default=None)
- srcIsLocal = BoolCol(default=None)
- tag = StringCol(default=None)
- excludes = StringCol(default=None)
- dynamic = BoolCol(default=None)
- syncRsv = IntCol(default=None)
-
-
- def close(self, callback):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
-
- agent.call_method(self, "close", callback, args)
-
-class BridgeStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- bridge = ForeignKey('Bridge', cascade='null', default=None)
-
-
-
-
-class Session(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('SessionStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('SessionStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- vhost = ForeignKey('Vhost', cascade='null', default=None)
- name = StringCol(default=None)
- channelId = IntCol(default=None)
- clientConnection = ForeignKey('ClientConnection', cascade='null', default=None)
- detachedLifespan = BigIntCol(default=None)
- attached = BoolCol(default=None)
- expireTime = TimestampCol(default=None)
- maxClientRate = BigIntCol(default=None)
-
-
- def solicitAck(self, callback):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
-
- agent.call_method(self, "solicitAck", callback, args)
-
- def detach(self, callback):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
-
- agent.call_method(self, "detach", callback, args)
-
- def resetLifespan(self, callback):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
-
- agent.call_method(self, "resetLifespan", callback, args)
-
- def close(self, callback):
- try:
- agent = model.agents[self.qmfAgentId]
- except KeyError:
- raise Exception("Agent not found")
-
- args = list()
-
-
- agent.call_method(self, "close", callback, args)
-
-class SessionStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- session = ForeignKey('Session', cascade='null', default=None)
- framesOutstanding = BigIntCol(default=None)
- TxnStarts = BigIntCol(default=None)
- TxnCommits = BigIntCol(default=None)
- TxnRejects = BigIntCol(default=None)
- TxnCount = BigIntCol(default=None)
- clientCredit = BigIntCol(default=None)
-
-
-
-
-class Sysimage(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfAgentId = StringCol(notNone=True, default=None)
- qmfObjectId = StringCol(notNone=True, default=None)
- qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId, unique=True)
- qmfClassKey = StringCol(notNone=True, default=None)
- qmfPersistent = BoolCol(notNone=True, default=None)
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- qmfCreateTime = TimestampCol(notNone=True, default=None)
- qmfDeleteTime = TimestampCol(default=None)
- statsCurr = ForeignKey('SysimageStats', cascade='null', default=None)
- statsCurrIndex = DatabaseIndex(statsCurr)
- statsPrev = ForeignKey('SysimageStats', cascade='null', default=None)
- statsPrevIndex = DatabaseIndex(statsPrev)
- uuid = BLOBCol(default=None)
- osName = StringCol(default=None)
- nodeName = StringCol(default=None)
- release = StringCol(default=None)
- version = StringCol(default=None)
- machine = StringCol(default=None)
- distro = StringCol(default=None)
- memTotal = BigIntCol(default=None)
- swapTotal = BigIntCol(default=None)
-
-
-class SysimageStats(SQLObject):
- class sqlmeta:
- lazyUpdate = True
- qmfUpdateTime = TimestampCol(notNone=True, default=None)
- sysimage = ForeignKey('Sysimage', cascade='null', default=None)
- memFree = BigIntCol(default=None)
- swapFree = BigIntCol(default=None)
- loadAverage1Min = FloatCol(default=None)
- loadAverage5Min = FloatCol(default=None)
- loadAverage10Min = FloatCol(default=None)
- procTotal = BigIntCol(default=None)
- procRunning = BigIntCol(default=None)
-
-
-
-
-classToSchemaNameMap = dict()
-schemaNameToClassMap = dict()
-schemaReservedWordsMap = {"in": "inRsv", "In": "InRsv",
- "connection": "clientConnection", "Connection": "ClientConnection",
- "connectionRef": "clientConnectionRef",
- "user": "gridUser", "User": "GridUser",
- "registeredTo": "broker",
- "sync": "syncRsv"}
-
-classToSchemaNameMap['Slot'] = 'Slot'
-schemaNameToClassMap['Slot'] = Slot
-
-Slot.sqlmeta.addJoin(SQLMultipleJoin('SlotStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Scheduler'] = 'Scheduler'
-schemaNameToClassMap['Scheduler'] = Scheduler
-
-Scheduler.sqlmeta.addJoin(SQLMultipleJoin('SchedulerStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Submitter'] = 'Submitter'
-schemaNameToClassMap['Submitter'] = Submitter
-
-Scheduler.sqlmeta.addJoin(SQLMultipleJoin('Submitter', joinMethodName='submitters'))
-
-
-Submitter.sqlmeta.addJoin(SQLMultipleJoin('SubmitterStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Negotiator'] = 'Negotiator'
-schemaNameToClassMap['Negotiator'] = Negotiator
-
-Negotiator.sqlmeta.addJoin(SQLMultipleJoin('NegotiatorStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Collector'] = 'Collector'
-schemaNameToClassMap['Collector'] = Collector
-
-Collector.sqlmeta.addJoin(SQLMultipleJoin('CollectorStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Master'] = 'Master'
-schemaNameToClassMap['Master'] = Master
-
-Master.sqlmeta.addJoin(SQLMultipleJoin('MasterStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Grid'] = 'Grid'
-schemaNameToClassMap['Grid'] = Grid
-
-Grid.sqlmeta.addJoin(SQLMultipleJoin('GridStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Submission'] = 'Submission'
-schemaNameToClassMap['Submission'] = Submission
-
-Scheduler.sqlmeta.addJoin(SQLMultipleJoin('Submission', joinMethodName='submissions'))
-
-
-Submission.sqlmeta.addJoin(SQLMultipleJoin('SubmissionStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Acl'] = 'Acl'
-schemaNameToClassMap['Acl'] = Acl
-
-Broker.sqlmeta.addJoin(SQLMultipleJoin('Acl', joinMethodName='acls'))
-
-
-Acl.sqlmeta.addJoin(SQLMultipleJoin('AclStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Cluster'] = 'Cluster'
-schemaNameToClassMap['Cluster'] = Cluster
-
-Broker.sqlmeta.addJoin(SQLMultipleJoin('Cluster', joinMethodName='clusters'))
-
-
-Cluster.sqlmeta.addJoin(SQLMultipleJoin('ClusterStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Store'] = 'Store'
-schemaNameToClassMap['Store'] = Store
-
-Broker.sqlmeta.addJoin(SQLMultipleJoin('Store', joinMethodName='stores'))
-
-
-Store.sqlmeta.addJoin(SQLMultipleJoin('StoreStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Journal'] = 'Journal'
-schemaNameToClassMap['Journal'] = Journal
-
-Queue.sqlmeta.addJoin(SQLMultipleJoin('Journal', joinMethodName='journals'))
-
-
-Journal.sqlmeta.addJoin(SQLMultipleJoin('JournalStats', joinMethodName='stats'))
-
-classToSchemaNameMap['System'] = 'System'
-schemaNameToClassMap['System'] = System
-
-System.sqlmeta.addJoin(SQLMultipleJoin('SystemStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Broker'] = 'Broker'
-schemaNameToClassMap['Broker'] = Broker
-
-System.sqlmeta.addJoin(SQLMultipleJoin('Broker', joinMethodName='brokers'))
-
-
-Broker.sqlmeta.addJoin(SQLMultipleJoin('BrokerStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Agent'] = 'Agent'
-schemaNameToClassMap['Agent'] = Agent
-
-ClientConnection.sqlmeta.addJoin(SQLMultipleJoin('Agent', joinMethodName='agents'))
-
-Broker.sqlmeta.addJoin(SQLMultipleJoin('Agent', joinMethodName='agents'))
-
-
-Agent.sqlmeta.addJoin(SQLMultipleJoin('AgentStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Vhost'] = 'Vhost'
-schemaNameToClassMap['Vhost'] = Vhost
-
-Broker.sqlmeta.addJoin(SQLMultipleJoin('Vhost', joinMethodName='vhosts'))
-
-
-Vhost.sqlmeta.addJoin(SQLMultipleJoin('VhostStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Queue'] = 'Queue'
-schemaNameToClassMap['Queue'] = Queue
-
-Vhost.sqlmeta.addJoin(SQLMultipleJoin('Queue', joinMethodName='queues'))
-
-Exchange.sqlmeta.addJoin(SQLMultipleJoin('Queue', joinMethodName='queues'))
-
-
-Queue.sqlmeta.addJoin(SQLMultipleJoin('QueueStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Exchange'] = 'Exchange'
-schemaNameToClassMap['Exchange'] = Exchange
-
-Vhost.sqlmeta.addJoin(SQLMultipleJoin('Exchange', joinMethodName='exchanges'))
-
-Exchange.sqlmeta.addJoin(SQLMultipleJoin('Exchange', joinMethodName='exchanges'))
-
-
-Exchange.sqlmeta.addJoin(SQLMultipleJoin('ExchangeStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Binding'] = 'Binding'
-schemaNameToClassMap['Binding'] = Binding
-
-Exchange.sqlmeta.addJoin(SQLMultipleJoin('Binding', joinMethodName='bindings'))
-
-Queue.sqlmeta.addJoin(SQLMultipleJoin('Binding', joinMethodName='bindings'))
-
-
-Binding.sqlmeta.addJoin(SQLMultipleJoin('BindingStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Subscription'] = 'Subscription'
-schemaNameToClassMap['Subscription'] = Subscription
-
-Session.sqlmeta.addJoin(SQLMultipleJoin('Subscription', joinMethodName='subscriptions'))
-
-Queue.sqlmeta.addJoin(SQLMultipleJoin('Subscription', joinMethodName='subscriptions'))
-
-
-Subscription.sqlmeta.addJoin(SQLMultipleJoin('SubscriptionStats', joinMethodName='stats'))
-
-classToSchemaNameMap['ClientConnection'] = 'ClientConnection'
-schemaNameToClassMap['ClientConnection'] = ClientConnection
-
-Vhost.sqlmeta.addJoin(SQLMultipleJoin('ClientConnection', joinMethodName='clientConnections'))
-
-
-ClientConnection.sqlmeta.addJoin(SQLMultipleJoin('ClientConnectionStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Link'] = 'Link'
-schemaNameToClassMap['Link'] = Link
-
-Vhost.sqlmeta.addJoin(SQLMultipleJoin('Link', joinMethodName='links'))
-
-
-Link.sqlmeta.addJoin(SQLMultipleJoin('LinkStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Bridge'] = 'Bridge'
-schemaNameToClassMap['Bridge'] = Bridge
-
-Link.sqlmeta.addJoin(SQLMultipleJoin('Bridge', joinMethodName='bridges'))
-
-
-Bridge.sqlmeta.addJoin(SQLMultipleJoin('BridgeStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Session'] = 'Session'
-schemaNameToClassMap['Session'] = Session
-
-Vhost.sqlmeta.addJoin(SQLMultipleJoin('Session', joinMethodName='sessions'))
-
-ClientConnection.sqlmeta.addJoin(SQLMultipleJoin('Session', joinMethodName='sessions'))
-
-
-Session.sqlmeta.addJoin(SQLMultipleJoin('SessionStats', joinMethodName='stats'))
-
-classToSchemaNameMap['Sysimage'] = 'Sysimage'
-schemaNameToClassMap['Sysimage'] = Sysimage
-
-Sysimage.sqlmeta.addJoin(SQLMultipleJoin('SysimageStats', joinMethodName='stats'))
-
-
-entityClasses = ['Slot', 'Scheduler', 'Submitter', 'Negotiator', 'Collector', 'Master', 'Grid', 'Submission', 'Acl', 'Cluster', 'Store', 'Journal', 'System', 'Broker', 'Agent', 'Vhost', 'Queue', 'Exchange', 'Binding', 'Subscription', 'ClientConnection', 'Link', 'Bridge', 'Session', 'Sysimage']
-
-statsClasses = ['SlotStats', 'SchedulerStats', 'SubmitterStats', 'NegotiatorStats', 'CollectorStats', 'MasterStats', 'GridStats', 'SubmissionStats', 'AclStats', 'ClusterStats', 'StoreStats', 'JournalStats', 'SystemStats', 'BrokerStats', 'AgentStats', 'VhostStats', 'QueueStats', 'ExchangeStats', 'BindingStats', 'SubscriptionStats', 'ClientConnectionStats', 'LinkStats', 'BridgeStats', 'SessionStats', 'SysimageStats']
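
The deleted schema module above was the generated SQLObject layer; the rest of mint (for example sql.py later in this diff) looked classes up through these maps and walked the generated stats joins. A minimal sketch of that usage, where the connection URI is an illustrative assumption:

    from sqlobject import connectionForURI, sqlhub
    from mint import schema

    # Connection URI is an example, not part of this commit.
    sqlhub.processConnection = connectionForURI("postgres://cumin@localhost/cumin")

    # Resolve a QMF schema class name to its generated SQLObject class.
    cls = schema.schemaNameToClassMap["Queue"]

    for queue in cls.selectBy(qmfPersistent=True):
        print queue.qmfObjectId, schema.classToSchemaNameMap[cls.__name__]

        # 'stats' is the SQLMultipleJoin added above for each entity class.
        for sample in queue.stats:
            print " ", sample.qmfUpdateTime
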
Deleted: mgmt/newdata/mint/python/mint/schemalocal.py
===================================================================
--- mgmt/newdata/mint/python/mint/schemalocal.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/python/mint/schemalocal.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,76 +0,0 @@
-from sqlobject import *
-
-from mint.util import *
-
-class Subject(SQLObject):
- class sqlmeta:
- lazyUpdate = True
-
- name = StringCol(unique=True, notNone=True)
- password = StringCol()
- lastChallenged = TimestampCol(default=None)
- lastLoggedIn = TimestampCol(default=None)
- lastLoggedOut = TimestampCol(default=None)
- roles = SQLRelatedJoin("Role",
- intermediateTable="subject_role_mapping",
- createRelatedTable=False)
-
- def getByName(cls, name):
- try:
- return Subject.selectBy(name=name)[0]
- except IndexError:
- pass
-
- getByName = classmethod(getByName)
-
-class Role(SQLObject):
- class sqlmeta:
- lazyUpdate = True
-
- name = StringCol(unique=True, notNone=True)
- subjects = SQLRelatedJoin("Subject",
- intermediateTable="subject_role_mapping",
- createRelatedTable=False)
-
- def getByName(cls, name):
- try:
- return Role.selectBy(name=name)[0]
- except IndexError:
- pass
-
- getByName = classmethod(getByName)
-
-class SubjectRoleMapping(SQLObject):
- class sqlmeta:
- lazyUpdate = True
-
- subject = ForeignKey("Subject", notNull=True, cascade=True)
- role = ForeignKey("Role", notNull=True, cascade=True)
- unique = DatabaseIndex(subject, role, unique=True)
-
-class ObjectNotFound(Exception):
- pass
-
-class MintInfo(SQLObject):
- class sqlmeta:
- lazyUpdate = True
-
- version = StringCol(default="0.1", notNone=True)
-
-class BrokerGroup(SQLObject):
- class sqlmeta:
- lazyUpdate = True
-
- name = StringCol(unique=True, notNone=True)
- brokers = SQLRelatedJoin("Broker",
- intermediateTable="broker_group_mapping",
- createRelatedTable=False)
-
-class BrokerGroupMapping(SQLObject):
- class sqlmeta:
- lazyUpdate = True
-
- broker = ForeignKey("Broker", notNull=True, cascade=True)
- brokerGroup = ForeignKey("BrokerGroup", notNull=True, cascade=True)
- unique = DatabaseIndex(broker, brokerGroup, unique=True)
-
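
For reference, a rough sketch of how the removed Subject/Role tables were typically queried; the module path and connection URI are assumptions:

    from sqlobject import connectionForURI, sqlhub
    from mint.schemalocal import Subject

    sqlhub.processConnection = connectionForURI("postgres://cumin@localhost/cumin")

    subject = Subject.getByName("guest")   # returns None when no such user exists

    if subject is not None:
        roles = ", ".join([role.name for role in subject.roles])
        print "%s: %s" % (subject.name, roles)
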
Deleted: mgmt/newdata/mint/python/mint/schemaparser.py
===================================================================
--- mgmt/newdata/mint/python/mint/schemaparser.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/python/mint/schemaparser.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,298 +0,0 @@
-import mllib
-from sqlobject import *
-
-class SchemaParser:
- """parses broker XML schema"""
-
- def __init__(self, pythonFilePath, sqlTriggersFilePath, xmlFilePaths):
- self.pythonFilePath = pythonFilePath
- self.sqlTriggersFilePath = sqlTriggersFilePath
- self.xmlFilePaths = xmlFilePaths
- self.style = MixedCaseUnderscoreStyle()
- self.additionalPythonOutput = ""
- self.currentClass = ""
- self.pythonOutput = ""
- self.finalPythonOutput = ""
- self.sqlTriggersOutput = ""
- self.entityClasses = []
- self.statsClasses = []
- self.groups = dict()
- # mapping between xml schema types and database column types
- # see xml/MintTypes.xml
- self.dataTypesMap = dict()
- self.dataTypesMap["objId"] = "ForeignKey"
- self.dataTypesMap["uuid"] = "BLOBCol"
- self.dataTypesMap["int32"] = "IntCol"
- self.dataTypesMap["uint8"] = self.dataTypesMap["hilo8"] =
self.dataTypesMap["count8"] = self.dataTypesMap["mma8"] =
"SmallIntCol"
- self.dataTypesMap["hilo16"] = self.dataTypesMap["count16"] =
self.dataTypesMap["mma16"] = "SmallIntCol"
- self.dataTypesMap["uint16"] = "IntCol"
- self.dataTypesMap["hilo32"] = self.dataTypesMap["count32"] =
self.dataTypesMap["mma32"] = "BigIntCol"
- self.dataTypesMap["uint32"] = "BigIntCol"
- self.dataTypesMap["uint64"] = self.dataTypesMap["hilo64"] =
self.dataTypesMap["count64"] = self.dataTypesMap["mma64"] =
self.dataTypesMap["mmaTime"] = "BigIntCol"
- self.dataTypesMap["float"] = self.dataTypesMap["double"] =
"FloatCol"
- self.dataTypesMap["absTime"] = "TimestampCol"
- self.dataTypesMap["deltaTime"] = "BigIntCol"
- self.dataTypesMap["bool"] = "BoolCol"
- self.dataTypesMap["sstr"] = self.dataTypesMap["lstr"] =
"StringCol"
- self.dataTypesMap["map"] = "PickleCol"
- # mapping for identifiers in the XML schema that are reserved words in either SQL or
Python
- self.reservedWords = {"in": "inRsv", "In":
"InRsv",
- "connection": "clientConnection",
"Connection": "ClientConnection",
- "connectionRef": "clientConnectionRef",
- "user": "gridUser", "User":
"GridUser",
- "registeredTo": "broker",
- "sync": "syncRsv"}
-
- def renameReservedWord(self, name):
- if (name in self.reservedWords.keys()):
- print "Notice: %s is a reserved word, automatically translating to %s" %
(name, self.reservedWords[name])
- return self.reservedWords[name]
- else:
- return name
-
- def attrNameFromDbColumn(self, name, removeSuffix=""):
- return self.style.dbColumnToPythonAttr(name.replace(removeSuffix, ""))
-
- def generateAttrib(self, attribName, attribType, params=""):
- if (params.find("default") < 0):
- if (params == ""):
- params = "default=None"
- else:
- params += ", default=None"
- if attribName == "id":
- attribName = "id_"
- # special case for "port" attrib, needs to be a 2-byte unsigned
- # but uint16 converts to a signed int (SmallIntCol), so forcing the next size up (IntCol)
- if (attribName == "port" and attribType == self.dataTypesMap["uint16"]):
- attribType = self.dataTypesMap["uint32"]
- self.pythonOutput += " %s = %s(%s)\n" % (attribName, attribType, params)
-
- def generateTimestampAttrib(self, col, args=""):
- self.generateAttrib("qmf" + col + "Time",
"TimestampCol", args) #, "default=datetime.min")
-
- def generateForeignKeyAttrib(self, name, reference):
- params = "'%s', cascade='null'" % (reference)
- name = self.renameReservedWord(name)
- self.generateAttrib(name, "ForeignKey", params)
-
- def generateForeignKeyAttribWithIndex(self, name, reference):
- self.generateForeignKeyAttrib(name, reference)
- name = self.renameReservedWord(name)
- self.pythonOutput += " %sIndex = DatabaseIndex(%s)\n" % (name, name)
-
- def generateHiLoAttrib(self, name, type):
- self.generateAttrib(name, type)
- self.generateAttrib(name + "Low", type)
- self.generateAttrib(name + "High", type)
-
- def generateMinMaxAvgAttrib(self, name, type):
- self.generateAttrib(name + "Min", type)
- self.generateAttrib(name + "Max", type)
- self.generateAttrib(name + "Average", type)
- self.generateAttrib(name + "Samples", type)
-
- def generateMultipleJoin(self, tableFrom, tableTo, attrib=""):
- if (attrib == ""):
- attrib = tableTo[0].lower() + tableTo[1:] + "s"
- self.additionalPythonOutput += "\n%s.sqlmeta.addJoin(SQLMultipleJoin('%s', joinMethodName='%s'))\n" % (tableFrom, tableTo, attrib)
-
- def generateLazyUpdate(self, lazyUpdate=True):
- self.pythonOutput += " class sqlmeta:\n"
- self.pythonOutput += " lazyUpdate = %s\n" % lazyUpdate
-
- def generateQmfIdsIndex(self):
- self.generateAttrib("qmfAgentId", "StringCol",
"notNone=True")
- self.generateAttrib("qmfObjectId", "StringCol",
"notNone=True")
- self.pythonOutput += " qmfIdsUnique = DatabaseIndex(qmfAgentId, qmfObjectId,
unique=True)\n"
- self.generateAttrib("qmfClassKey", "StringCol",
"notNone=True")
- self.generateAttrib("qmfPersistent", "BoolCol",
"notNone=True")
-
- def generateClassAttribs(self, schemaName, elements):
- if (schemaName == "JournalStats"):
- print schemaName
- for elem in elements:
- elemName = self.renameReservedWord(elem["@name"])
- if (elem["@type"] == "objId"):
- reference = elem["@references"]
- if not reference:
- raise Exception("Attribute of objId type is missing references
value")
- #XXX: TO-DO: properly handle namespaces
- # handle cases where the referenced class is in a different namespace (ie, contains a "." or a ":");
- # for now, discard namespace
- namespaceIndex = max(reference.rfind("."), reference.rfind(":"))
- if (namespaceIndex > 0):
- reference = reference[namespaceIndex + 1:]
- reference = self.style.dbTableToPythonClass(reference)
- reference = self.renameReservedWord(reference)
- attrib = reference[0].lower() + reference[1:]
- self.generateForeignKeyAttrib(attrib, reference)
- self.generateMultipleJoin(reference, self.currentClass)
- elif (elem["(a)type"].startswith("hilo")):
- self.generateHiLoAttrib(self.attrNameFromDbColumn(elemName),
self.dataTypesMap[elem["@type"]])
- elif (elem["(a)type"].startswith("mma")):
- self.generateMinMaxAvgAttrib(self.attrNameFromDbColumn(elemName),
self.dataTypesMap[elem["@type"]])
- else:
- self.generateAttrib(self.attrNameFromDbColumn(elemName),
self.dataTypesMap[elem["@type"]])
- self.pythonOutput += "\n"
-
- def startClass(self, schemaName, stats=False):
- schemaName = self.renameReservedWord(schemaName)
- if (stats):
- origPythonName = self.style.dbTableToPythonClass(schemaName)
- pythonName = self.style.dbTableToPythonClass(schemaName + "_stats")
- colPythonName = self.style.dbColumnToPythonAttr(schemaName)
- keyPythonName = self.style.dbTableToPythonClass(schemaName)
- sqlTable = self.style.pythonClassToDBTable(pythonName)
- sqlParentTable = self.style.pythonClassToDBTable(self.style.dbTableToPythonClass(schemaName))
- self.sqlTriggersOutput += self.sqlTriggerFunction % (sqlParentTable, sqlParentTable, sqlParentTable)
- self.sqlTriggersOutput += "\n"
- self.sqlTriggersOutput += "CREATE TRIGGER update_%s_stats AFTER INSERT ON %s \n" % (sqlParentTable, sqlTable)
- self.sqlTriggersOutput += " FOR EACH ROW EXECUTE PROCEDURE update_%s_stats(); \n\n" % (sqlParentTable)
- self.sqlTriggersOutput += "CREATE INDEX %s_update_time ON %s (qmf_update_time);\n\n" % (sqlTable, sqlTable)
- else:
- pythonName = self.style.dbTableToPythonClass(schemaName)
- statsPythonName = self.style.dbTableToPythonClass(schemaName + "_stats")
- self.currentClass = pythonName
- self.pythonOutput += "\nclass %s(SQLObject):\n" % (pythonName)
- self.generateLazyUpdate()
- if (stats):
- self.statsClasses.append(str(pythonName))
- self.generateTimestampAttrib("Update", "notNone=True")
- self.generateForeignKeyAttrib(colPythonName[0].lower() + colPythonName[1:], keyPythonName)
- self.generateMultipleJoin(origPythonName, pythonName, "stats")
- else:
- self.entityClasses.append(str(pythonName))
- self.generateQmfIdsIndex()
- self.generateTimestampAttrib("Update", "notNone=True")
- self.generateTimestampAttrib("Create", "notNone=True")
- self.generateTimestampAttrib("Delete")
- self.generateForeignKeyAttribWithIndex("statsCurr", statsPythonName)
- self.generateForeignKeyAttribWithIndex("statsPrev", statsPythonName)
- self.finalPythonOutput += "classToSchemaNameMap['%s'] =
'%s'\n" % (pythonName, schemaName)
- self.finalPythonOutput += "schemaNameToClassMap['%s'] = %s\n" %
(schemaName, pythonName)
-
- def generateMethod(self, elem):
- if (elem["@desc"] != None):
- comment = ' """' + elem["@desc"] +
'"""\n'
- else:
- comment = ""
- formalArgs = ", "
- actualArgs = " args = list()\n\n"
- for arg in elem.query["arg"]:
- formalArgs += "%s, " % (arg["@name"])
- actualArgs += " if %s is not None:\n" % (arg["@name"])
- actualArgs += " args.append(%s)\n" % (arg["@name"])
-
- if (formalArgs != ", "):
- formalArgs = formalArgs[:-2]
- else:
- formalArgs = ""
- self.pythonOutput += "\n def %s(self, callback%s):\n" %
(elem["@name"], formalArgs)
- self.pythonOutput += comment
- self.pythonOutput += " try:\n"
- self.pythonOutput += " agent = model.agents[self.qmfAgentId]\n"
- self.pythonOutput += " except KeyError:\n"
- self.pythonOutput += " raise Exception(\"Agent not
found\")\n\n"
- self.pythonOutput += actualArgs + "\n"
- self.pythonOutput += " agent.call_method(self, \"%s\", " %
elem["@name"]
- self.pythonOutput += "callback, args)\n"
-
- def endClass(self):
- if (self.additionalPythonOutput != ""):
- self.finalPythonOutput += self.additionalPythonOutput + "\n"
- self.additionalPythonOutput = ""
- if (self.pythonOutput.endswith("(SQLObject):\n")):
- self.pythonOutput += " pass\n"
- self.currentClass = ""
-
- def generateCode(self):
-# self.pythonOutput += "import mint\n\n"
-# self.pythonOutput += "from qmf.console import ObjectId\n\n"
- self.pythonOutput += "from sqlobject import *\n\n"
- self.pythonOutput += "from mint.util import *\n\n"
-
- self.pythonOutput += "model = None\n"
-
- self.finalPythonOutput += "\nclassToSchemaNameMap = dict()\n"
- self.finalPythonOutput += "schemaNameToClassMap = dict()\n"
- self.finalPythonOutput += 'schemaReservedWordsMap = {"in": "inRsv", "In": "InRsv", \n'
- self.finalPythonOutput += ' "connection": "clientConnection", "Connection": "ClientConnection", \n'
- self.finalPythonOutput += ' "connectionRef": "clientConnectionRef", \n'
- self.finalPythonOutput += ' "user": "gridUser", "User": "GridUser", \n'
- self.finalPythonOutput += ' "registeredTo": "broker",\n'
- self.finalPythonOutput += ' "sync": "syncRsv"} \n\n'
-
- # TODO: optimize getting the id to the parent table from new.parent_table_id
- self.sqlTriggersOutput += """
-CREATE OR REPLACE FUNCTION create_plpgsql() RETURNS TEXT AS '
- CREATE LANGUAGE plpgsql;
- SELECT ''plpgsql language created''::TEXT;
-' LANGUAGE sql;
-
-SELECT CASE WHEN
- (SELECT true
- FROM pg_language
- WHERE lanname='plpgsql')
- THEN
- (SELECT 'plpgsql language already installed'::TEXT)
- ELSE
- (SELECT create_plpgsql())
-END;
-
-DROP FUNCTION create_plpgsql();
-
-"""
- self.sqlTriggerFunction = """
-CREATE OR REPLACE FUNCTION update_%s_stats() RETURNS trigger AS '
-BEGIN
- UPDATE %s SET stats_prev_id = stats_curr_id, stats_curr_id = new.id WHERE id = new.%s_id;
- RETURN new;
-END
-' LANGUAGE plpgsql;
-"""
-
- outputFile = open(self.pythonFilePath, "w")
- sqlTriggersFile = open(self.sqlTriggersFilePath, "w")
- for xmlFile in self.xmlFilePaths:
- schema = mllib.xml_parse(xmlFile)
- # parse groups and store their structure as is
- groups = schema.query["schema/group"]
- for grp in groups:
- self.groups[grp["@name"]] = grp.query["property"],
grp.query["statistic"]
-
- # parse class definitions
- classes = schema.query["schema/class"]
- for cls in classes:
- self.startClass(cls["@name"])
- self.generateClassAttribs(cls["@name"],
cls.query["property"])
- # generate properties attribs from any groups included in this class
- for clsGroup in cls.query["group"]:
- self.generateClassAttribs(cls["@name"],
self.groups[clsGroup["@name"]][0])
- for elem in cls.query["method"]:
- self.generateMethod(elem)
- self.endClass()
-
- self.startClass(cls["@name"], stats=True)
- self.generateClassAttribs(cls["@name"],
cls.query["statistic"])
- # generate statistics attribs from any groups included in this class
- for clsGroup in cls.query["group"]:
- self.generateClassAttribs(cls["@name"],
self.groups[clsGroup["@name"]][1])
- self.endClass()
- self.pythonOutput += "\n\n"
- self.finalPythonOutput += "\nentityClasses = %s\n" % (self.entityClasses)
- self.finalPythonOutput += "\nstatsClasses = %s\n" % (self.statsClasses)
- outputFile.write(self.pythonOutput + self.finalPythonOutput)
- outputFile.close()
- sqlTriggersFile.write(self.sqlTriggersOutput)
- sqlTriggersFile.close()
-
-if __name__ == "__main__":
- import sys
-
- if len(sys.argv) < 3:
- print "Usage: schemaparser.py OUTPUT-PYTHON-FILE ",
- print "OUTPUT-SQL-TRIGGERS-FILE INPUT-XML-SCHEMA [INPUT-XML-SCHEMA]*"
- sys.exit(1)
- else:
- parser = SchemaParser(sys.argv[1], sys.argv[2], sys.argv[3:])
- parser.generateCode()
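
The generator above was normally run from the command line as shown in its __main__ block; driving it programmatically looked roughly like this, with the output and schema paths as illustrative assumptions:

    from mint.schemaparser import SchemaParser

    parser = SchemaParser("python/mint/schema.py",   # generated SQLObject classes
                          "sql/triggers.sql",        # generated stats-rollover triggers
                          ["xml/qpid.xml", "xml/condor.xml"])
    parser.generateCode()
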
Added: mgmt/newdata/mint/python/mint/session.py
===================================================================
--- mgmt/newdata/mint/python/mint/session.py (rev 0)
+++ mgmt/newdata/mint/python/mint/session.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -0,0 +1,137 @@
+from model import *
+from util import *
+
+from qmf.console import Console, Session
+
+log = logging.getLogger("mint.session")
+
+class MintSession(object):
+ def __init__(self, app, broker_uri):
+ self.app = app
+ self.broker_uri = broker_uri
+
+ self.qmf_session = None
+ self.qmf_brokers = list()
+
+ def add_broker(self, uri):
+ log.info("Adding QMF broker at %s", uri)
+
+ assert self.qmf_session
+
+ qmf_broker = self.qmf_session.addBroker(uri)
+ self.qmf_brokers.append(qmf_broker)
+
+ def check(self):
+ log.info("Checking %s", self)
+
+ def init(self):
+ log.info("Initializing %s", self)
+
+ def start(self):
+ log.info("Starting %s", self)
+
+ assert self.qmf_session is None
+
+ self.qmf_session = Session(MintConsole(self.app.model),
+ manageConnections=True,
+ rcvObjects=self.app.update_enabled)
+
+ self.add_broker(self.broker_uri)
+
+ def stop(self):
+ log.info("Stopping %s", self)
+
+ for qmf_broker in self.qmf_brokers:
+ self.qmf_session.delBroker(qmf_broker)
+
+ def __repr__(self):
+ return "%s(%s)" % (self.__class__.__name__, self.broker_uri)
+
+class MintConsole(Console):
+ def __init__(self, model):
+ self.model = model
+
+ def brokerConnected(self, qmf_broker):
+ log.info("Broker at %s:%i is connected",
+ qmf_broker.host, qmf_broker.port)
+
+ def brokerInfo(self, qmf_broker):
+ log.info("Broker info from %s", qmf_broker)
+
+ def brokerDisconnected(self, qmf_broker):
+ log.info("Broker at %s:%i is disconnected",
+ qmf_broker.host, qmf_broker.port)
+
+ def newAgent(self, qmf_agent):
+ log.info("Creating %s", qmf_agent)
+
+ MintAgent(self.model, qmf_agent)
+
+ def delAgent(self, qmf_agent):
+ log.info("Deleting %s", qmf_agent)
+
+ try:
+ agent = self.model.get_agent(qmf_agent)
+ except KeyError:
+ return
+
+ agent.delete()
+
+ if self.model.app.update_thread.isAlive():
+ up = AgentDelete(self.model, agent)
+ self.model.app.update_thread.enqueue(up)
+
+ def heartbeat(self, qmf_agent, timestamp):
+ timestamp = timestamp / 1000000000
+
+ try:
+ agent = self.model.get_agent(qmf_agent)
+ except KeyError:
+ return
+
+ agent.last_heartbeat = datetime.fromtimestamp(timestamp)
+
+ def newPackage(self, name):
+ log.info("New package %s", name)
+
+ def newClass(self, kind, classKey):
+ log.info("New class %s", classKey)
+
+ # XXX I want to store class keys using this, but I can't,
+ # because I don't get any agent info; instead
+
+ def objectProps(self, broker, obj):
+ agent = self.model.get_agent(obj._agent)
+
+ if self.model.app.update_thread.isAlive():
+ if obj.getTimestamps()[2]:
+ up = ObjectDelete(self.model, agent, obj)
+ else:
+ up = ObjectUpdate(self.model, agent, obj)
+
+ self.model.app.update_thread.enqueue(up)
+
+ def objectStats(self, broker, obj):
+ print "objectStats!", broker, obj
+
+ agent = self.get_agent(obj._agent)
+
+ if self.model.app.update_thread.isAlive():
+ up = ObjectAddSample(self.model, agent, obj)
+ self.model.app.update_thread.enqueue(up)
+
+ def event(self, broker, event):
+ """ Invoked when an event is raised. """
+ pass
+
+ def methodResponse(self, broker, seq, response):
+ log.info("Method response for request %i received from %s",
+ seq, broker)
+ log.debug("Response: %s", response)
+
+ self.model.lock.acquire()
+ try:
+ callback = self.model.outstanding_method_calls.pop(seq)
+ callback(response.text, response.outArgs)
+ finally:
+ self.model.lock.release()
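
A minimal sketch of how the new MintSession is driven; here "app" stands in for the mint application object, which this code assumes exposes model, update_enabled and update_thread, and the broker URI is only an example:

    from mint.session import MintSession

    session = MintSession(app, "amqp://localhost:5672")   # app provided by mint's main

    session.check()
    session.init()
    session.start()    # creates the qmf.console Session and adds the broker

    # ... run until shutdown ...

    session.stop()     # removes every broker registered on the QMF session
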
Deleted: mgmt/newdata/mint/python/mint/sql.py
===================================================================
--- mgmt/newdata/mint/python/mint/sql.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/python/mint/sql.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,251 +0,0 @@
-import logging, mint
-
-from time import clock
-from sqlobject import MixedCaseUnderscoreStyle
-
-log = logging.getLogger("mint.sql")
-
-dbStyle = MixedCaseUnderscoreStyle()
-profile = None
-
-def transform_table(table):
- try:
- table = mint.schema.schemaReservedWordsMap[table]
- except KeyError:
- pass
-
- table = table[0] + table[1:] # XXX why is this necessary?
- table = dbStyle.pythonClassToDBTable(table)
-
- return table
-
-def transform_column(column):
- return dbStyle.pythonAttrToDBColumn(column)
-
-class SqlOperation(object):
- def __init__(self, name):
- self.name = name
-
- self.time = None
- self.text = None
-
- if profile:
- profile.ops.append(self)
-
- def key(self):
- if hasattr(self, "cls"):
- return "%s(%s)" % (self.name, getattr(self,
"cls").__name__)
- else:
- return self.name
-
- def __repr__(self):
- return self.key()
-
- def generate(self):
- pass
-
- def execute(self, cursor, values=None):
- self.text = self.generate()
-
- try:
- if profile:
- start = clock()
- cursor.execute(self.text, values)
- self.time = clock() - start
- else:
- cursor.execute(self.text, values)
- return cursor.rowcount
- except:
- log.warn("Text: %s", self.text)
-
- if values:
- for item in values.items():
- log.warn(" %-20s %r", *item)
-
- raise
-
-class SqlGetId(SqlOperation):
- def __init__(self, cls):
- super(SqlGetId, self).__init__("get_id")
-
- self.cls = cls
-
- def generate(self):
- table = self.cls.sqlmeta.table
-
- return """
- select id
- from %s
- where qmf_agent_id = %%(qmfAgentId)s and qmf_object_id = %%(qmfObjectId)s
- """ % table
-
-class SqlSetStatsRefs(SqlOperation):
- def __init__(self, cls):
- super(SqlSetStatsRefs, self).__init__("set_stats_refs")
-
- self.cls = cls
-
- def generate(self):
- table = self.cls.sqlmeta.table
-
- return """
- update %s
- set stats_curr_id = %%(statsId)s, stats_prev_id = stats_curr_id
- where id = %%(id)s
- """ % table
-
-class SqlInsert(SqlOperation):
- def __init__(self, cls, attrs):
- super(SqlInsert, self).__init__("insert")
-
- self.cls = cls
- self.attrs = attrs
-
- def generate(self):
- table = self.cls.sqlmeta.table
-
- cols = list()
- vals = list()
-
- for name in self.attrs:
- cols.append(transform_column(name))
- vals.append("%%(%s)s" % name)
-
- colsSql = ", ".join(cols)
- valsSql = ", ".join(vals)
-
- insert = "insert into %s (%s) values (%s)" % (table, colsSql, valsSql)
- select = "select currval('%s_id_seq')" % table
-
- sql = "%s; %s" % (insert, select)
-
- return sql
-
-class SqlUpdate(SqlOperation):
- def __init__(self, cls, attrs):
- super(SqlUpdate, self).__init__("update")
-
- self.cls = cls
- self.attrs = attrs
-
- def generate(self):
- table = self.cls.sqlmeta.table
-
- elems = list()
-
- for name in self.attrs:
- elems.append("%s = %%(%s)s" % (transform_column(name), name))
-
- elemsSql = ", ".join(elems)
-
- sql = "update %s set %s where id = %%(id)s" % (table, elemsSql)
-
- return sql
-
-class SqlExpire(SqlOperation):
- def __init__(self, cls, keep_curr_stats):
- super(SqlExpire, self).__init__("expire")
-
- self.cls = cls
- self.keep_curr_stats = keep_curr_stats
-
- def generate(self):
- table = self.cls.sqlmeta.table
-
- if table.endswith("_stats"):
- parent_table = table[0:table.find("_stats")]
- sql = """
- delete from %s
- where qmf_update_time < now() - interval '%%(threshold)s seconds'
- """ % table
- if self.keep_curr_stats:
- sql += " and id not in (select stats_curr_id from %s)" \
- % parent_table
- else:
- sql = """
- delete from %s
- where qmf_delete_time < now() - interval '%%(threshold)s seconds'
- and qmf_persistent = 'f'
- """ % table
-
- return sql
-
-class SqlProfile(object):
- def __init__(self):
- self.ops = list()
- self.commit_time = 0.0
-
- def report(self):
- times_by_key = dict()
-
- execute_time = 0.0
-
- for op in self.ops:
- if op.time is not None:
- execute_time += op.time
-
- try:
- times = times_by_key[op.key()]
-
- if op.time is not None:
- times.append(op.time)
- except KeyError:
- if op.time is not None:
- times_by_key[op.key()] = list((op.time,))
-
- fmt = "%-40s %9.2f %9.2f %6i"
- records = list()
-
- for key, values in times_by_key.items():
- count = len(values)
- ttime = sum(values) * 1000
- atime = ttime / float(count)
-
- records.append((key, ttime, atime, count))
-
- print
-
- srecords = sorted(records, key=lambda x: x[1], reverse=True)
-
- for i, rec in enumerate(srecords):
- print fmt % rec
-
- if i >= 10:
- break
-
- print
-
- srecords = sorted(records, key=lambda x: x[2], reverse=True)
-
- for i, rec in enumerate(srecords):
- print fmt % rec
-
- if i >= 10:
- break
-
- print
- print "Total statement execute time: %9.3f seconds" % execute_time
- print "Total commit time: %9.3f seconds" % self.commit_time
-
-
-class SqlAgentDisconnect(SqlOperation):
- def __init__(self, agent):
- super(SqlAgentDisconnect, self).__init__("disconnect_agent")
- self.agent = agent
-
- def generate(self):
- sql = ""
- for cls in mint.schema.entityClasses:
- sql += """
- update %s
- set qmf_delete_time = now()
- where qmf_persistent = 'f'
- and qmf_delete_time is null""" %
(dbStyle.pythonClassToDBTable(cls))
- if self.agent:
- sql += """
- and qmf_agent_id = %(qmf_agent_id)s;
- """
- else:
- sql += """;
- """
- return sql
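
For reference, the deleted helpers were combined roughly like this when writing a QMF object into the database; the psycopg2 connection and the attribute values are illustrative assumptions:

    from datetime import datetime

    import psycopg2

    from mint import schema
    from mint.sql import SqlInsert

    conn = psycopg2.connect(database="cumin")   # DSN is an assumption
    cursor = conn.cursor()

    now = datetime.now()
    attrs = {"qmfAgentId": "agent-1", "qmfObjectId": "obj-1",
             "qmfClassKey": "queue", "qmfPersistent": False,
             "qmfUpdateTime": now, "qmfCreateTime": now}

    # Builds "insert into queue (...) values (...); select currval('queue_id_seq')"
    op = SqlInsert(schema.Queue, attrs)
    op.execute(cursor, attrs)   # logs the statement and values on failure, then re-raises

    conn.commit()
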
Modified: mgmt/newdata/mint/python/mint/tools.py
===================================================================
--- mgmt/newdata/mint/python/mint/tools.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/python/mint/tools.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -242,92 +242,8 @@
def run(self, opts, args):
self.parent.app.database.check_schema()
- class AddUser(DatabaseSubcommand):
- def do_run(self, cursor, opts, args):
- try:
- name = args[1]
- except IndexError:
- raise CommandException(self, "NAME is required")
-
- try:
- password = args[2]
- except IndexError:
- password = prompt_password()
-
- crypted = crypt_password(password)
-
- pkg = self.parent.app.model.rosemary.com_redhat_cumin
-
- for role in pkg.Role.get_selection(cursor, name="user"):
- break
-
- assert role, self
-
- user = pkg.User.create_object(cursor)
- user.name = name
- user.password = crypted
-
- try:
- user.save(cursor)
- except IntegrityError:
- print "Error: a user called '%s' already exists" %
name
- sys.exit(1)
-
- mapping = pkg.UserRoleMapping.create_object(cursor)
- mapping._role_id = role._id
- mapping._user_id = user._id
- mapping.save(cursor)
-
- conn.commit()
-
- assert role, self
-
- print "User '%s' is added" % name
-
- class RemoveUser(DatabaseSubcommand):
- def do_run(self, cursor, opts, args):
- if "force" not in opts:
- msg = "Command remove-user requires --force"
- raise CommandException(self, msg)
-
- try:
- name = args[1]
- except IndexError:
- raise CommandException(self, "NAME is required")
-
- name = args[1]
-
- cls = self.app.model.rosemary.com_redhat_cumin.User
-
- for user in cls.get_selection(cursor, name=name):
- break
-
- if not user:
- raise CommandException(self, "User '%s' is unknown" %
name)
-
- user.delete(cursor)
-
- conn.commit()
-
- print "User '%s' is removed" % name
-
- class ListUsers(Command):
+ class ListRoles(Command):
def run(self, opts, args):
- subjects = Subject.select(orderBy='name')
-
- print " ID Name Roles"
- print "---- -------------------- --------------------"
-
- for subject in subjects:
- roles = ", ".join([x.name for x in list(subject.roles)])
-
- print "%4i %-20s %-20s" % (subject.id, subject.name, roles)
-
- count = subjects.count()
- print "(%i user%s found)" % (count, ess(count))
-
- class ListRoles(Command):
- def run(self, opts, args):
roles = Role.select(orderBy='name')
print " ID Name"
@@ -453,7 +369,7 @@
sleep(2)
- cls = app.model.rosemary.org_apache_qpid_broker.Broker
+ cls = app.model.org_apache_qpid_broker.Broker
conn = app.database.get_connection()
cursor = conn.cursor()
Modified: mgmt/newdata/mint/python/mint/util.py
===================================================================
--- mgmt/newdata/mint/python/mint/util.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/python/mint/util.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -25,6 +25,7 @@
super(MintDaemonThread, self).__init__()
self.app = app
+ self.name = self.__class__.__name__
self.stop_requested = False
Modified: mgmt/newdata/mint/python/mint/vacuum.py
===================================================================
--- mgmt/newdata/mint/python/mint/vacuum.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/python/mint/vacuum.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -21,7 +21,7 @@
level = conn.isolation_level
conn.set_isolation_level(0)
- for pkg in self.model.rosemary._packages:
+ for pkg in self.model._packages:
for cls in pkg._classes:
self.vacuum(conn, cls)
@@ -31,11 +31,10 @@
def vacuum(self, conn, cls):
cursor = conn.cursor()
+ sql = "vacuum verbose %s"
try:
- cursor.execute("vacuum verbose %s" % cls.sql_table.identifier)
-
- for notice in conn.notices:
- log.debug("Database: %s", notice.replace("\n", "
"))
+ cursor.execute(sql % cls.sql_table.identifier)
+ cursor.execute(sql % cls.sql_samples_table.identifier)
finally:
cursor.close()
Deleted: mgmt/newdata/mint/xml
===================================================================
--- mgmt/newdata/mint/xml 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/mint/xml 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1 +0,0 @@
-link ../rosemary/xml
\ No newline at end of file
Modified: mgmt/newdata/parsley/python/parsley/config.py
===================================================================
--- mgmt/newdata/parsley/python/parsley/config.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/parsley/python/parsley/config.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,76 +1,85 @@
import logging
-from ConfigParser import SafeConfigParser
+from ConfigParser import *
log = logging.getLogger("parsley.config")
class Config(object):
def __init__(self):
- self._params = list()
- self._params_by_name = dict()
+ self.sections = list()
- def init(self):
- for param in self._params:
- param.init()
+ def parse_files(self, paths):
+ parser = SafeConfigParser()
+ found = parser.read(paths)
- def load_file(self, file):
- conf = SafeConfigParser()
- found = conf.read(file)
+ for path in found:
+ log.info("Read config file '%s'", path)
- if found:
- log.info("Read config file '%s'" % file)
- else:
- log.info("Config file '%s' not found" % file)
+ if not found:
+ log.info("No config files found at %s", ",
".join(paths))
- params = dict()
+ sections = ConfigValues()
- if (conf.has_section("main")):
- for key, value in conf.items("main"):
- params[key] = value
+ for section in self.sections:
+ try:
+ values = section.parse(parser)
+ except NoSectionError:
+ continue
- self.load_dict(params)
+ sections[section.name] = values
- def load_dict(self, params):
- for sname, svalue in params.items():
- param = self._params_by_name.get(sname)
+ return sections
- if param:
- name = param.name.replace("-", "_")
- value = param.unmarshal(svalue)
+class ConfigSection(object):
+ def __init__(self, config, name):
+ assert isinstance(config, Config)
- if hasattr(self, name):
- setattr(self, name, value)
- else:
- log.info("Ignoring unrecognized parameter '%s'" %
sname)
+ self.config = config
+ self.config.sections.append(self)
- def prt(self):
- print "Configuration:"
+ self.name = name
- for param in self._params:
- value = getattr(self, param.name.replace("-", "_"))
+ self.parameters = list()
- if value == param.default:
- flag = " [default]"
+ def parse(self, parser):
+ values = ConfigValues()
+
+ for param in self.parameters:
+ name = param.name.replace("-", "_")
+ string = None
+
+ try:
+ string = parser.get(self.name, param.name)
+ except NoOptionError:
+ try:
+ string = parser.get("common", param.name)
+ except NoOptionError:
+ pass
+
+ if string is None:
+ value = param.default
else:
- flag = ""
+ value = param.unmarshal(string)
- print " %s = %s%s" % (param.name, value, flag)
+ values[name] = value
+ return values
+
class ConfigParameter(object):
- def __init__(self, config, name, type):
- self.config = config
+ def __init__(self, section, name, type):
+ assert isinstance(section, ConfigSection)
+
+ self.section = section
+ self.section.parameters.append(self)
+
self.name = name
self.type = type
+
self.default = None
- self.config._params.append(self)
- self.config._params_by_name[self.name] = self
-
- def init(self):
- if hasattr(self.config, self.name):
- raise Exception("Parameter '%s' already present" %
self.name)
-
- setattr(self.config, self.name.replace("-", "_"), self.default)
-
def unmarshal(self, string):
return self.type(string)
+
+class ConfigValues(dict):
+ def __getattr__(self, name):
+ return self[name]
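
The rewritten module replaces the flat parameter list with named sections. A sketch of the new API; the section name, parameter names, defaults and file path are assumptions for illustration:

    from parsley.config import Config, ConfigSection, ConfigParameter

    config = Config()

    section = ConfigSection(config, "web")

    param = ConfigParameter(section, "host", str)
    param.default = "localhost"

    param = ConfigParameter(section, "port", int)
    param.default = 45672

    # Values fall back to the [common] section, then to the declared default.
    values = config.parse_files(["/etc/cumin/cumin.conf"])

    print values.web.host, values.web.port
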
Modified: mgmt/newdata/parsley/python/parsley/threadingex.py
===================================================================
--- mgmt/newdata/parsley/python/parsley/threadingex.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/parsley/python/parsley/threadingex.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -4,65 +4,21 @@
from threading import *
def print_threads(writer=sys.stdout):
- row = "%-18s %-18s %-18s %-18s"
+ row = "%-28s %-36s %-18s %-8s %-8s %s"
- writer.write(row % ("Name", "Ident", "Alive",
"Daemon"))
+ writer.write(row % ("Class", "Name", "Ident",
"Alive", "Daemon", ""))
writer.write(os.linesep)
- writer.write("-" * 80)
+ writer.write("-" * 120)
writer.write(os.linesep)
- for thread in enumerate():
+ for thread in sorted(enumerate()):
+ cls = thread.__class__.__name__
name = thread.name
ident = thread.ident
alive = thread.is_alive()
daemon = thread.daemon
-
- writer.write(row % (name, ident, alive, daemon))
- writer.write(os.linesep)
+ extra = ""
+ #extra = thread._Thread__target
-class Lifecycle(object):
- def __init__(self):
- super(Lifecycle, self)
-
- self.log = None
-
- def init(self):
- if self.log:
- self.log.debug("Initializing %s" % self)
-
- self.do_init()
-
- if self.log:
- self.log.info("Initialized %s" % self)
-
- def do_init(self):
- pass
-
- def start(self):
- if self.log:
- self.log.debug("Starting %s" % self)
-
- self.do_start()
-
- if self.log:
- self.log.info("Started %s" % self)
-
- def do_start(self):
- pass
-
- def stop(self):
- if self.log:
- self.log.debug("Stopping %s" % self)
-
- self.do_stop()
-
- if self.log:
- self.log.info("Stopped %s" % self)
-
- #print_threads()
-
- def do_stop(self):
- pass
-
- def __str__(self):
- return self.__class__.__name__
+ writer.write(row % (cls, name, ident, alive, daemon, extra))
+ writer.write(os.linesep)
Modified: mgmt/newdata/rosemary/python/rosemary/model.py
===================================================================
--- mgmt/newdata/rosemary/python/rosemary/model.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/rosemary/python/rosemary/model.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -15,7 +15,7 @@
self.sql_logging_enabled = False
- def load_xml_dir(self, path):
+ def load_model_dir(self, path):
assert os.path.isdir(path)
extensions = os.path.join(path, "rosemary.xml")
@@ -30,12 +30,12 @@
continue
if file_path.endswith(".xml"):
- self.load_xml_file(file_path)
+ self.load_model_file(file_path)
if os.path.isfile(extensions):
self.load_extensions(extensions)
- def load_xml_file(self, path):
+ def load_model_file(self, path):
tree = ElementTree()
file = open(path, "r")
@@ -291,31 +291,48 @@
self.add_constraints()
- def get_object(self, cursor, id):
- assert id
+ def get_object(self, cursor, **criteria):
+ columns = self.sql_table._columns
+ options = SqlQueryOptions()
- obj = RosemaryObject(self, id)
+ for name in criteria:
+ # XXX need to translate ref=obj args here
+
+ column = self.sql_table._columns_by_name[name]
+ value = "%%(%s)s" % name
+ filter = SqlComparisonFilter(None, column, value)
- self.load_object(cursor, obj)
+ options.filters.append(filter)
+ self.sql_select.execute(cursor, columns, criteria, options)
+
+ record = cursor.fetchone()
+
+ if not record:
+ return
+
+ obj = RosemaryObject(self, None)
+
+ self.set_object_attributes(obj, columns, record)
+
return obj
- def get_selection(self, cursor, **kwargs):
+ def get_selection(self, cursor, **criteria):
selection = list()
columns = self.sql_table._columns
options = SqlQueryOptions()
- for name in kwargs:
+ for name in criteria:
# XXX need to translate ref=obj args here
+ column = self.sql_table._columns_by_name[name]
value = "%%(%s)s" % name
+ filter = SqlComparisonFilter(None, column, value)
- column = self.sql_table._columns_by_name[name]
- filter = SqlComparisonFilter(None, column, value)
options.filters.append(filter)
- self.sql_select.execute(cursor, columns, kwargs, options)
+ self.sql_select.execute(cursor, columns, criteria, options)
for record in cursor.fetchall():
obj = RosemaryObject(self, None)
@@ -326,6 +343,15 @@
return selection
+ def get_object_by_id(self, cursor, id):
+ assert id
+
+ obj = RosemaryObject(self, id)
+
+ self.load_object_by_id(cursor, obj)
+
+ return obj
+
def get_object_by_qmf_id(self, cursor, agent_id, object_id):
assert isinstance(obj, RosemaryObject)
@@ -351,7 +377,7 @@
return RosemaryObject(self, id)
- def load_object(self, cursor, obj):
+ def load_object_by_id(self, cursor, obj):
assert isinstance(obj, RosemaryObject)
assert obj._id, obj
@@ -616,7 +642,7 @@
# XXX prefix these with _
def load(self, cursor):
- self._class.load_object(cursor, self)
+ self._class.load_object_by_id(cursor, self)
def save(self, cursor, columns=None):
self._class.save_object(cursor, self, columns)
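
A sketch of the renamed lookup methods on a rosemary class; the model directory, attribute-style package access (mirroring app.model.org_apache_qpid_broker.Broker in the tools.py hunk above) and the queue name are assumptions:

    import psycopg2

    from rosemary.model import RosemaryModel

    model = RosemaryModel()
    model.load_model_dir("/usr/share/cumin/model")   # directory is an assumption

    conn = psycopg2.connect(database="cumin")
    cursor = conn.cursor()

    cls = model.org_apache_qpid_broker.Queue

    # get_object now takes column criteria and returns the first match or None;
    # get_object_by_id keeps the old lookup-by-primary-key behavior.
    queue = cls.get_object(cursor, name="my-queue")

    for obj in cls.get_selection(cursor, name="my-queue"):
        print obj.name
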
Modified: mgmt/newdata/rosemary/python/rosemary/sqlquery.py
===================================================================
--- mgmt/newdata/rosemary/python/rosemary/sqlquery.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/rosemary/python/rosemary/sqlquery.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -45,7 +45,8 @@
if options:
if options.group_column:
- tokens.append(self.group_by.emit(options.group_column, options.group_having))
+ tokens.append(self.group_by.emit(options.group_column,
+ options.group_having))
if options.sort_column:
tokens.append(self.order_by.emit(options.sort_column,
Modified: mgmt/newdata/wooly/python/wooly/__init__.py
===================================================================
--- mgmt/newdata/wooly/python/wooly/__init__.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/wooly/python/wooly/__init__.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -1,4 +1,3 @@
-from parsley.threadingex import Lifecycle
from cStringIO import StringIO
from urllib import quote, unquote_plus, unquote
from copy import copy
@@ -469,7 +468,7 @@
from parameters import DictParameter
-class Application(Lifecycle):
+class Application(object):
def __init__(self):
self.pages = list()
self.pages_by_name = dict()
@@ -481,7 +480,7 @@
self.devel_enabled = False
- def do_init(self):
+ def init(self):
for page in self.pages:
page.init()
page.seal()
Modified: mgmt/newdata/wooly/python/wooly/server.py
===================================================================
--- mgmt/newdata/wooly/python/wooly/server.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/wooly/python/wooly/server.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -3,8 +3,6 @@
from traceback import print_exc
from datetime import datetime, timedelta
from threading import Thread
-from time import strptime
-from parsley.threadingex import Lifecycle
from wooly import *
from util import *
@@ -12,33 +10,24 @@
log = logging.getLogger("wooly.server")
-class WebServer(Lifecycle):
+class WebServer(object):
http_date = "%a, %d %b %Y %H:%M:%S %Z"
http_date_gmt = "%a, %d %b %Y %H:%M:%S GMT"
- def __init__(self, app, addr, port):
+ def __init__(self, app, host, port):
self.log = log
self.app = app
- self.addr = addr
+ self.host = host
self.port = port
- self.server = CherryPyWSGIServer \
- ((self.addr, self.port), self.service_request)
- self.server.environ["wsgi.version"] = (1, 1)
- self.server._interrupt = True
+ self.dispatch_thread = WebServerDispatchThread(self)
self.client_sessions_by_id = dict()
self.client_session_expire_thread = ClientSessionExpireThread(self)
- def set_ssl_cert_path(self, path):
- self.server.ssl_certificate = path
-
- def set_ssl_key_path(self, path):
- self.server.ssl_private_key = path
-
- def do_init(self):
+ def init(self):
return # XXX urgh
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
@@ -46,24 +35,28 @@
try:
for i in range(60):
try:
- s.bind((self.addr, self.port))
+ s.bind((self.host, self.port))
return
except socket.error:
log.warn("Address %s:%i is taken; retrying",
- self.addr, self.port)
+ self.host, self.port)
time.sleep(5)
finally:
s.close()
- raise Exception("Failed to bind to %s:%i" % (self.addr, self.port))
+ raise Exception("Failed to bind to %s:%i" % (self.host, self.port))
- def do_start(self):
- self.server.start()
+ def start(self):
+ log.info("Starting %s", self)
+
+ self.dispatch_thread.start()
self.client_session_expire_thread.start()
- def do_stop(self):
- self.server.stop()
+ def stop(self):
+ log.info("Stopping %s", self)
+ self.dispatch_thread.stop()
+
def get_page(self, env):
name = env["PATH_INFO"][1:]
@@ -108,7 +101,7 @@
modified = page.get_last_modified(session).replace(microsecond=0)
try:
- since = datetime(*strptime(str(ims), self.http_date)[0:6])
+ since = datetime(*time.strptime(str(ims), self.http_date)[0:6])
if modified <= since:
return self.send_not_modified(response, headers)
@@ -271,24 +264,47 @@
return ()
+ def __repr__(self):
+ return "%s(%s,%i)" % (self.__class__.__name__, self.host, self.port)
+
+class WebServerDispatchThread(Thread):
+ def __init__(self, server):
+ super(WebServerDispatchThread, self).__init__()
+
+ self.server = server
+ self.name = self.__class__.__name__
+
+ self.setDaemon(True)
+
+ self.wsgi_server = CherryPyWSGIServer \
+ ((self.server.host, self.server.port), self.server.service_request)
+ self.wsgi_server.environ["wsgi.version"] = (1, 1)
+
+ def run(self):
+ self.wsgi_server.start()
+
+ def stop(self):
+ self.wsgi_server.stop()
+
class ClientSessionExpireThread(Thread):
def __init__(self, server):
super(ClientSessionExpireThread, self).__init__()
self.server = server
+ self.name = self.__class__.__name__
self.setDaemon(True)
def run(self):
while True:
self.expire_sessions()
- sleep(60)
+ time.sleep(60)
def expire_sessions(self):
when = datetime.now() - timedelta(hours=1)
count = 0
- for session in self.client_sessions_by_id.values():
+ for session in self.server.client_sessions_by_id.values():
if session.visited < when:
del self.server.client_sessions_by_id[session.id]
count += 1
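
With the Lifecycle base class gone, the server is now driven directly; a minimal sketch, where "app" is assumed to be a wooly Application with its pages already added:

    from wooly.server import WebServer

    server = WebServer(app, "localhost", 45672)   # host and port are examples

    server.init()     # currently returns early before the bind check (see above)
    server.start()    # starts the daemon dispatch and session-expiration threads

    # ... serve until shutdown ...

    server.stop()     # stops the embedded CherryPy WSGI server
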
Modified: mgmt/newdata/wooly/python/wooly/wsgiserver/__init__.py
===================================================================
--- mgmt/newdata/wooly/python/wooly/wsgiserver/__init__.py 2010-05-04 19:21:42 UTC (rev 3945)
+++ mgmt/newdata/wooly/python/wooly/wsgiserver/__init__.py 2010-05-05 15:58:15 UTC (rev 3946)
@@ -21,7 +21,7 @@
d = WSGIPathInfoDispatcher({'/': my_crazy_app, '/blog': my_blog_app})
server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 80), d)
-Want SSL support? Just set server.ssl_adapter to an SSLAdapter instance.
+Want SSL support? Just set server.ssl_adapter to an SSLAdapter instance.
This won't call the CherryPy engine (application side) at all, only the
WSGI server, which is independent from the rest of CherryPy. Don't