JBoss Enterprise Data Services 5

Data Services Developer Guide

for Developers

Edition 5.3.1

Legal Notice

Copyright © 2013 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under the GNU Lesser General Public License (LGPL) version 2.1. A copy of this license can be found at Appendix C, GNU Lesser General Public License 2.1.
This manual is based on the Teiid Developer Guide. Further details about Teiid can be found at the project's website http://www.jboss.org/teiid.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
All other trademarks are the property of their respective owners.


This guide contains information for developers creating custom solutions for their corporation with the Data Services component of the JBoss Enterprise SOA Platform.
1. Document Conventions
1.1. Typographic Conventions
1.2. Pull-quote Conventions
1.3. Notes and Warnings
2. Getting Help and Giving Feedback
2.1. Do You Need Help?
2.2. Give us Feedback
1. The Enterprise Data Services Platform
1.1. Data Integration
1.2. Enterprise Data Services
1.3. Insurance Use Case
1.4. Enterprise Data Services Overview
1.5. EDS Service
1.6. Design Tools for EDS
1.7. Administration Tools for EDS
2. Developing for Data Services
2.1. Introduction to the Data Services Connector Architecture
2.2. Provided Translators
2.3. Custom Translators
2.4. Provided Resource Adapters
2.5. Custom Resource Adapters
2.6. Other Data Services Development
3. Resource Adapter Development
3.1. Developing JCA Adapters
3.2. Define a Managed Connection Factory
3.3. Specify Configuration Properties in an ra.xml File
3.4. Define a Connection Factory
3.5. Define a Connection
3.6. XA Transactions
3.7. Packaging the Adapter
3.8. Deploying the Adapter
4. Translator Development
4.1. Extending the ExecutionFactory Class
4.1.1. ConnectionFactory
4.1.2. Connection
4.1.3. Configuration Properties
4.1.4. Initializing the Translator
4.1.5. TranslatorCapabilities
4.1.6. Execution (and sub-interfaces)
4.1.7. Metadata
4.1.8. Logging
4.1.9. Exceptions
4.1.10. Default Name
4.2. Connections to Source
4.2.1. Obtaining connections
4.2.2. Releasing Connections
4.3. Executing Commands
4.3.1. Execution Modes
4.3.2. ExecutionContext
4.3.3. ResultSetExecution
4.3.4. Update Execution
4.3.5. Procedure Execution
4.3.6. Asynchronous Executions
4.3.7. Bulk Execution
4.3.8. Command Completion
4.3.9. Command Cancellation
4.4. Command Language
4.4.1. Language
4.4.2. Language Utilities
4.4.3. Runtime Metadata
4.4.4. Language Visitors
4.4.5. Translator Capabilities
4.5. Large Objects
4.5.1. Data Types
4.5.2. Why Use Large Object Support?
4.5.3. Handling Large Objects
4.5.4. Inserting or Updating Large Objects
4.6. Delegating Translator
4.7. Packaging
4.8. Deployment
5. Extending The JDBC Translator
5.1. Capabilities Extension
5.2. SQL Translation Extension
5.3. Results Translation Extension
5.4. Adding Function Support
5.4.1. Using FunctionModifiers
5.5. Installing Extensions
6. User Defined Functions
6.1. UDF Definition
6.2. Source Supported UDF
6.3. Non-pushdown Support for User-Defined Functions
6.3.1. Java Code
6.3.2. Post Code Activities
6.4. Installing user-defined functions
6.5. User Defined Functions in Dynamic VDBs
7. AdminAPI
7.1. Connecting
7.2. Admin Methods
8. Logging
8.1. Customized Logging
8.1.1. Command Logging API
8.1.2. Audit Logging API
9. Custom Security
9.1. Login Modules
9.1.1. Built-in LoginModules
9.1.2. Custom LoginModules
9.2. Custom Authorization
10. Runtime Updates
10.1. Data Updates
10.2. Runtime Metadata Updates
10.2.1. Costing Updates
10.2.2. Schema Updates
A. ra.xml file Template
B. Advanced Topics
B.1. Security Migration From Previous Versions
C. GNU Lesser General Public License 2.1
D. Revision History


1. Document Conventions

This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information.
In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set. The Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not, alternative but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later include the Liberation Fonts set by default.

1.1. Typographic Conventions

Four typographic conventions are used to call attention to specific words and phrases. These conventions, and the circumstances they apply to, are as follows.
Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths. Also used to highlight keys and key combinations. For example:
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
The above includes a file name, a shell command and a key, all presented in mono-spaced bold and all distinguishable thanks to context.
Key combinations can be distinguished from an individual key by the plus sign that connects each part of a key combination. For example:
Press Enter to execute the command.
Press Ctrl+Alt+F2 to switch to a virtual terminal.
The first example highlights a particular key to press. The second example highlights a key combination: a set of three keys pressed simultaneously.
If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names; dialog box text; labeled buttons; check-box and radio button labels; menu titles and sub-menu titles. For example:
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, select the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).
To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a GUI interface, all presented in proportional bold and all distinguishable by context.
Mono-spaced Bold Italic or Proportional Bold Italic
Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable text. Italics denotes text you do not input literally or displayed text that changes depending on circumstance. For example:
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.
The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.
To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Note the words in bold italics above — username, domain.name, file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example:
Publican is a DocBook publishing system.

1.2. Pull-quote Conventions

Terminal output and source code listings are set off visually from the surrounding text.
Output sent to a terminal is set in mono-spaced roman and presented thus:
books        Desktop   documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1  downloads      images  notes  scripts  svgs
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[])
      throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object         ref    = iniCtx.lookup("EchoBean");
      EchoHome       home   = (EchoHome) ref;
      Echo           echo   = home.create();

      System.out.println("Created Echo");

      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}

1.3. Notes and Warnings

Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.


Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier.


Important boxes detail things that are easily missed: configuration changes that only apply to the current session, or services that need restarting before an update will apply. Ignoring a box labeled 'Important' will not cause data loss but may cause irritation and frustration.


Warnings should not be ignored. Ignoring warnings will most likely cause data loss.

2. Getting Help and Giving Feedback

2.1. Do You Need Help?

If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal at http://access.redhat.com. Through the customer portal, you can:
  • search or browse through a knowledgebase of technical support articles about Red Hat products.
  • submit a support case to Red Hat Global Support Services (GSS).
  • access other product documentation.
Red Hat also hosts a large number of electronic mailing lists for discussion of Red Hat software and technology. You can find a list of publicly available mailing lists at https://www.redhat.com/mailman/listinfo. Click on the name of any mailing list to subscribe to that list or to access the list archives.

2.2. Give us Feedback

If you find a typographical error, or know how this guide can be improved, we would love to hear from you. Submit a report in Bugzilla against the product JBoss Enterprise SOA Platform and the component doc-Data_Services_Developer_Guide. The following link will take you to a pre-filled bug report for this product: http://bugzilla.redhat.com/.
Fill out the following template in Bugzilla's Description field. Be as specific as possible when describing the issue; this will help ensure that we can fix it quickly.
Document URL:

Section Number and Name:

Describe the issue:

Suggestions for improvement:

Additional information:

Be sure to give us your name so that you can receive full credit for reporting the issue.

Chapter 1. The Enterprise Data Services Platform

1.1. Data Integration

Businesses increasingly need to access data residing in multiple disparate data sources. Therefore, we need to consider ways of making this information readily available for them to use.
Data integration provides a unified virtualized view of information combined from multiple disparate sources. This enables users and applications to query and manage the integrated data as if it were located in a single database via a single uniform API.
Instead of copying or moving data, a virtual database (VDB) is used to map physical data sources to integrated views. At runtime, queries submitted against these views are coordinated among the dependent physical data sources, according to query criteria and the mappings defined by the VDB. This approach minimizes information flow and avoids inconsistencies from duplication of data.
Data integration hides details about the physical data sources, such as location, structure, API, access language, and storage technology. This allows for more effort to be spent on data analysis and manipulation rather than on technical issues regarding the physical separation of the data.

1.2. Enterprise Data Services

JBoss Enterprise Data Services (EDS) is a data integration solution that runs as a service on the JBoss Enterprise Service Oriented Architecture Platform (SOA-P).
EDS can be used to integrate data from any sources, including relational databases, text files, web services, and ERP/CRM mainframe systems.
Red Hat provides various tools to help with the design, deployment and ongoing management of an EDS instance.

1.3. Insurance Use Case

The Situation
The CEO of Acme Home Insurance has decided that in order to operate more effectively, it is time to improve the company's in-house data analysis. The company requires a more comprehensive and accurate view of data relating to its customers and associated factors contributing to the company's risk management and overall business strategy. Since the quality of data analysis depends on the quality of data integration, the company is first reviewing its data integration solution.
Customer information is stored in a Tenacicle database on the company network. However, it also needs to draw upon an assortment of data from other sources including, for example:
  • Occurrences of fire and flood across the nation, provided by the Federal Department of Emergency Services and stored in an online IntegriSQL database.
  • Average building costs in metropolitan and regional areas for each state, provided by several state Building Associations, some of which are stored in online databases and others stored as downloadable text files. Because these sources are maintained independently, the tables have no standard column names and metrics. For example, some tables refer to Cost where others refer to Price and some costs are given per meter (per m) where others (for the same materials) are given per millimeter (per mm).
The Solution
Using the Teiid Designer tool, developed to work with JBoss Enterprise Data Services (EDS), the information technology team at Acme Home Insurance created a virtual database (VDB) to integrate the data:
  1. They created source models for each of the required data sources by directly importing metadata from each source. The flexibility of the connector framework enabled seamless integration of the different data source types (for example, Tenacicle, IntegriSQL and plain text files).
  2. The team reconciled semantic differences regarding the meaning, interpretation and intended use of data across the source models; for example, all of the integrated tables present the cost of materials as Cost (per mm).
  3. They then created a series of customized views to present the integrated data in formats desired by the analysts.
The Result
The analysts could access all of the data they required with a single API, allowing them to focus their efforts on applying advanced analytical techniques, without concern for the physical whereabouts, or technical or semantic differences between the multiple data sources.

1.4. Enterprise Data Services Overview

A complete Enterprise Data Services (EDS) solution consists of the following:
EDS Service
The EDS Service is positioned between business applications and one or more data sources. It coordinates integration of these data sources so they can be accessed by the business applications at runtime.
Design Tools
Various design tools are available to assist users in setting up an EDS Service for a particular data integration solution.
Administration Tools
Various management tools are available for administrators to configure and monitor a deployed EDS Service.
Enterprise Data Services Overview

Figure 1.1. Enterprise Data Services Overview

1.5. EDS Service

The EDS Service is positioned between business applications and one or more data sources, and coordinates the integration of those data sources for access by the business applications at runtime.
An EDS Service manages the following components:
Virtual Database
A virtual database (VDB) provides a unified view of data residing in multiple physical repositories. A VDB is composed of various data models and configuration information that describes which data sources are to be integrated and how. In particular, source models are used to represent the structure and characteristics of the physical data sources, and view models represent the structure and characteristics of the integrated data exposed to applications.
Access Layer
The access layer is the interface through which applications submit queries (relational, XML, XQuery and procedural) to the VDB via JDBC, ODBC or Web services.
Query Engine
When applications submit queries to a VDB via the access layer, the query engine produces an optimized query plan to provide efficient access to the required physical data sources as determined by the SQL criteria and the mappings between source and view models in the VDB. This query plan dictates processing order to ensure physical data sources are accessed in the most efficient manner.
Connector Framework
Translators and resource adapters are used to provide transparent connectivity between the query engine and the physical data sources. A translator is used to convert queries into source-specific commands, and a resource adapter provides communication with the source.

1.6. Design Tools for EDS

The following design tools are available to assist users in setting up an EDS Service for their desired data integration solution:
Teiid Designer
Teiid Designer is a plug-in for JBoss Developer Studio, providing a graphical user interface to design and test virtual databases (VDBs).
Connector Development Kit
The Connector Development Kit is a Java API that allows users to customize the connector framework (translators and resource adapters) for specific integration needs.
ModeShape Tools
ModeShape Tools is a set of plug-ins for JBoss Developer Studio, providing a graphical user interface to publish and manage Enterprise Data Services (EDS) artifacts (such as VDBs and accompanying models) in the ModeShape Metadata Repository.

1.7. Administration Tools for EDS

The following administration tools are available for administrators to configure and monitor a deployed EDS Service.
AdminShell
AdminShell provides a script-based programming environment enabling users to access, monitor and control an EDS Service.
Admin Console
The JBoss Enterprise Application Platform (EAP) Admin Console is a web-based tool allowing system administrators to monitor and configure services deployed within a running EAP instance, including Enterprise Data Services (EDS).
JBoss Operations Network
JBoss Operations Network (JBoss ON) provides a single interface to deploy, manage, and monitor an entire deployment of JBoss Enterprise Middleware applications and services, including EDS.
Admin API
EDS includes a Java API ( org.teiid.adminapi ) that enables developers to connect to and configure an EDS Service at runtime from within other applications.

Chapter 2. Developing for Data Services

JBoss Enterprise Data Services provides several translators and resource adapters to enable communication with various datasources.
If none of the included translators and resource adapters meet your needs, you can extend them or create your own. One of the most common examples of custom translator development is the extension of the JDBC translator for new JDBC drivers and database versions.

2.1. Introduction to the Data Services Connector Architecture

The process of integrating data from an enterprise information system into Data Services requires one or two components:
  1. a translator (mandatory) and
  2. a resource adapter (optional), also known as a connector. Most of the time, this will be a Java EE Connector Architecture (JCA) Adapter.
A translator is used to:
  • translate Data Services commands into commands understood by the datasource for which the translator is being used,
  • execute those commands,
  • return batches of results from the datasource, translated into the formats that Data Services is expecting.
A resource adapter (or connector):
  • handles all communications with individual enterprise information systems, (which can include databases, data feeds, flat files and so forth),
  • can be a JCA Adapter or any other custom connection provider (the JCA specification ensures the writing, packaging and configuration are undertaken in a consistent manner),


    Many software vendors provide JCA Adapters to access different systems. Red Hat recommends using vendor-supplied JCA Adapters when using JMS with JCA.
  • removes concerns such as connection information, resource pooling, and authentication for translators.
With a suitable translator (and optional resource adapter), any datasource or Enterprise Information System can be integrated with Enterprise Data Services.
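As a conceptual sketch only, the division of labor described above can be expressed with simplified stand-in interfaces. The Translator, Connector, and ConnectorArchitectureSketch types below are illustrative assumptions, not the actual Data Services API (which is covered in the chapters that follow):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative stand-ins only -- the real API is org.teiid.translator.* plus the JCA SPI.
interface Connector {
    // Handles all communication with the enterprise information system.
    List<String> send(String nativeCommand);
}

interface Translator {
    // Translates an engine command into a source-specific command, executes it
    // via the connector, and returns the results in the form the engine expects.
    List<String> execute(String engineCommand, Connector connector);
}

public class ConnectorArchitectureSketch {
    public static void main(String[] args) {
        // A trivial "source" that answers any command with a fixed batch of rows.
        Connector source = cmd -> Arrays.asList("row1", "row2");
        // A trivial translator that prefixes commands with a source dialect marker.
        Translator translator = (cmd, conn) -> conn.send("SOURCE:" + cmd);
        System.out.println(translator.execute("SELECT * FROM t", source)); // prints [row1, row2]
    }
}
```

The point of the split is visible even in this toy: the translator owns command translation and result shaping, while the connector owns the wire-level conversation with the source.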

2.2. Provided Translators

Data Services provides the following translators:
JDBC Translator
This works with many relational databases.
You can find a list of the systems it supports at http://www.jboss.com/products/platforms/soa/supportedconfigurations/
File Translator
This provides a procedural way to access the file system in order to handle text files.
WS Translator
This provides procedural access to XML content by using web services.
LDAP Translator
This provides access to LDAP directory services.
Salesforce Translator
This works with Salesforce® interfaces.


More information about these translators can be found in the JBoss Enterprise Data Services Reference Guide.

2.3. Custom Translators

To create a new custom translator:
  1. Create a new (or reuse an existing) resource adapter for the datasource, to be used with this translator.
  2. Implement the required classes defined by the translator API.
    • Create an ExecutionFactory – extend the org.teiid.translator.ExecutionFactory class
    • Create relevant Executions (and sub-interfaces) – specifies how to execute each type of command
  3. Define the template for exposing configuration properties. Refer to Section 4.7, “Packaging”.
  4. Deploy your translator. Refer to Section 4.8, “Deployment”.
  5. Deploy a virtual database (VDB) that uses your translator.
  6. Execute queries using Enterprise Data Services.
For sample translator code, refer to the teiid-VERSION/connectors directory of the Data Services 5.3.x Source Code ZIP file which can be downloaded from the Red Hat Customer Portal at https://access.redhat.com.
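The shape of step 2 can be sketched as follows. Note that the ExecutionFactory and ResultSetExecution types below are simplified local stand-ins that merely mirror the names of the real org.teiid.translator API, so that the example stays self-contained:

```java
import java.util.Iterator;
import java.util.List;

// Stand-in for org.teiid.translator.ExecutionFactory (simplified for illustration).
abstract class ExecutionFactory {
    public abstract ResultSetExecution createResultSetExecution(String command);
}

// Stand-in for org.teiid.translator.ResultSetExecution (simplified for illustration).
interface ResultSetExecution {
    void execute();
    List<?> next();   // one row per call, null when finished
}

public class MyExecutionFactory extends ExecutionFactory {
    @Override
    public ResultSetExecution createResultSetExecution(String command) {
        return new ResultSetExecution() {
            private Iterator<List<?>> rows;
            @Override public void execute() {
                // A real translator would run the translated command against the
                // source via the resource adapter's connection; here we fake rows.
                rows = List.<List<?>>of(List.of("a", 1), List.of("b", 2)).iterator();
            }
            @Override public List<?> next() {
                return rows.hasNext() ? rows.next() : null;
            }
        };
    }
}
```

The real API adds connection handling, capabilities, and metadata (see Chapter 4), but the factory-creates-execution, execution-yields-batches pattern is the same.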

2.4. Provided Resource Adapters

Every translator that needs to gather data from external source systems requires a resource adapter (or connector).
The following resource adapters are provided by Enterprise Data Services.
Datasource Connector
This is provided by the JBoss Enterprise Application Platform server, used by the JDBC Translator.
File Connector
Provides a JCA Adapter to access a defined directory on the file system, used by the File Translator.
WS Connector
Provides a JCA Adapter to invoke Web Services using the JBoss Web Services stack, used by the WS Translator.
LDAP Connector
Provides a JCA Adapter to access LDAP, used by the LDAP Translator.
Salesforce Connector
Provides a JCA Adapter to access Salesforce by invoking their Web Service interface, used by the Salesforce Translator.
If these resource adapters are not suitable for your system then you can develop a custom one.

2.5. Custom Resource Adapters

To create a new custom resource adapter:
  1. Extend the following classes:
    • BasicConnectionFactory – defines the Connection Factory
    • BasicConnection – represents a connection to the source
    • BasicResourceAdapter – specifies the resource adapter class
  2. Package your resource adapter. Refer to Section 3.7, “Packaging the Adapter”.
  3. Deploy your resource adapter. Refer to Section 3.8, “Deploying the Adapter”.
For sample resource adapter code, refer to the teiid-VERSION/connectors directory of the Data Services 5.3.x Source Code ZIP file which can be downloaded from the Red Hat Customer Portal at https://access.redhat.com.


Base classes for all of the required supporting JCA SPI (Service Provider Interface) classes are provided by the Data Services API. The JCA CCI (Common Client Interface) support is not provided because Data Services uses the translator API as its common client interface.

2.6. Other Data Services Development

Data Services is highly extensible in other ways:
  • You can define user-defined functions. Refer to Chapter 6, User Defined Functions.
  • You can connect to and configure an EDS Service at runtime from within other applications. Refer to Chapter 7, AdminAPI.
  • You can customize command and audit logging. Refer to Chapter 8, Logging.
  • You can develop custom login modules and authorization. Refer to Chapter 9, Custom Security.
  • You can apply data and metadata updates at runtime. Refer to Chapter 10, Runtime Updates.

Chapter 3. Resource Adapter Development

3.1. Developing JCA Adapters

A framework is provided by the Enterprise Data Services API for developers to create custom JCA Adapters. If you already have a JCA Adapter or some other mechanism to get data from your source system, there is no need to develop your own.
If you are not familiar with the JCA API, please read the JCA 1.5 Specification at http://docs.oracle.com/cd/E15523_01/integration.1111/e10231/intro.htm.
The process for developing an Enterprise Data Services JCA Adapter is as follows:
  • Define a Managed Connection Factory by extending the BasicManagedConnectionFactory class
  • Specify configuration properties in an ra.xml file
  • Define a Connection Factory by extending the BasicConnectionFactory class
  • Define a Connection by extending the BasicConnection class


The examples contained in this book are simplified and do not include support for transactions or security which would add significant complexity.


The Enterprise Data Services connector framework does not make use of JCA's CCI framework, only JCA's SPI interfaces.

3.2. Define a Managed Connection Factory

  • Extend the org.teiid.resource.spi.BasicManagedConnectionFactory class, providing an implementation for the createConnectionFactory() method. This method will create and return a Connection Factory object.
  • Define an attribute for each configuration variable, and then provide both "getter" and "setter" methods for them. This class will define various configuration variables (such as user, password, and URL) used to connect to the datasource.
See the following code for an example.
public class MyManagedConnectionFactory extends BasicManagedConnectionFactory
{
   public Object createConnectionFactory() throws ResourceException
   {
      return new MyConnectionFactory();
   }

   // config property name (metadata for these are defined inside the ra.xml)
   String userName;
   public String getUserName()          {  return this.userName;  }
   public void setUserName(String name) {  this.userName = name;  }

   // config property count (metadata for these are defined inside the ra.xml)
   Integer count;
   public Integer getCount()            {  return this.count;  }
   public void setCount(Integer value)  {  this.count = value;  }
}



Use only java.lang objects as the attributes. DO NOT use Java primitives for defining and accessing the properties.


You can navigate to examples of existing connectors from within the teiid-VERSION/connectors directory of the Data Services 5.3.x Source Code ZIP file which can be downloaded from the Red Hat Customer Portal at https://access.redhat.com.

3.3. Specify Configuration Properties in an ra.xml File

Every configuration property defined inside the new Managed Connection Factory class must also be configured in the ra.xml file. These properties are used to configure each instance of the connector.
The ra.xml file is located in the META-INF directory of the relevant connector's RAR file under SOA_ROOT/jboss-as/server/PROFILE/deploy/teiid/connectors/. An example file is provided in Appendix A, ra.xml file Template.
The following is the format for a single entry:
      {$display:"display-name",$description:"description", $allowed:"allowed", 
      $required:"true|false", $defaultValue:"default-value"}
For example:
      {$display:"User Name",$description:"The name of the user.", $required:"true"}
The format and contents of the <description> element may be used as extended metadata for tooling. This use of the special format and all properties is optional and must follow these rules:
  • The special format must begin and end with curly braces e.g. { }.
  • Property names begin with $.
  • Property names and the associated value are separated with a colon (:).
  • Double quotes (") identify a single value.
  • A pair of square brackets ([ ]) containing comma-separated, double-quoted entries indicates a list value.
The following are optional properties:
  • $display: Display name of the property
  • $description: Description of the property
  • $required: Whether the property is required, or optional with a default value supplied
  • $allowed: If the property value must come from a fixed set of legal values, this defines all the allowed values
  • $masked: Whether tools should mask the property and not show it in plain text; used for passwords
  • $advanced: Marks this as an advanced property
  • $readOnly: Whether the property is read-only or can be modified


Although these are optional properties, in the absence of this metadata, Data Services tooling may not work as expected.
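To make the format rules above concrete, the following hypothetical helper parses a description string of this shape. The parser is purely for illustration; it is not part of the Data Services API or its tooling:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper that parses the extended-metadata format described above:
// {$name:"value", $other:["a","b"]} -- illustration only, not part of EDS.
public class ExtendedMetadataParser {
    // Property names begin with $; name and value are separated by a colon;
    // a value is either one double-quoted string or a bracketed list.
    private static final Pattern PROPERTY =
        Pattern.compile("\\$(\\w+)\\s*:\\s*(\"[^\"]*\"|\\[[^\\]]*\\])");

    public static Map<String, String> parse(String description) {
        Map<String, String> props = new LinkedHashMap<>();
        // The special format must begin and end with curly braces.
        if (!description.startsWith("{") || !description.endsWith("}")) {
            return props;
        }
        Matcher m = PROPERTY.matcher(description);
        while (m.find()) {
            props.put(m.group(1), m.group(2));
        }
        return props;
    }

    public static void main(String[] args) {
        System.out.println(parse(
            "{$display:\"User Name\",$description:\"The name of the user.\", $required:\"true\"}"));
    }
}
```

Running the example yields a map keyed by display, description, and required, matching the entry shown earlier in this section.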

3.4. Define a Connection Factory

Extend the org.teiid.resource.spi.BasicConnectionFactory class, and provide an implementation for the getConnection() method. This method will create and return a Connection object.
public class MyConnectionFactory extends BasicConnectionFactory
{
   public MyConnection getConnection() throws ResourceException
   {
      return new MyConnection();
   }
}
Since the Managed Connection Factory creates the Connection Factory, the Connection Factory has access to all of the configuration parameters, so the getConnection() method can pass credentials to the requesting application. The Connection Factory class can also reference the calling user's javax.security.auth.Subject from within the getConnection() method.
Subject subject = ConnectionContext.getSubject();
A Subject object gives access to the logged-in user's credentials and any roles that are defined. Note that the Subject may be null.
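As a minimal sketch using only the standard java.security and javax.security.auth APIs, the Subject's principals can be inspected to recover the caller's name. The SubjectLookup class and its fallback parameter are illustrative assumptions, not part of the Data Services API:

```java
import java.security.Principal;
import javax.security.auth.Subject;

public class SubjectLookup {
    // Returns the first principal name on the subject, or a fallback when no
    // subject (or no principal) is available -- remember the Subject may be null.
    public static String callerName(Subject subject, String fallback) {
        if (subject != null) {
            for (Principal p : subject.getPrincipals()) {
                return p.getName();
            }
        }
        return fallback;
    }

    public static void main(String[] args) {
        Subject subject = new Subject();
        subject.getPrincipals().add(() -> "john");   // Principal is a functional interface
        System.out.println(callerName(subject, "anonymous")); // prints john
        System.out.println(callerName(null, "anonymous"));    // prints anonymous
    }
}
```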


You can define a security-domain for this resource adapter that is separate from the default Data Services security-domain for validating the JDBC user. It is the user's responsibility to perform the necessary logins before the application server's thread accesses this resource adapter. This can become very complex for the end user.

3.5. Define a Connection

Extend the org.teiid.resource.spi.BasicConnection class, and provide an implementation based on how your translator accesses the Connection object. If your connection is stateful, then override the isAlive() and cleanUp() methods and provide suitable implementations. These are called by the application server to check whether a connection is stale and needs to be flushed from the connection pool.
public class MyConnection extends BasicConnection
{
   public void doSomeOperation(String command)
   {
      // Do some operation against the source. This is the method you call
      // from the Translator; implement whatever your source requires here.
   }
   public boolean isAlive()
   {
      return true;
   }
   public void cleanUp()
   {
   }
}

3.6. XA Transactions

If the requesting application can participate in XA transactions, then your Connection object must override the getXAResource() method and provide the XAResource object for the application. Refer to Section 3.5, “Define a Connection”. To participate in crash recovery you must also extend the BasicResourceAdapter class and implement the public XAResource[] getXAResources(ActivationSpec[] specs) method.
Data Services can make XA-capable resource adapters participate in distributed transactions. If they are not XA-capable, then they can only participate in distributed queries. Transaction semantics are determined by whether the data source file (SOA_ROOT/jboss-as/server/PROFILE/deploy/DATASOURCE-ds.xml) is defined with local-tx or no-tx.

3.7. Packaging the Adapter

When development is complete, the resource adapter files are packaged into an artifact called a Resource Adapter Archive or RAR file. The file format is defined by the JCA specification and should not be confused with the RAR file compression format.
The method of creating a RAR artifact will depend on your build system:
JBoss Developer Studio
If you create a Java Connector project in JBoss Developer Studio, it will include a build target that produces a RAR file.
Apache Ant
When using Apache Ant, you can use the standard rar build task.
Apache Maven
When using Apache Maven, set the value of the <packaging> element to rar. Since Data Services itself builds with Maven, you can refer to any of its Connector projects' pom.xml files; for example:
   <packaging>rar</packaging>
   <name>Name Connector</name>
   <description>This connector is a sample</description>


Make sure that the RAR file contains the ra.xml file under its META-INF directory. If you are using Apache Maven, refer to http://maven.apache.org/plugins/maven-rar-plugin/. In the root of the RAR file, you can embed the JAR file containing your connector code along with any dependent library JAR files.

3.8. Deploying the Adapter

Once the RAR file is built, deploy it by copying the RAR file into the server's PROFILE/deploy directory. The server will not need to be restarted when a new RAR file is added. You can also use the web-based Administration Console to deploy the RAR file.
Once the adapter's RAR file has been deployed you can create an instance of this connector to use with your Translator. Creating an instance of this adapter is the same as creating a Connection Factory. There are two ways you can do this:
  1. Create the name-ds.xml file, and copy it into the server/PROFILE/deploy/ directory of your server.
    <!DOCTYPE connection-factories PUBLIC
       "-//JBoss//DTD JBOSS JCA Config 1.5//EN"
       "http://www.jboss.org/j2ee/dtd/jboss-ds_1_5.dtd">
    <connection-factories>
       <no-tx-connection-factory>
          ...
          <!-- Define each property from "ra.xml" that is required or needs to be
               modified from its default; each property is defined in a single element: -->
          <config-property name="prop-name" type="java.lang.String">prop-value</config-property>
          ...
       </no-tx-connection-factory>
    </connection-factories>
    There are more properties that you can define in this file; for example, for pooling, transactions, and security. Refer to the JBoss Enterprise Application Platform documentation for all the available properties, http://docs.redhat.com/docs/en-US/JBoss_Enterprise_Application_Platform/5/.
  2. You can use the web-based Administration Console to create a new ConnectionFactory.

Chapter 4. Translator Development

4.1. Extending the ExecutionFactory Class

The Connector Manager is a component that controls access to your translator. This chapter reviews the basics of how the Connector Manager interacts with your translator while leaving reference details and advanced topics to be covered in later chapters.
A custom translator must extend the org.teiid.translator.ExecutionFactory class to connect to and query an enterprise data source. This extended class must provide a no-argument constructor so that it can be instantiated via reflection. The Execution Factory must override or implement the following elements.

4.1.1. ConnectionFactory

Defines the ConnectionFactory interface that is expected from the resource adapter. This is specified as part of the class definition, using generics, when extending the ExecutionFactory class.

4.1.2. Connection

Defines the Connection interface that is expected from the resource adapter. This is specified as part of the class definition, using generics, when extending the ExecutionFactory class.

4.1.3. Configuration Properties

If this translator needs configurable properties then:
  1. define a variable for every property as an attribute in the extended ExecutionFactory class,
  2. define "get" and "set" methods for each attribute,
  3. and annotate each "get" method with @TranslatorProperty annotation and provide the metadata about the property.
For example, suppose you need a property called foo. By providing the @TranslatorProperty annotation on its accessor, Data Services will automatically interrogate the property and provide a graphical way to configure your Translator.
private String foo = "blah";

@TranslatorProperty(display="Foo property", description="description about Foo")
public String getFoo() {
   return foo;
}

public void setFoo(String value) {
   this.foo = value;
}
Only Java primitive (int), primitive object wrapper (java.lang.Integer), or Enum types are supported as Translator properties. The default value will be derived from calling the getter, if available, on a newly constructed instance. All properties should have a default value. If there is no applicable default, then the property should be marked in the annotation as required. Initialization will fail if a required property value is not provided.
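The "default derived from calling the getter" behavior can be sketched with plain reflection. This is not Data Services code: MyExecutionFactoryStub and DefaultValueProbe are hypothetical names that mimic how a container could derive a property default by constructing the class and invoking the annotated getter.

```java
import java.lang.reflect.Method;

// Hypothetical stand-in for an ExecutionFactory subclass with one property.
class MyExecutionFactoryStub {
    private String foo = "blah"; // the field initializer supplies the default

    public String getFoo() {
        return foo;
    }
}

class DefaultValueProbe {
    // Construct the factory reflectively and call the getter to read the default.
    static Object defaultFor(Class<?> factoryClass, String getterName) {
        try {
            Object instance = factoryClass.getDeclaredConstructor().newInstance();
            Method getter = factoryClass.getMethod(getterName);
            return getter.invoke(instance);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot derive default for " + getterName, e);
        }
    }
}
```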
The @TranslatorProperty defines the following metadata that you can define about your property.
  • display: Display name of the property
  • description: Description about the property
  • required: The property is a required property
  • advanced: This is an advanced property; a default should be provided. A property cannot be both "advanced" and "required" at the same time.
  • masked: The tools should mask the property and not display it in plain text; used for passwords.

4.1.4. Initializing the Translator

Override and implement the start() method if your translator needs to do any initialization before it is used by the Data Services engine. This method must call super.start(). This method is called by Data Services once all the configuration properties are injected into the class.

4.1.5. TranslatorCapabilities

These are various methods on the ExecutionFactory class whose names typically begin with supports. These methods need to be overridden to describe the execution capabilities of the Translator. Refer to Section 4.4.5, “Translator Capabilities” for more on these methods.

4.1.6. Execution (and sub-interfaces)

Based on the types of executions you are supporting, the following methods need to be overridden with your own implementations that extend the respective interfaces.
  • createResultSetExecution - Define this if you are performing read-based operations that return rows of results.
  • createUpdateExecution - Define this if you are performing write-based operations.
  • createProcedureExecution - Define this if you are performing procedure-based operations.
You can choose to implement all of the execution modes or only those you need. See more details on this below.

4.1.7. Metadata

Override and implement the method getMetadata(), if you want to expose the metadata about the source for use in Dynamic VDBs. This defines the tables, column names, procedures, parameters, etc. for use in the query engine.


This method is not yet used by the Teiid Designer tool in JBoss Developer Studio 4. This means that the Teiid Designer cannot be used to import metadata for a custom translator. Currently Teiid Designer has import facilities of its own for all the popular data sources that Data Services supports. Support for getMetadata() is planned in a future version of JBoss Developer Studio.

4.1.8. Logging

Enterprise Data Services provides the org.teiid.logging.LogManager class for logging purposes, based on the Apache Log4j logging services.
Logging messages will be sent automatically to the main Enterprise Data Services logs. You can edit the SOA_ROOT/jboss-as/server/PROFILE/conf/jboss-log4j.xml file to add the custom logging.

4.1.9. Exceptions

If you need to propagate an exception, use the org.teiid.translator.TranslatorException class.

4.1.10. Default Name

You can define a default instance of your Translator by placing the @Translator annotation on the ExecutionFactory. After deployment, a default instance of this Translator can be used by any VDB by referencing it by this name in its vdb.xml configuration file.
A VDB can also override the default properties and define another instance of this Translator. The name you give here is the short name used everywhere else in the Data Services configuration to refer to this translator.
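For illustration, assuming a translator whose default name is mytranslator (the VDB, model, source, and JNDI names below are hypothetical placeholders), a Dynamic VDB would reference it in vdb.xml like this:

```xml
<vdb name="myVDB" version="1">
   <model name="MyModel">
      <!-- translator-name refers to the @Translator default name, or to an
           overridden instance of it declared elsewhere in this vdb.xml -->
      <source name="MySource" translator-name="mytranslator"
              connection-jndi-name="java:MyDS"/>
   </model>
</vdb>
```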

4.2. Connections to Source

4.2.1. Obtaining Connections

The extended ExecutionFactory must implement the getConnection() method to allow the Connector Manager to obtain a connection.

4.2.2. Releasing Connections

Connections are only used for the lifetime of the request. When the request completes, the closeConnection() method is called on the ExecutionFactory. You must override this method to close the connection properly.
If the resource adapter is JEE JCA Connector based, connection pooling is automatically provided. If your resource adapter does not implement the JEE JCA, then connection pooling semantics are left to the user to define on their own.
Red Hat recommends the use of connection pooling when a connection is stateful or when connections are expensive to create.

4.3. Executing Commands

4.3.1. Execution Modes

The Data Services query engine uses the ExecutionFactory class to obtain the Execution interface for the command it is executing. The query is sent to the translator as a set of objects. Refer to Section 4.4, “Command Language” for more details.
Translators are allowed to support any subset of the available execution modes.

Table 4.1. Types of Execution Modes

Execution Interface Command interface(s) Description
ResultSetExecution QueryExpression A query corresponding to a SQL SELECT or set query statement.
UpdateExecution Insert, Update, Delete, BatchedUpdates An insert, update, or delete, corresponding to a SQL INSERT, UPDATE, or DELETE command
ProcedureExecution Call A procedure execution that may return a result set and/or output values.

All of the execution interfaces extend the base Execution interface that defines how executions are cancelled and closed. ProcedureExecution also extends ResultSetExecution, since procedures may also return resultsets.

4.3.2. ExecutionContext

The org.teiid.translator.ExecutionContext class provides information related to the current execution. An instance of ExecutionContext is available for each Execution. Various 'get' methods are provided; for example, ExecutionContext.getRequestIdentifier() and ExecutionContext.getSession() are provided for logging purposes. Specific usage is highlighted in this guide where applicable.

Source Hints

The Data Services source meta-hint is used to provide hints directly to source executions via user or transformation queries. See the reference for more on source hints. If specified and applicable, the general and source specific hint will be supplied via the ExecutionContext methods getGeneralHint and getSourceHint. See the source for the OracleExecutionFactory for an example of how this source hint information can be utilized.

4.3.3. ResultSetExecution

Most commands executed against translators are QueryExpressions. While the command is being executed, the translator provides results via the ResultSetExecution.next() method. This method returns null to indicate the end of results. Note: the expected batch size can be obtained using the ExecutionContext.getBatchSize() method and used as a hint when fetching results from the EIS.
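The next()-until-null contract can be sketched with a self-contained stand-in; RowSource below is a hypothetical simplification of the row-retrieval part of org.teiid.translator.ResultSetExecution.

```java
import java.util.Iterator;
import java.util.List;

// Hypothetical stand-in for the ResultSetExecution row-retrieval contract:
// next() returns one row (a list of column values) per call, then null
// to signal the end of results.
class RowSource {
    private final Iterator<List<?>> rows;

    RowSource(List<List<?>> data) {
        this.rows = data.iterator();
    }

    public List<?> next() {
        return rows.hasNext() ? rows.next() : null;
    }
}
```

A real implementation would typically fetch rows from the EIS in chunks of ExecutionContext.getBatchSize() rather than materializing them up front.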

4.3.4. Update Execution

Each execution returns the update count(s) expected by the update command. If possible, BatchedUpdates should be executed atomically. The ExecutionContext.isTransactional() method can be used to determine if the execution is already under a transaction.

4.3.5. Procedure Execution

Procedure commands correspond to the execution of a stored procedure or some other functional construct. A procedure takes zero or more input values and can return a result set and zero or more output values.  Examples of procedure execution would be a stored procedure in a relational database or a call to a web service.
If a result set is expected when a procedure is executed, all rows from it will be retrieved via the ResultSetExecution interface first. Then, if any output values are expected, they will be retrieved using the getOutputParameterValues() method.

4.3.6. Asynchronous Executions

In some scenarios, a translator needs to execute asynchronously and allow the executing thread to perform other work. To allow this, you should throw a DataNotAvailableException during a retrieval method, rather than explicitly waiting or sleeping for the results.


The DataNotAvailableException should not be thrown by the execute method. A non-negative value may be included as a parameter to the constructor, indicating how long the system should wait before polling for results.
The DataNotAvailableException.NO_POLLING exception (or any DataNotAvailableException with a negative delay) can be thrown so that processing will resume (via ExecutionContext.dataAvailable()).
Since the execution (and the associated connection) is not closed until the work has completed, care should be taken if using asynchronous executions that hold a lot of state.

4.3.7. Bulk Execution

Non-batched Insert, Update, and Delete commands may have Literal values marked as multiValued if the capabilities show support for BulkUpdate. Commands with multiValued Literals represent multiple executions of the same command with different values. As with BatchedUpdates, bulk operations should be executed atomically if possible.

4.3.8. Command Completion

All normal command executions end with the calling of close() on the Execution object.  Your implementation of this method should do the appropriate clean-up work for all state created in the Execution object.

4.3.9. Command Cancellation

Commands submitted to Data Services may be aborted in several scenarios:
  • Client cancellation via the JDBC API (or other client APIs)
  • Administrative cancellation
  • Clean-up during session termination
  • Clean-up if a query fails during processing
Unlike the other execution methods, which are handled in a single-threaded manner, calls to cancel happen asynchronously with respect to the execution thread.
Your connector implementation may choose to do nothing in response to this cancellation message. In this instance, Data Services will call close() on the execution object after current processing has completed. Implementing the cancel() method allows for faster termination of queries being processed and may allow the underlying data source to terminate its operations faster as well.

4.4. Command Language

4.4.1. Language

Data Services sends commands to your Translator in object form. These classes are all defined in the org.teiid.language package. These objects can be combined to represent any possible command that Data Services may send to the Translator. However, it is possible to notify Data Services that your Translator can only accept certain kinds of constructs via the capabilities defined on the ExecutionFactory class. Refer to Section 4.4.5, “Translator Capabilities” for more information.
The language objects all extend from the LanguageObject interface. Language objects should be thought of as a tree where each node is a language object that has zero or more child language objects of types that are dependent on the current node.
All commands sent to your Translator are in the form of these language trees, where the root of the tree is a subclass of Command. Command has several sub-interfaces, namely:
  • QueryExpression
  • Insert
  • Update
  • Delete
  • BatchedUpdates
  • Call
Important components of these commands are expressions, criteria, and joins, which are examined in closer detail below. For more on the classes and interfaces described here, refer to the Data Services JavaDocs.

Expressions

An expression represents a single value in context, although in some cases that value may change as the query is evaluated. For example, a literal value, such as 5, represents an integer value. A column reference such as "table.EmployeeName" represents a column in a data source and may take on many values while the command is being evaluated.
  • Expression – base expression interface
  • ColumnReference – represents a column in the data source
  • Literal – represents a literal scalar value, but may also be multi-valued in the case of bulk updates.
  • Function – represents a scalar function with parameters that are also Expressions
  • AggregateFunction – represents an aggregate function which holds a single expression
  • WindowFunction – represents a window function which holds an AggregateFunction (which is also used to represent analytical functions) and a WindowSpecification
  • ScalarSubquery – represents a subquery that returns a single value
  • SearchedCase, SearchedWhenClause – represents a searched CASE expression. The searched CASE expression evaluates the criteria in WHEN clauses until one of them evaluates to TRUE, then evaluates the associated THEN clause.

Condition

A criteria is a combination of expressions and operators that evaluates to true, false, or unknown.  Criteria are most commonly used in the WHERE or HAVING clauses.
  • Condition – the base criteria interface
  • Not – used to NOT another criteria
  • AndOr – used to combine other criteria via AND or OR
  • SubqueryComparison – represents a comparison criteria with a subquery, including a quantifier such as SOME or ALL
  • Comparison – represents a comparison criteria with =, >, <, etc.
  • BaseInCondition – base class for an IN criteria
  • In – represents an IN criteria that has a set of expressions for values
  • SubqueryIn – represents an IN criteria that uses a subquery to produce the value set
  • IsNull – represents an IS NULL criteria
  • Exists – represents an EXISTS criteria that determines whether a subquery will return any values
  • Like – represents a LIKE/SIMILAR TO/LIKE_REGEX criteria that compares string values

The FROM Clause

The FROM clause contains a list of TableReferences.
  • NamedTable – represents a single Table
  • Join – has a left and right TableReference and information on the join between the items
  • DerivedTable – represents a table defined by an inline QueryExpression
A list of TableReferences is used by default in the pushdown query when no outer joins are used. If an outer join is used anywhere in the join tree, there will be a tree of Joins with a single root. This latter form is the ANSI preferred style. If you wish all pushdown queries containing joins to be in ANSI style, have the capability "useAnsiJoin" return true. Refer to the section “Command Form” for more information.

QueryExpression Structure

QueryExpression is the base for both SELECT queries and set queries. It may optionally take an OrderBy (representing a SQL ORDER BY clause), a Limit (representing a SQL LIMIT clause), or a With (representing a SQL WITH clause).

Select Structure

Each QueryExpression can be a Select describing the expressions (typically elements) being selected and a TableReference specifying the table or tables being selected from, along with any join information. The Select may optionally also supply a Condition (representing a SQL WHERE clause), a GroupBy (representing a SQL GROUP BY clause), and a Condition (representing a SQL HAVING clause).

SetQuery Structure

A QueryExpression can also be a SetQuery that represents the SQL set operations (UNION, INTERSECT, EXCEPT) on two QueryExpressions. The all flag may be set to indicate UNION ALL (currently INTERSECT ALL and EXCEPT ALL are not supported).

With Structure

A With clause contains named QueryExpressions held by WithItems that can be referenced as tables in the main QueryExpression.

Insert Structure

Each Insert will have a single NamedTable specifying the table being inserted into. It will also have a list of ColumnReferences specifying the columns of the NamedTable that are being inserted into. It also has an InsertValueSource, which will be either a list of Expressions (ExpressionValueSource), a QueryExpression, or an Iterator (IteratorValueSource).

Update Structure

Each Update will have a single NamedTable specifying the table being updated and a list of SetClause entries that specify ColumnReference and Expression pairs for the update. The Update may optionally provide a criteria Condition specifying which rows should be updated.

Delete Structure

Each Delete will have a single NamedTable specifying the table being deleted from. It may also optionally have a criteria specifying which rows should be deleted.

Call Structure

Each Call has zero or more Argument objects. The Argument objects describe the input parameters, the output result set, and the output parameters.

BatchedUpdates Structure

Each BatchedUpdates has a list of Command objects (which must be either Insert, Update or Delete) that compose the batch.

4.4.2. Language Utilities

This section covers utilities available when using, creating, and manipulating the language interfaces.

Data Types

The Translator API contains an interface TypeFacility that defines data types and provides value translation facilities. This interface can be obtained from calling the ExecutionFactory.getTypeFacility() method.
The TypeFacility interface has methods that support data type transformation and detection of appropriate runtime or JDBC types. The TypeFacility.RUNTIME_TYPES and TypeFacility.RUNTIME_NAMES interfaces define constants for all Data Services runtime data types. All Expression instances define a data type based on this set of types. These constants are often needed in understanding or creating language interfaces.

Language Manipulation

In Translators that support a richer set of capabilities, there is often a need to manipulate or create language interfaces with a similar syntax to those being translated to. This is often the case when translating to a language comparable to SQL. Some utilities are provided for this purpose.
Similar to the TypeFacility, you can call the getLanguageFactory() method on the ExecutionFactory to get a reference to the LanguageFactory instance for your translator. This interface is a factory that can be used to create new instances of all the concrete language interface objects.
Some helpful utilities for working with Condition objects are provided in the LanguageUtil class.  This class has methods to combine Condition with AND or to break a Condition apart based on AND operators.  These utilities are helpful for breaking apart a criteria into individual filters that your translator can implement.
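The AND-splitting behavior can be sketched with a miniature, self-contained model. Cond, Comp, and And are hypothetical stand-ins for the org.teiid.language Condition classes, and separateByAnd mirrors the behavior of the LanguageUtil helper; it is not the real API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical miniature Condition model.
interface Cond {}

final class Comp implements Cond {            // stands in for Comparison
    final String text;
    Comp(String text) { this.text = text; }
    @Override public String toString() { return text; }
}

final class And implements Cond {             // stands in for AndOr with AND
    final Cond left, right;
    And(Cond left, Cond right) { this.left = left; this.right = right; }
}

class LangUtilDemo {
    // Flatten a tree of ANDs into its individual conjuncts, left to right.
    static List<Cond> separateByAnd(Cond c) {
        List<Cond> out = new ArrayList<>();
        collect(c, out);
        return out;
    }

    private static void collect(Cond c, List<Cond> out) {
        if (c instanceof And) {
            collect(((And) c).left, out);
            collect(((And) c).right, out);
        } else {
            out.add(c);
        }
    }
}
```

Breaking a criteria apart this way lets a translator handle each filter individually, as described above.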

4.4.3. Runtime Metadata

Data Services uses a library of metadata, known as runtime metadata, for each virtual database that is deployed in Data Services. The runtime metadata is a subset of the metadata defined by the models that compose the virtual database. While building your VDB in the Designer, you can define what is called an Extension Model, which defines any number of arbitrary properties on a model and its objects. At runtime, using the runtime metadata interface, you can use the properties that were defined at design time to define execution behavior.
The Translator gets access to the RuntimeMetadata interface at the time of Execution creation. Translators can access runtime metadata by using the interfaces defined in the org.teiid.metadata package. This package defines an API representing a Schema, Table, Columns, and Procedures, and ways to navigate these objects.

Metadata Objects

All of the metadata objects extend the AbstractMetadataRecord class:
  • Column - returns Column metadata record
  • Table - returns a Table metadata record
  • Procedure - returns a Procedure metadata record
  • ProcedureParameter - returns a Procedure Parameter metadata record
Once a metadata record has been obtained, it is possible to use its metadata about that object or to find other related metadata.

Access to Runtime Metadata

The RuntimeMetadata interface is passed in for the creation of an "Execution". See "createExecution" method on the "ExecutionFactory" class. It provides the ability to look up metadata records based on their fully qualified names in the VDB.

Example 4.1. Obtaining Metadata Properties

The process of getting a Table's properties is sometimes needed for translator development. For example, to get the "NameInSource" property or all extension properties:
//getting the Table metadata from a Table is straightforward
Table table = runtimeMetadata.getTable("table-name");
String contextName = table.getNameInSource();

//The props will contain extension properties
Map<String, String> props = table.getProperties();

4.4.4. Language Visitors Framework

The API provides a language visitor framework in the org.teiid.language.visitor package.  The framework provides utilities useful in navigating and extracting information from trees of language objects.
The visitor framework is a variant of the Visitor design pattern, which is documented in several popular design pattern references.  The visitor pattern encompasses two primary operations: traversing the nodes of a graph (also known as iteration) and performing some action at each node of the graph.  In this case, the nodes are language interface objects and the graph is really a tree rooted at some node.  The provided framework allows for customization of both aspects of visiting.
The base AbstractLanguageVisitor class defines the visit methods for all leaf language interfaces that can exist in the tree. The LanguageObject interface defines an acceptVisitor() method – this method will call back on the visit method of the visitor to complete the contract. The visit methods on AbstractLanguageVisitor are empty: it is just a visitor shell that performs no actions when visiting nodes and does not provide any iteration.
The HierarchyVisitor provides the basic code for walking a language object tree.  The HierarchyVisitor performs no action as it walks the tree – it just encapsulates the knowledge of how to walk it.  If your translator wants to provide a custom iteration that walks the objects in a special order (to exclude nodes, include nodes multiple times, conditionally include nodes, etc) then you must either extend HierarchyVisitor or build your own iteration visitor.  In general, that is not necessary.
The DelegatingHierarchyVisitor is a special subclass of the HierarchyVisitor that provides the ability to perform a different visitor's processing before and after iteration. This allows users of this class to implement either pre- or post-order processing based on the HierarchyVisitor. Two helper methods are provided on DelegatingHierarchyVisitor to aid in executing pre- and post-order visitors.

Provided Visitors

The SQLStringVisitor is a special visitor that can traverse a tree of language interfaces and output the equivalent Data Services SQL.  This visitor can be used to print language objects for debugging and logging.  The SQLStringVisitor does not use the HierarchyVisitor described in the last section; it provides both iteration and processing type functionality in a single custom visitor.    
The CollectorVisitor is a handy utility to collect all language objects of a certain type in a tree. Some additional helper methods exist for common tasks such as retrieving all elements in a tree, retrieving all groups in a tree, and so on.

Writing a Visitor

Writing your own visitor can be quite easy if you use the provided facilities. If the normal method of iterating the language tree is sufficient, then just follow these steps:
  1. Create a subclass of AbstractLanguageVisitor. Override any visit methods needed for your processing. For instance, if you want to count the number of elements in the tree, you need only override the visit(ColumnReference) method. Collect any state in local variables and provide accessor methods for that state.
  2. Decide whether to use pre-order or post-order iteration. Note that visitation order is based upon the syntax ordering of SQL clauses - not processing order.
  3. Write code to execute your visitor using the utility methods on DelegatingHierarchyVisitor:
// Get object tree 
LanguageObject objectTree = …

// Create your visitor initialize as necessary
MyVisitor visitor = new MyVisitor();

// Call the visitor using pre-order visitation
DelegatingHierarchyVisitor.preOrderVisit(visitor, objectTree);

// Retrieve state collected while visiting
int count = visitor.getCount();
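The counting visitor described in step 1 can be made concrete with a self-contained sketch. Node, ColumnRef, and ColumnCounter are hypothetical stand-ins for the org.teiid.language objects and an AbstractLanguageVisitor subclass, with the pre-order walk inlined rather than delegated to HierarchyVisitor.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical miniature language tree: a node with child nodes.
class Node {
    final List<Node> children = new ArrayList<>();
    Node add(Node child) { children.add(child); return this; }
}

class ColumnRef extends Node {}   // stands in for ColumnReference

// Walks the tree pre-order and counts ColumnRef nodes, mirroring an
// AbstractLanguageVisitor subclass that overrides visit(ColumnReference).
class ColumnCounter {
    private int count;

    void visit(Node node) {
        if (node instanceof ColumnRef) {
            count++;                      // action performed at this node
        }
        for (Node child : node.children) {
            visit(child);                 // pre-order descent
        }
    }

    int getCount() { return count; }      // accessor for the collected state
}
```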

4.4.5. Translator Capabilities

The ExecutionFactory class defines all the methods that describe the capabilities of a Translator. These are used by the Connector Manager to determine what kinds of commands the translator is capable of executing. The base ExecutionFactory class implements all the basic capabilities methods, indicating that your translator does not support any capabilities. Your extended ExecutionFactory class must override the necessary methods to specify which capabilities your translator supports. You should consult the debug log of query planning (set showplan debug) to see if the desired pushdown requires additional capabilities.

Capability Scope

Note that your capabilities must remain unchanged for the lifetime of the translator, since the engine will cache them for reuse by all instances of that translator. Capabilities based on connection or user are not supported.

Capabilities

The following table lists the capabilities that can be specified in the ExecutionFactory class.

Table 4.2. Available Capabilities

Translator can support SELECT DISTINCT in queries.
Translator can support SELECT of more than just column references.
Translator can support Tables in the FROM clause that have an alias.
Translator can support inner and cross joins
AliasedGroups and at least one of the join type supports.
Translator can support a self join between two aliased versions of the same Table.
Translator can support LEFT and RIGHT OUTER JOIN.
Translator can support FULL OUTER JOIN.
Join and base subquery support, such as ExistsCriteria
Translator can support subqueries in the ON clause. Defaults to true.
Translator can support a named subquery in the FROM clause.
Not currently used - between criteria is rewritten as compound comparisons.
Translator can support comparison criteria with the operator "=".
Translator can support comparison criteria with the operator ">" or "<".
Translator can support LIKE criteria.
Translator can support LIKE criteria with an ESCAPE character clause.
Translator can support SIMILAR TO criteria.
Translator can support LIKE_REGEX criteria.
Translator can support IN predicate criteria.
Translator can support IN predicate criteria where values are supplied by a subquery.
Translator can support IS NULL predicate criteria.
Translator can support the OR logical criteria.
Translator can support the NOT logical criteria. IMPORTANT: This capability also applies to negation of predicates, such as specifying IS NOT NULL, "<=" (not ">"), ">=" (not "<"), etc.
Translator can support EXISTS predicate criteria.
Translator can support a quantified comparison criteria using the ALL quantifier.
Translator can support a quantified comparison criteria using the SOME or ANY quantifier.
Translator can support the ORDER BY clause in queries.
Translator can support ORDER BY items that are not directly specified in the select clause.
Translator can support ORDER BY items with NULLS FIRST/LAST.
Translator can support an explicit GROUP BY clause.
GROUP BY is restricted to only non-join queries.
Translator can support the HAVING clause.
Translator can support the AVG aggregate function.
Translator can support the COUNT aggregate function.
Translator can support the COUNT(*) aggregate function.
At least one of the aggregate functions.
Translator can support the keyword DISTINCT inside an aggregate function.  This keyword indicates that duplicate values within a group of rows will be ignored.
Translator can support the MAX aggregate function.
Translator can support the MIN aggregate function.
Translator can support the SUM aggregate function.
Translator can support the VAR_SAMP, VAR_POP, STDDEV_SAMP, STDDEV_POP aggregate functions.
Translator can support the use of a subquery in a scalar context (wherever an expression is valid).
At least one of the subquery pushdown capabilities.
Translator can support a correlated subquery that refers to an element in the outer query.
Not currently used - simple case is rewritten as searched case.
Translator can support "searched" CASE expressions anywhere that expressions are accepted.
Translator supports UNION and UNION ALL.
Translator supports INTERSECT.
Translator supports EXCEPT.
At least one of UNION, INTERSECT, or EXCEPT.
Translator supports set queries with an ORDER BY.
Translator can support the limit portion of the limit clause.
Translator can support the offset portion of the limit clause.
Translator can support non-column reference grouping expressions.
Translator supports INSERT statements with values specified by a QueryExpression.
Translator supports a batch of INSERT, UPDATE and DELETE commands to be executed together.
Translator supports updates with multiple value sets.
Translator supports inserts with an iterator of values. The values would typically be from an evaluated QueryExpression.
Translator supports the WITH clause.
Translator supports window functions and analytic functions RANK, DENSE_RANK, and ROW_NUMBER.
Translator supports windowed aggregates with a window order by clause.
ElementaryOlapOperations, AggregatesDistinct
Translator supports windowed distinct aggregates.
Translator supports aggregate conditions.
Function support for a parse/format function and an implementation of the supportsFormatLiteral method.
Translator supports only literal format patterns, which must be validated by the supportsFormatLiteral method.
Translator supports the given literal format string.

Note that any pushdown subquery must itself be compliant with the Translator capabilities.

Command Form

The method ExecutionFactory.useAnsiJoin() should return true if the Translator prefers the use of ANSI style join structure for join trees that contain only INNER and CROSS joins.
The method ExecutionFactory.requiresCriteria() should return true if the Translator requires criteria for any Query, Update, or Delete. This is a replacement for the model support property "Where All".

Scalar Functions

The method ExecutionFactory.getSupportedFunctions() can be used to specify which scalar functions the Translator supports.  The set of possible functions is based on the set of functions supported by Data Services.  This set can be found in the Data Services Reference Guide.   If the Translator states that it supports a function, it must support all type combinations and overloaded forms of that function.
There are also five standard operators that can also be specified in the supported function list: +, -, *, /, and ||.
The constants interface SourceSystemFunctions contains the string names of all possible built-in pushdown functions. Note that not all system functions appear in this list. This is because some system functions will always be evaluated in Data Services, are simple aliases to other functions, or are rewritten to a more standard expression.

Physical Limits

The method ExecutionFactory.getMaxInCriteriaSize() can be used to specify the maximum number of values that can be passed in an IN criteria.  This is an important constraint as an IN criteria is frequently used to pass criteria between one source and another using a dependent join.
The method ExecutionFactory.getMaxDependentInPredicates() is used to specify the maximum number of IN predicates (of at most MaxInCriteriaSize) that can be passed as part of a dependent join. For example if there are 10000 values to pass as part of the dependent join and a MaxInCriteriaSize of 1000 and a MaxDependentInPredicates setting of 5, then the dependent join logic will form two source queries each with 5 IN predicates of 1000 values each combined by OR.
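The arithmetic above can be sketched as follows. This is a hypothetical helper written for illustration only, not part of the Teiid API; it shows how the value count is split into IN predicates and then into source queries:

```java
public class DependentJoinMath {

    // Ceiling division: how many buckets of size 'max' are needed for 'total' items.
    static int ceilDiv(int total, int max) {
        return (total + max - 1) / max;
    }

    // Number of IN predicates needed, then number of source queries issued.
    static int sourceQueries(int totalValues, int maxInCriteriaSize, int maxDependentInPredicates) {
        int inPredicates = ceilDiv(totalValues, maxInCriteriaSize);
        return ceilDiv(inPredicates, maxDependentInPredicates);
    }

    public static void main(String[] args) {
        // 10000 values with MaxInCriteriaSize 1000 -> 10 IN predicates;
        // with MaxDependentInPredicates 5 -> 2 source queries of 5 predicates each.
        System.out.println(sourceQueries(10000, 1000, 5)); // prints 2
    }
}
```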
The method ExecutionFactory.getMaxFromGroups() can be used to specify the maximum number of FROM clause groups that can be used in a join. -1 indicates there is no limit.

Update Execution Modes

The method ExecutionFactory.supportsBatchedUpdates() can be used to indicate that the Translator supports executing the BatchedUpdates command.
The method ExecutionFactory.supportsBulkUpdate() can be used to indicate that the Translator accepts update commands containing multi-valued Literals.
Note that if the translator does not support either of these update modes, the query engine will compensate by issuing the updates individually.

Default Behavior

The method ExecutionFactory.getDefaultNullOrder() specifies the default null order; it can be one of UNKNOWN, LOW, HIGH, FIRST, or LAST. This is only used if ORDER BY is supported, but null ordering is not.
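To illustrate the null-ordering semantics involved, the plain-Java sketch below (unrelated to the Teiid API) shows what a LOW or FIRST style default null order means for an ascending sort:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class NullOrderDemo {
    public static void main(String[] args) {
        List<Integer> values = new ArrayList<>(Arrays.asList(3, null, 1));

        // A source whose default null order is LOW sorts nulls before other
        // values in an ascending ORDER BY (i.e. NULLS FIRST semantics).
        values.sort(Comparator.nullsFirst(Comparator.<Integer>naturalOrder()));
        System.out.println(values); // prints [null, 1, 3]
    }
}
```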

4.5. Large Objects

This section examines how to use facilities provided by the Data Services API to use large objects such as blobs, clobs, and xml in your Translator.

4.5.1. Data Types

Data Services supports three large object runtime data types: blob, clob, and xml. A blob is a "binary large object", a clob is a "character large object", and xml is an "XML document". Columns modeled as a blob, clob, or xml are treated similarly by the translator framework to support memory-safe streaming.

4.5.2. Why Use Large Object Support?

Data Services allows a Translator to return a large object through the Data Services translator API by just returning a reference to the actual large object.  Access to that LOB will be streamed as appropriate rather than retrieved all at once.  This is useful for several reasons:
  1. Reduces memory usage when returning the result set to the user.
  2. Improves performance by passing less data in the result set.
  3. Allows access to large objects when needed rather than assuming that users will always use the large object data.
  4. Allows the passing of arbitrarily large data values.
However, these benefits can only truly be gained if the Translator itself does not materialize an entire large object all at once.  For example, the Java JDBC API supports a streaming interface for blob and clob data.

4.5.3. Handling Large Objects

The Translator API automatically handles large objects (Blob/Clob/SQLXML) through the creation of special purpose wrapper objects when it retrieves results.
Once the wrapped object is returned, streaming of the LOB is automatically supported. These LOB objects can then, for example, appear in client results, be used in user defined functions, or be sent to other translators.
An Execution is usually closed, and the underlying connection closed or released, as soon as all rows for that execution have been retrieved. However, LOB objects may need to be read after the initial retrieval of results. When LOBs are detected, the default closing behavior is prevented by setting a flag using the ExecutionContext.keepAlive() method.
When the keep-alive flag is set, the execution object is only closed when the user's Statement is closed.

4.5.4. Inserting or Updating Large Objects

LOBs will be passed to the Translator in the language objects as a Literal containing a java.sql.Blob, java.sql.Clob, or java.sql.SQLXML. You can use these interfaces to retrieve the data in the large object and use it for insert or update.
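As a sketch of consuming such a value, the following plain-Java example drains a java.sql.Clob's character stream into a String, the way a translator might when binding an INSERT or UPDATE value. The JDK's SerialClob stands in here for the Clob a Literal would actually carry:

```java
import java.io.Reader;
import java.sql.Clob;
import javax.sql.rowset.serial.SerialClob;

public class ClobReader {

    // Drain a java.sql.Clob into a String, as a translator might do when
    // binding the value of an INSERT or UPDATE parameter.
    static String clobToString(Clob clob) throws Exception {
        StringBuilder sb = new StringBuilder();
        try (Reader reader = clob.getCharacterStream()) {
            char[] buffer = new char[1024];
            int read;
            while ((read = reader.read(buffer)) != -1) {
                sb.append(buffer, 0, read);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        Clob clob = new SerialClob("large character data".toCharArray());
        System.out.println(clobToString(clob)); // prints "large character data"
    }
}
```

For genuinely large values, a translator would stream the Reader directly to the target (for example with PreparedStatement.setCharacterStream) rather than materializing the whole String as done here for brevity.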

4.6. Delegating Translator

In some instances, you may wish to extend multiple translators with the same functionality. Rather than create separate subclasses for each extension, functionality that is common to multiple extensions can be added to a subclass of BaseDelegatingExecutionFactory. Within this subclass, delegation methods can be overridden to perform the common functionality.
public class MyTranslator extends BaseDelegatingExecutionFactory<Object, Object> {

	@Override
	public Execution createExecution(Command command,
			ExecutionContext executionContext, RuntimeMetadata metadata,
			Object connection) throws TranslatorException {
		if (command instanceof Select) {
			//modify the command or return a different execution
		}
		//the super call will be to the delegate instance
		return super.createExecution(command, executionContext, metadata, connection);
	}
}
You will bundle and deploy your custom delegating translator just like any other custom translator. To use your delegating translator in a VDB, you define a translator override that wires in the delegate.
<translator type="custom-delegator" name="my-translator">

     <property name="delegateName" value="name of the delegate instance"/>

     <!-- any custom properties you may have on your custom translator -->

</translator>

From the previous example the translator type is custom-delegator. Now my-translator can be used as a translator-name on a source and will proxy all calls to whatever delegate instance you assign.


The delegate instance can be any translator instance, whether configured by its own translator entry or just the name of a standard translator type. By default, when using a BaseDelegatingExecutionFactory, standard translator property settings overridden on your instance will have no effect, since the underlying delegate is called instead.
You may also use a different class hierarchy and make your custom translator implement DelegatingExecutionFactory instead.

4.7. Packaging

Once the "ExecutionFactory" class is implemented, package it in a JAR file. The only additional requirement is to provide a file called "jboss-beans.xml" in the "META-INF" directory of the JAR file, with the following contents. Replace ${name} with the name of your translator, and replace ${execution-factory-class} with your overridden ExecutionFactory class name. This will register the Translator for use with the tooling and Admin API.
<?xml version="1.0" encoding="UTF-8"?>
<deployment xmlns="urn:jboss:bean-deployer:2.0">

   <bean name="translator-${name}-template" class="org.teiid.templates.TranslatorDeploymentTemplate">
      <property name="info"><inject bean="translator-${name}"/></property>
      <property name="managedObjectFactory"><inject bean="ManagedObjectFactory"/></property>
   </bean>

   <bean name="translator-${name}" class="org.teiid.templates.TranslatorTemplateInfo">
      <constructor factoryMethod="createTemplateInfo">
         <factory bean="TranslatorDeploymentTemplateInfoFactory"/>
         <parameter class="java.lang.Class">org.teiid.templates.TranslatorTemplateInfo</parameter>
         <parameter class="java.lang.Class">${execution-factory-class}</parameter>
         <parameter class="java.lang.String">translator-${name}</parameter>
         <parameter class="java.lang.String">${name}</parameter>
      </constructor>
   </bean>

</deployment>


4.8. Deployment

To deploy your Translator, copy the Translator JAR file into the server/PROFILE/deploy/ directory of the server. The translator will be deployed automatically and will not require a server restart.
If your Translator has external dependencies on other JAR libraries, they need to be placed inside the server/PROFILE/lib directory of the server. This will require a server restart.


Optionally you can include all the required JAR libraries in the Translator JAR file. This will remove the requirement to restart the server for the deployment of the JAR libraries but could introduce conflicts if any of those dependencies are already available in the server. Any conflicts must be resolved, usually by removing the JAR files from the Translator JAR, before it can be deployed successfully.

Chapter 5. Extending The JDBC Translator

New custom Translators can be created by extending the JDBC Translator. This is one of the most common use-cases for custom Translator development and is often done to add support for JDBC drivers and database versions. This chapter describes this process.
To design a JDBC Translator for any relational database management system (RDBMS) that is not already supported by Data Services, extend the org.teiid.translator.jdbc.JDBCExecutionFactory class in the translator-jdbc module. There are three types of methods that you can override from the base class to define the behavior of the Translator.

Table 5.1. Extensions

Capabilities
Specify the SQL syntax and functions the source supports.
SQL Translation
Customize what SQL syntax is used, how source-specific functions are supported, and how procedures are executed.
Results Translation
Customize how results are retrieved from JDBC and translated.

5.1. Capabilities Extension

This extension must override the methods that begin with "supports" that describe translator capabilities. Refer to Section 4.4.5, “Translator Capabilities” for all the available translator capabilities.
The most common example is adding support for a scalar function – this requires both declaring that the translator has the capability to execute the function and often modifying the SQL Translator to translate the function appropriately for the source.
Another common example is turning off unsupported SQL capabilities (such as outer joins or subqueries) for less sophisticated JDBC sources.

5.2. SQL Translation Extension

The JDBCExecutionFactory provides several methods to modify the command and the string form of the resulting syntax before it is sent to the JDBC driver, including:
  • Change basic SQL syntax options. See the useXXX methods, e.g. useSelectLimit returns true for SQLServer to indicate that limits are applied in the SELECT clause.
  • Register one or more FunctionModifiers that define how a scalar function should be modified or transformed.
  • Modify a LanguageObject - see the translate, translateXXX, and FunctionModifiers.translate methods. Modify the passed-in object and return null to indicate that the standard syntax output should be used.
  • Change the way SQL strings are formed for a LanguageObject - see the translate, translateXXX, and FunctionModifiers.translate methods. Return a list of parts, which can contain strings and LanguageObjects, that will be appended in order to the SQL string. If the incoming LanguageObject appears in the returned list, it will not be translated again.

5.3. Results Translation Extension

The JDBCExecutionFactory provides several methods to modify the java.sql.Statement and java.sql.ResultSet interactions, including:
  1. Overriding the createXXXExecution methods to subclass the corresponding JDBCXXXExecution. The JDBCBaseExecution has protected methods to get the appropriate statement (getStatement, getPreparedStatement, getCallableStatement) and to bind prepared statement values (bindPreparedStatementValues).
  2. Retrieve values from the JDBC ResultSet or CallableStatement - see the retrieveValue methods.

5.4. Adding Function Support

Refer to Chapter 6, User Defined Functions for adding new functions to Data Services. This example will show you how to declare support for the function and modify how the function is passed to the data source.
Following is a summary of all coding steps in supporting a new scalar function:
  1. Override the capabilities method to declare support for the function (REQUIRED)
  2. Implement a FunctionModifier to change how a function is translated and register it for use (OPTIONAL)
There is a capabilities method getSupportedFunctions() that declares all supported scalar functions.
The following is an example of an extended capabilities class that adds support for the "abs" absolute value function:
package my.connector;

import java.util.ArrayList;
import java.util.List;

import org.teiid.translator.jdbc.JDBCExecutionFactory;

public class ExtendedJDBCExecutionFactory extends JDBCExecutionFactory {

   @Override
   public List<String> getSupportedFunctions() {
      List<String> supportedFunctions = new ArrayList<String>();
      supportedFunctions.addAll(super.getSupportedFunctions());
      supportedFunctions.add("abs"); //declare support for the abs function
      return supportedFunctions;
   }
}
In general, it is a good idea to call super.getSupportedFunctions() to ensure that you retain any function support provided by the translator you are extending.
This may be all that is needed to support a Data Services function if the JDBC data source supports the same syntax as Data Services. The built-in SQL translation will translate most functions as: "function(arg1, arg2, …)".
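The default rendering described above can be illustrated with a trivial helper (written for illustration only, not Teiid code):

```java
public class DefaultFunctionRendering {

    // Mimics the built-in SQL translation for functions: name(arg1, arg2, ...)
    static String render(String name, String... args) {
        return name + "(" + String.join(", ", args) + ")";
    }

    public static void main(String[] args) {
        System.out.println(render("abs", "-3"));        // prints abs(-3)
        System.out.println(render("concat", "a", "b")); // prints concat(a, b)
    }
}
```

When a source deviates from this form (a different name, infix syntax, extra wrapping calls), that is exactly when a FunctionModifier, described in the next section, is needed.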

5.4.1. Using FunctionModifiers

In some cases you may need to translate the function differently or even insert additional function calls above or below the function being translated. The JDBC translator provides an abstract class FunctionModifier for this purpose.
During the start method a modifier instance can be registered against a given function name via a call to JDBCExecutionFactory.registerFunctionModifier.
The FunctionModifier has a method called translate. Use the translate method to change the way the function is represented.
The following example overrides the translate method to change the MOD(a, b) function into an infix operator for Sybase (a % b). The translate method returns a list of strings and language objects that will be assembled by the translator into a final string. The strings will be used as-is and the language objects will be further processed by the translator.
public class ModFunctionModifier extends FunctionModifier {

   @Override
   public List<?> translate(Function function) {
      List<Object> parts = new ArrayList<Object>();
      List<Expression> args = function.getParameters();
      parts.add(args.get(0));
      parts.add(" % ");
      parts.add(args.get(1));
      return parts;
   }
}
In addition to building your own FunctionModifiers, there are a number of pre-built generic function modifiers that are provided with the translator.

Table 5.2. Common Modifiers

AliasModifier
Handles simply renaming a function ("ucase" to "upper" for example)
EscapeSyntaxModifier
Wraps a function in the standard JDBC escape syntax for functions: {fn xxxx()}

To register the function modifiers for your supported functions, call the JDBCExecutionFactory.registerFunctionModifier(String name, FunctionModifier modifier) method.
public class ExtendedJDBCExecutionFactory extends JDBCExecutionFactory {

   @Override
   public void start() throws TranslatorException {
      super.start();

      // register functions.
      registerFunctionModifier("abs", new MyAbsModifier());
      registerFunctionModifier("concat", new AliasModifier("concat2"));
   }
}
Support for the two functions being registered ("abs" and "concat") must be declared in the capabilities as well. Functions that do not have modifiers registered will be translated as usual.

5.5. Installing Extensions

Once you have developed an extension to the JDBC translator, you must install it into the Data Services Server. The process of packaging and deploying an extended JDBC translator is exactly the same as for any other translator. Since the RDBMS is already accessible through its JDBC driver, there is no need to develop a resource adapter for this source, as JBoss EAP provides a wrapper JCA connector (DataSource) for any JDBC driver.

Chapter 6. User Defined Functions

If you need to extend the Data Services scalar function library, Data Services provides a means to define custom scalar functions, or User Defined Functions (UDFs). The following steps need to be taken when creating a UDF.

6.1. UDF Definition

A {FunctionDefinition}.xmi file provides metadata to the query engine on User Defined Functions. See the Designer Documentation for more on creating a Function Definition Model.
The following are used to define a UDF.
  • Function Name - When you create the function name, keep these requirements in mind:
    • You cannot overload existing Teiid System functions.
    • The function name must be unique among user-defined functions in its model for the number of arguments.  You can use the same function name for different numbers or types of arguments.  Hence, you can overload your user-defined functions.
    • The function name cannot contain the '.' character.
    • The function name cannot exceed 255 characters.
  • Input Parameters - defines a type specific signature list. All arguments are considered required.
  • Return Type - the expected type of the returned scalar value.
  • Pushdown - can be one of REQUIRED, NEVER, or ALLOWED, and indicates the expected pushdown behavior. If NEVER or ALLOWED is specified, then a Java implementation of the function should be supplied. If REQUIRED is used, then the user must extend the Translator for the source and add this function to its pushdown function library.
  • invocationClass/invocationMethod - optional properties indicating the static method to invoke when the UDF is not pushed down.
  • Deterministic - whether the method will always return the same result for the same input parameters.
Even pushdown required functions need to be added as a UDF to allow Data Services to properly parse and resolve the function. Pushdown scalar functions differ from normal user-defined functions in that no code is provided for evaluation in the engine. An exception will be raised if a pushdown required function cannot be evaluated by the appropriate source.

Dynamic VDBs

Currently there is no provision to add a UDF when you are working with Dynamic VDBs. However, you can extend the Translator to define source pushdown functions.

6.2. Source Supported UDF

While Data Services provides an extensive scalar function library, it contains only those functions that can be evaluated within the query engine. In many circumstances, especially for performance, a user-defined function allows for calling a source-specific function.
For example, suppose you want to use the Oracle-specific functions score and contains:
SELECT score(1), ID, FREEDATA FROM Docs WHERE contains(freedata, 'nick', 1) > 0
The score and contains functions are not part of the built-in scalar function library. While you could write your own custom scalar functions to mimic their behavior, it is more likely that you would want to use the actual Oracle functions that are provided when using the Oracle Free Text functionality.
In addition to the normal steps outlined in the section to create and install a function model (FunctionDefinitions.xmi), you will need to extend the appropriate connector(s).
For example, to extend the Oracle Connector
  • Required - extend the OracleExecutionFactory and add SCORE and CONTAINS as supported pushdown functions by either overriding or adding additional functions in "getPushDownFunctions" method. For this example, we'll call the class MyOracleExecutionFactory. Add the org.teiid.translator.Translator annotation to the class, e.g. @Translator(name="myoracle")
  • Optionally register new FunctionModifiers on the start of the ExecutionFactory to handle translation of these functions. Given that the syntax of these functions is same as other typical functions, this probably is not needed - the default translation should work.
  • Create a new translator JAR containing your custom ExecutionFactory. Refer to Section 4.7, “Packaging” and Section 4.8, “Deployment” for instructions on using the JAR file. Once this extended translator is deployed in the Teiid Server, use "myoracle" as the translator name instead of "oracle" in your VDB's Oracle source configuration.

6.3. Non-pushdown Support for User-Defined Functions

Non-pushdown support requires a Java function that matches the metadata supplied in the FunctionDefinitions.xmi file. You must create a Java method that contains the function's logic. This Java method should accept the necessary arguments, which the Data Services System will pass to it at runtime, and the function should return the calculated or altered value.

6.3.1. Java Code

Code Requirements
  • The Java class containing the function method must be defined public.


    You can declare multiple user-defined functions for a given class.
  • The function method must be public and static.
  • Number of input arguments and types must match the function metadata defined in Section 6.1, “UDF Definition”.
  • Any exception can be thrown, but Data Services will rethrow the exception as a FunctionExecutionException.
You may optionally add an additional org.teiid.CommandContext argument as the first parameter. The CommandContext interface provides access to information about the current command, such as the executing user, the Subject, the VDB, the session id, etc. This CommandContext parameter should not be declared in the function metadata.
package org.something;

public class TempConv {
   /**
    * Converts the given Celsius temperature to Fahrenheit, and returns the
    * value.
    * @param doubleCelsiusTemp
    * @return Fahrenheit
    */
   public static Double celsiusToFahrenheit(Double doubleCelsiusTemp) {
      if (doubleCelsiusTemp == null) {
         return null;
      }
      return (doubleCelsiusTemp)*9/5 + 32;
   }
}
package org.something;

import java.sql.Timestamp;

import org.teiid.CommandContext;

public class SessionInfo {
   /**
    * @param context
    * @return the created Timestamp
    */
   public static Timestamp sessionCreated(CommandContext context) {
      return new Timestamp(context.getSession().getCreatedTime());
   }
}
The corresponding user-defined function would be declared as Timestamp sessionCreated().

6.3.2. Post Code Activities

  1. After coding the functions you should compile the Java code into a Java Archive (JAR) file.
  2. The JAR should be available in the classpath of Data Services - this could be the server profile lib, or the deployers/teiid.deployer directory depending upon your preference.

6.4. Installing user-defined functions

Once a user-defined function model (FunctionDefinitions.xmi) has been created in the Designer Tool, it can be added to the VDB for use by Data Services.

6.5. User Defined Functions in Dynamic VDBs

Dynamic VDBs do not use Designer generated artifacts, such as a FunctionDefinition.xmi file. Even with that limitation, dynamic VDBs may still utilize UDFs through custom coding. The ExecutionFactory.getMetadata call allows for the definition of metadata via a MetadataFactory. Use MetadataFactory.addFunction to add a function for use only by that translator instance. Functions added directly to the source schema are specific to that schema - their fully qualified name will include the schema, and the function cannot be pushed to a different source.
The ExecutionFactory.getPushdownFunctions method can be used to describe functions that are valid against all instances of a given translator type. The function names are expected to be prefixed by the translator type, or some other logical grouping, e.g. salesforce.includes. The full name of the function once imported into the system will qualified by the SYS schema, e.g. SYS.salesforce.includes.
Any functions added via these mechanisms do not need to be declared in ExecutionFactory.getSupportedFunctions. Any of the additional handling, such as adding a FunctionModifier, covered above is also applicable here. All pushdown functions will have the function name set to only the simple name. Schema or other qualification will be removed. Handling, such as function modifiers, can check the function metadata if there is the potential for an ambiguity.

Chapter 7. AdminAPI

In most circumstances the admin operations will be performed through the admin console or AdminShell tooling, but it is also possible to invoke admin functionality directly in Java through the AdminAPI.
All classes for the AdminAPI are in the client jar under the org.teiid.adminapi package.

7.1. Connecting

An AdminAPI connection, which is represented by the org.teiid.adminapi.Admin interface, is obtained through the org.teiid.adminapi.AdminFactory.createAdmin methods. AdminFactory is a singleton, see AdminFactory.getInstance(). The Admin instance automatically tests its connection and reconnects to a server in the event of a failure. The close method should be called to terminate the connection.
See your Data Services installation for the appropriate admin port - the default is 31443.

7.2. Admin Methods

Admin methods exist for monitoring, server administration, and configuration purposes. Note that the objects returned by the monitoring methods, such as getRequests, are read-only and cannot be used to change server state. See the JavaDocs for all of the details.

Chapter 8. Logging

8.1. Customized Logging

The Data Services system provides a wealth of information using logging. To control logging level, contexts, and log locations, you should be familiar with log4j and the application server's jboss-log4j.xml configuration file. Refer to the Data Service Administrator Guide for more details about different Data Services contexts available. Refer to http://logging.apache.org/log4j/ for more information about log4j.
If the default log4j logging mechanisms are not sufficient for your logging needs, you may need a different appender; refer to the log4j javadocs at http://logging.apache.org/log4j/1.2/apidocs/index.html. Note that log4j already provides quite a few appenders, including JMS, RDBMS, and SMTP.
If you want a custom appender, follow the Log4J directions to write a custom appender. Refer to the instructions at http://logging.apache.org/log4net/release/faq.html. If you develop a custom logging solution, the implementation jar should be placed in the "lib" directory of the JBoss EAP server profile that Data Services is installed in.

8.1.1. Command Logging API

If you want to build a custom appender for command logging, it will have access to log4j "LoggingEvents" in the "COMMAND_LOG" context. The appender will receive a message that is an instance of org.teiid.logging.CommandLogMessage. The relevant Teiid classes are defined in the teiid-api-[versionNumber].jar. The CommandLogMessage includes information about the VDB, session, command SQL, etc. CommandLogMessages are logged at the DEBUG level. An example follows.
package org.something;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;
import org.teiid.logging.*;

public class CustomAppender extends AppenderSkeleton {
  protected void append(LoggingEvent event) {
    if (event.getMessage() instanceof CommandLogMessage) {
      CommandLogMessage clMessage = (CommandLogMessage)event.getMessage();
      String sql = clMessage.getSql();
      //log to a database, trigger an email, etc.
    }
  }
  public void close() { /* no resources to release */ }
  public boolean requiresLayout() { return false; }
}

8.1.2. Audit Logging API

A custom appender for audit logging will have access to log4j "LoggingEvents" in the "org.teiid.AUDIT_LOG" context. The appender will receive a message that is an instance of org.teiid.logging.AuditMessage. The relevant Teiid classes are defined in the teiid-api-[versionNumber].jar. The AuditMessage includes information about the user, the action, and the target(s) of the action. AuditMessages are logged at the DEBUG level. An example follows.
package org.something;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;
import org.teiid.logging.*;

public class CustomAppender extends AppenderSkeleton {
  protected void append(LoggingEvent event) {
    if (event.getMessage() instanceof AuditMessage) {
      AuditMessage auditMessage = (AuditMessage)event.getMessage();
      String activity = auditMessage.getActivity();
      //log to a database, trigger an email, etc.
    }
  }
  public void close() { /* no resources to release */ }
  public boolean requiresLayout() { return false; }
}

Chapter 9. Custom Security

9.1. Login Modules

The Data Services system provides a range of built-in and extensible security features to enable the secure access of data.  Refer to the Data Services Administrator Guide for details about how to configure the available security features.
LoginModules are an essential part of the JAAS security framework and provide Data Services with customizable user authentication and the ability to reuse existing LoginModules defined for JBoss. Refer to the JBoss Enterprise Application Platform Security Integration Guide for information about configuring security, http://docs.redhat.com/docs/en-US/JBoss_Enterprise_Application_Platform/5/html/JBoss_Security_Integration_Guide/.

9.1.1. Built-in LoginModules

JBoss Application Server provides several LoginModules for common authentication needs, such as authenticating from text files or LDAP.
Below are some of those available in JBoss Application Server:
UserRoles LoginModule
Login module that uses simple file based authentication.
LDAP LoginModule
Login module that uses LDAP based authentication.
Database LoginModule
Login module that uses Database-based authentication.
Cert LoginModule
Login module that uses X509 certificate based authentication.
For all the available login modules refer to http://community.jboss.org/docs/DOC-11287.

9.1.2. Custom LoginModules

If the provided LoginModules do not satisfy your authentication needs, refer to the JAAS LoginModule Developer's Guide, http://download.oracle.com/javase/6/docs/technotes/guides/security/jaas/JAASLMDevGuide.html.
If you are extending one of the built-in LoginModules, refer to http://community.jboss.org/docs/DOC-9466.

9.2. Custom Authorization

In situations where Teiid's built-in role mechanism is not sufficient, a custom org.teiid.PolicyDecider can be installed via the jboss-beans configuration file under the "AuthorizationValidator" bean.
	<bean name="AuthorizationValidator" class="org.teiid.dqp.internal.process.DefaultAuthorizationValidator">
	    <property name="enabled">true</property>
	    <property name="policyDecider"><inject bean="PolicyDecider"/></property>
	</bean>

	<bean name="PolicyDecider" class="com.company.CustomPolicyDecider">
	    <property name="someProperty">some value</property>
	</bean>
Your custom PolicyDecider should be installed in a jar that is made available to the same classloader as Teiid, typically the profile lib directory. A PolicyDecider may be consulted many times for a single user command, but it is only called to make decisions based upon resources that appear in user queries. Any further access of resources through views or stored procedures, just as with data roles, is not checked against a PolicyDecider.

Chapter 10. Runtime Updates

Teiid supports several mechanisms for updating the runtime system.

10.1. Data Updates

Data change events are used by Teiid to invalidate result set cache entries. Result set cache entries are tracked by the tables that contributed to their results. By default, Teiid captures internal data events against physical sources and distributes them across the cluster. This approach has several limitations: first, updates are scoped only to their originating VDB/version; second, updates made outside of Teiid are not captured. To increase data consistency, external change data capture tools can be used to send events to Teiid. From within a Teiid cluster, the org.teiid.events.EventDistributorFactory and org.teiid.events.EventDistributor interfaces can be used to distribute change events. The EventDistributorFactory is implemented by the RuntimeEngineDeployer bean and should be looked up by its name, "teiid/engine-deployer". See the example below.
InitialContext ctx = new InitialContext();
EventDistributorFactory edf = (EventDistributorFactory)ctx.lookup("teiid/engine-deployer");
EventDistributor ed = edf.getEventDistributor();
ed.dataModification(vdbName, vdbVersion, schema, tableName);
This will distribute a change event for schema.tableName in vdb vdbName.vdbVersion.
When externally capturing all update events, the jboss-beans property RuntimeEngineDeployer.detectingChangeEvents can be set to false so that change events are not duplicated.
Using the other EventDistributor methods to manually distribute other events is not recommended.

10.2. Runtime Metadata Updates

Runtime updates via system procedures and DDL statements are by default ephemeral. They are effective across the cluster only for the currently running VDBs; with the next VDB start, the values revert to whatever is stored in the VDB. Updates can be made persistent by configuring an org.teiid.metadata.MetadataRepository. An instance of a MetadataRepository can be installed via the teiid-deployer-beans file in the VDBRepository bean. The MetadataRepository instance may implement as many of the methods as needed and return null from any unneeded getter.


It is not recommended to directly manipulate org.teiid.metadata.AbstractMetadataRecord instances. System procedures and DDL statements should be used instead since the effects will be distributed through the cluster and will not introduce inconsistencies.
org.teiid.metadata.AbstractMetadataRecord objects passed to the MetadataRepository have not yet been modified. If the MetadataRepository cannot persist the update, then a RuntimeException should be thrown to prevent the update from being applied by the runtime engine.


The MetadataRepository can be accessed by multiple threads, both during load (if using dynamic VDBs) and at runtime through DDL statements. Your implementation should handle any needed synchronization.
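As a sketch of this persistence pattern, the hypothetical class below stores view definitions in a thread-safe map keyed by VDB name, version, and table name. It is illustrative only: a real implementation would extend org.teiid.metadata.MetadataRepository, accept Table instances rather than plain names, and write to durable storage such as a database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: thread-safe storage for view definitions,
// keyed by vdbName/vdbVersion/tableName.
public class ViewDefinitionStore {
    private final Map<String, String> definitions =
            new ConcurrentHashMap<String, String>();

    private String key(String vdbName, int vdbVersion, String tableName) {
        return vdbName + "/" + vdbVersion + "/" + tableName;
    }

    // Returns null when no definition has been stored, mirroring the
    // "return null from any unneeded getter" contract described above.
    public String getViewDefinition(String vdbName, int vdbVersion, String tableName) {
        return definitions.get(key(vdbName, vdbVersion, tableName));
    }

    public void setViewDefinition(String vdbName, int vdbVersion, String tableName,
            String viewDefinition) {
        definitions.put(key(vdbName, vdbVersion, tableName), viewDefinition);
    }
}
```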

10.2.1. Costing Updates

See the Reference for the system procedures SYSADMIN.setColumnStats and SYSADMIN.setTableStats. To make costing updates persistent, MetadataRepository implementations should be provided for:
TableStats getTableStats(String vdbName, int vdbVersion, Table table);
void setTableStats(String vdbName, int vdbVersion, Table table, TableStats tableStats);
ColumnStats getColumnStats(String vdbName, int vdbVersion, Column column);
void setColumnStats(String vdbName, int vdbVersion, Column column, ColumnStats columnStats);

10.2.2. Schema Updates

See the Reference for supported DDL statements. To make schema updates persistent, implementations should be provided for:
String getViewDefinition(String vdbName, int vdbVersion, Table table);
void setViewDefinition(String vdbName, int vdbVersion, Table table, String viewDefinition);
String getInsteadOfTriggerDefinition(String vdbName, int vdbVersion, Table table, Table.TriggerEvent triggerOperation);
void setInsteadOfTriggerDefinition(String vdbName, int vdbVersion, Table table, Table.TriggerEvent triggerOperation, String triggerDefinition);
boolean isInsteadOfTriggerEnabled(String vdbName, int vdbVersion, Table table, Table.TriggerEvent triggerOperation);
void setInsteadOfTriggerEnabled(String vdbName, int vdbVersion, Table table, Table.TriggerEvent triggerOperation, boolean enabled);
String getProcedureDefinition(String vdbName, int vdbVersion, Procedure procedure);
void setProcedureDefinition(String vdbName, int vdbVersion, Procedure procedure, String procedureDefinition);			
LinkedHashMap<String, String> getProperties(String vdbName, int vdbVersion, AbstractMetadataRecord record);
void setProperty(String vdbName, int vdbVersion, AbstractMetadataRecord record, String name, String value);

ra.xml file Template

This appendix contains an example of the ra.xml file that can be used as a template when creating a new Connector.
<?xml version="1.0" encoding="UTF-8"?>
<connector xmlns="http://java.sun.com/xml/ns/j2ee"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
   http://java.sun.com/xml/ns/j2ee/connector_1_5.xsd" version="1.5">

   <vendor-name>${vendor-name}</vendor-name>
   <eis-type>${eis-type}</eis-type>
   <resourceadapter-version>1.0</resourceadapter-version>
   <license>
      <description>${license text}</description>
      <license-required>true</license-required>
   </license>
   <resourceadapter>
      <resourceadapter-class>org.teiid.resource.spi.BasicResourceAdapter</resourceadapter-class>
      <outbound-resourceadapter>
         <connection-definition>
            <managedconnectionfactory-class>${managed-connection-factory-class}</managedconnectionfactory-class>

            <!-- repeat for every configuration property -->
            <config-property>
               <description>
                  {$display:"${display-name}",$description:"${description}",
                  $required:"${required-boolean}", $defaultValue:"${default-value}"}
               </description>
               <config-property-name>${property-name}</config-property-name>
               <config-property-type>${property-type}</config-property-type>
               <config-property-value>${optional-property-value}</config-property-value>
            </config-property>

            <!-- use the below as is if you used the Connection Factory interface -->
            <connectionfactory-interface>javax.resource.cci.ConnectionFactory</connectionfactory-interface>
            <connectionfactory-impl-class>org.teiid.resource.spi.WrappedConnectionFactory</connectionfactory-impl-class>
            <connection-interface>javax.resource.cci.Connection</connection-interface>
            <connection-impl-class>org.teiid.resource.spi.WrappedConnection</connection-impl-class>
         </connection-definition>
      </outbound-resourceadapter>
   </resourceadapter>
</connector>

${...} indicates a value to be supplied by the developer.

Advanced Topics

B.1. Security Migration From Previous Versions

It is recommended that customers who have utilized the internal JDBC membership domain from releases prior to MetaMatrix 5.5 migrate those users and groups to an LDAP compliant directory server.  
Refer to the JBoss Security Integration Guide for directions on using an LDAP directory server, http://docs.redhat.com/docs/en-US/JBoss_Enterprise_Application_Platform/5/html/JBoss_Security_Integration_Guide/. Please contact technical support if you require additional guidance in the migration process.
Several free and open source directory servers are available.

GNU Lesser General Public License 2.1

                       Version 2.1, February 1999

 Copyright (C) 1991, 1999 Free Software Foundation, Inc.
 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

[This is the first released version of the Lesser GPL.  It also counts
 as the successor of the GNU Library Public License, version 2, hence
 the version number 2.1.]


                            Preamble

  The licenses for most software are designed to take away your
freedom to share and change it.  By contrast, the GNU General Public
Licenses are intended to guarantee your freedom to share and change
free software--to make sure the software is free for all its users.

  This license, the Lesser General Public License, applies to some
specially designated software packages--typically libraries--of the
Free Software Foundation and other authors who decide to use it.  You
can use it too, but we suggest you first think carefully about whether
this license or the ordinary General Public License is the better
strategy to use in any particular case, based on the explanations below.

  When we speak of free software, we are referring to freedom of use,
not price.  Our General Public Licenses are designed to make sure that
you have the freedom to distribute copies of free software (and charge
for this service if you wish); that you receive source code or can get
it if you want it; that you can change the software and use pieces of
it in new free programs; and that you are informed that you can do
these things.

  To protect your rights, we need to make restrictions that forbid
distributors to deny you these rights or to ask you to surrender these
rights.  These restrictions translate to certain responsibilities for
you if you distribute copies of the library or if you modify it.

  For example, if you distribute copies of the library, whether gratis
or for a fee, you must give the recipients all the rights that we gave
you.  You must make sure that they, too, receive or can get the source
code.  If you link other code with the library, you must provide
complete object files to the recipients, so that they can relink them
with the library after making changes to the library and recompiling
it.  And you must show them these terms so they know their rights.

  We protect your rights with a two-step method: (1) we copyright the
library, and (2) we offer you this license, which gives you legal
permission to copy, distribute and/or modify the library.

  To protect each distributor, we want to make it very clear that
there is no warranty for the free library.  Also, if the library is
modified by someone else and passed on, the recipients should know
that what they have is not the original version, so that the original
author's reputation will not be affected by problems that might be
introduced by others.

  Finally, software patents pose a constant threat to the existence of
any free program.  We wish to make sure that a company cannot
effectively restrict the users of a free program by obtaining a
restrictive license from a patent holder.  Therefore, we insist that
any patent license obtained for a version of the library must be
consistent with the full freedom of use specified in this license.

  Most GNU software, including some libraries, is covered by the
ordinary GNU General Public License.  This license, the GNU Lesser
General Public License, applies to certain designated libraries, and
is quite different from the ordinary General Public License.  We use
this license for certain libraries in order to permit linking those
libraries into non-free programs.

  When a program is linked with a library, whether statically or using
a shared library, the combination of the two is legally speaking a
combined work, a derivative of the original library.  The ordinary
General Public License therefore permits such linking only if the
entire combination fits its criteria of freedom.  The Lesser General
Public License permits more lax criteria for linking other code with
the library.

  We call this license the "Lesser" General Public License because it
does Less to protect the user's freedom than the ordinary General
Public License.  It also provides other free software developers Less
of an advantage over competing non-free programs.  These disadvantages
are the reason we use the ordinary General Public License for many
libraries.  However, the Lesser license provides advantages in certain
special circumstances.

  For example, on rare occasions, there may be a special need to
encourage the widest possible use of a certain library, so that it becomes
a de-facto standard.  To achieve this, non-free programs must be
allowed to use the library.  A more frequent case is that a free
library does the same job as widely used non-free libraries.  In this
case, there is little to gain by limiting the free library to free
software only, so we use the Lesser General Public License.

  In other cases, permission to use a particular library in non-free
programs enables a greater number of people to use a large body of
free software.  For example, permission to use the GNU C Library in
non-free programs enables many more people to use the whole GNU
operating system, as well as its variant, the GNU/Linux operating
system.

  Although the Lesser General Public License is Less protective of the
users' freedom, it does ensure that the user of a program that is
linked with the Library has the freedom and the wherewithal to run
that program using a modified version of the Library.

  The precise terms and conditions for copying, distribution and
modification follow.  Pay close attention to the difference between a
"work based on the library" and a "work that uses the library".  The
former contains code derived from the library, whereas the latter must
be combined with the library in order to run.


   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

  0. This License Agreement applies to any software library or other
program which contains a notice placed by the copyright holder or
other authorized party saying it may be distributed under the terms of
this Lesser General Public License (also called "this License").
Each licensee is addressed as "you".

  A "library" means a collection of software functions and/or data
prepared so as to be conveniently linked with application programs
(which use some of those functions and data) to form executables.

  The "Library", below, refers to any such software library or work
which has been distributed under these terms.  A "work based on the
Library" means either the Library or any derivative work under
copyright law: that is to say, a work containing the Library or a
portion of it, either verbatim or with modifications and/or translated
straightforwardly into another language.  (Hereinafter, translation is
included without limitation in the term "modification".)

  "Source code" for a work means the preferred form of the work for
making modifications to it.  For a library, complete source code means
all the source code for all modules it contains, plus any associated
interface definition files, plus the scripts used to control compilation
and installation of the library.

  Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope.  The act of
running a program using the Library is not restricted, and output from
such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it).  Whether that is true depends on what the Library does
and what the program that uses the Library does.

  1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact
all the notices that refer to this License and to the absence of any
warranty; and distribute a copy of this License along with the
Library.

  You may charge a fee for the physical act of transferring a copy,
and you may at your option offer warranty protection in exchange for a
fee.

  2. You may modify your copy or copies of the Library or any portion
of it, thus forming a work based on the Library, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:

    a) The modified work must itself be a software library.

    b) You must cause the files modified to carry prominent notices
    stating that you changed the files and the date of any change.

    c) You must cause the whole of the work to be licensed at no
    charge to all third parties under the terms of this License.

    d) If a facility in the modified Library refers to a function or a
    table of data to be supplied by an application program that uses
    the facility, other than as an argument passed when the facility
    is invoked, then you must make a good faith effort to ensure that,
    in the event an application does not supply such function or
    table, the facility still operates, and performs whatever part of
    its purpose remains meaningful.

    (For example, a function in a library to compute square roots has
    a purpose that is entirely well-defined independent of the
    application.  Therefore, Subsection 2d requires that any
    application-supplied function or table used by this function must
    be optional: if the application does not supply it, the square
    root function must still compute square roots.)

These requirements apply to the modified work as a whole.  If
identifiable sections of that work are not derived from the Library,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works.  But when you
distribute the same sections as part of a whole which is a work based
on the Library, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote
it.

Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Library.

In addition, mere aggregation of another work not based on the Library
with the Library (or with a work based on the Library) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.

  3. You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library.  To do
this, you must alter all the notices that refer to this License, so
that they refer to the ordinary GNU General Public License, version 2,
instead of to this License.  (If a newer version than version 2 of the
ordinary GNU General Public License has appeared, then you can specify
that version instead if you wish.)  Do not make any other change in
these notices.

  Once this change is made in a given copy, it is irreversible for
that copy, so the ordinary GNU General Public License applies to all
subsequent copies and derivative works made from that copy.

  This option is useful when you wish to copy part of the code of
the Library into a program that is not a library.

  4. You may copy and distribute the Library (or a portion or
derivative of it, under Section 2) in object code or executable form
under the terms of Sections 1 and 2 above provided that you accompany
it with the complete corresponding machine-readable source code, which
must be distributed under the terms of Sections 1 and 2 above on a
medium customarily used for software interchange.

  If distribution of object code is made by offering access to copy
from a designated place, then offering equivalent access to copy the
source code from the same place satisfies the requirement to
distribute the source code, even though third parties are not
compelled to copy the source along with the object code.

  5. A program that contains no derivative of any portion of the
Library, but is designed to work with the Library by being compiled or
linked with it, is called a "work that uses the Library".  Such a
work, in isolation, is not a derivative work of the Library, and
therefore falls outside the scope of this License.

  However, linking a "work that uses the Library" with the Library
creates an executable that is a derivative of the Library (because it
contains portions of the Library), rather than a "work that uses the
library".  The executable is therefore covered by this License.
Section 6 states terms for distribution of such executables.

  When a "work that uses the Library" uses material from a header file
that is part of the Library, the object code for the work may be a
derivative work of the Library even though the source code is not.
Whether this is true is especially significant if the work can be
linked without the Library, or if the work is itself a library.  The
threshold for this to be true is not precisely defined by law.

  If such an object file uses only numerical parameters, data
structure layouts and accessors, and small macros and small inline
functions (ten lines or less in length), then the use of the object
file is unrestricted, regardless of whether it is legally a derivative
work.  (Executables containing this object code plus portions of the
Library will still fall under Section 6.)

  Otherwise, if the work is a derivative of the Library, you may
distribute the object code for the work under the terms of Section 6.
Any executables containing that work also fall under Section 6,
whether or not they are linked directly with the Library itself.

  6. As an exception to the Sections above, you may also combine or
link a "work that uses the Library" with the Library to produce a
work containing portions of the Library, and distribute that work
under terms of your choice, provided that the terms permit
modification of the work for the customer's own use and reverse
engineering for debugging such modifications.

  You must give prominent notice with each copy of the work that the
Library is used in it and that the Library and its use are covered by
this License.  You must supply a copy of this License.  If the work
during execution displays copyright notices, you must include the
copyright notice for the Library among them, as well as a reference
directing the user to the copy of this License.  Also, you must do one
of these things:

    a) Accompany the work with the complete corresponding
    machine-readable source code for the Library including whatever
    changes were used in the work (which must be distributed under
    Sections 1 and 2 above); and, if the work is an executable linked
    with the Library, with the complete machine-readable "work that
    uses the Library", as object code and/or source code, so that the
    user can modify the Library and then relink to produce a modified
    executable containing the modified Library.  (It is understood
    that the user who changes the contents of definitions files in the
    Library will not necessarily be able to recompile the application
    to use the modified definitions.)

    b) Use a suitable shared library mechanism for linking with the
    Library.  A suitable mechanism is one that (1) uses at run time a
    copy of the library already present on the user's computer system,
    rather than copying library functions into the executable, and (2)
    will operate properly with a modified version of the library, if
    the user installs one, as long as the modified version is
    interface-compatible with the version that the work was made with.

    c) Accompany the work with a written offer, valid for at
    least three years, to give the same user the materials
    specified in Subsection 6a, above, for a charge no more
    than the cost of performing this distribution.

    d) If distribution of the work is made by offering access to copy
    from a designated place, offer equivalent access to copy the above
    specified materials from the same place.

    e) Verify that the user has already received a copy of these
    materials or that you have already sent this user a copy.

  For an executable, the required form of the "work that uses the
Library" must include any data and utility programs needed for
reproducing the executable from it.  However, as a special exception,
the materials to be distributed need not include anything that is
normally distributed (in either source or binary form) with the major
components (compiler, kernel, and so on) of the operating system on
which the executable runs, unless that component itself accompanies
the executable.

  It may happen that this requirement contradicts the license
restrictions of other proprietary libraries that do not normally
accompany the operating system.  Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.

  7. You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
library, provided that the separate distribution of the work based on
the Library and of the other library facilities is otherwise
permitted, and provided that you do these two things:

    a) Accompany the combined library with a copy of the same work
    based on the Library, uncombined with any other library
    facilities.  This must be distributed under the terms of the
    Sections above.

    b) Give prominent notice with the combined library of the fact
    that part of it is a work based on the Library, and explaining
    where to find the accompanying uncombined form of the same work.

  8. You may not copy, modify, sublicense, link with, or distribute
the Library except as expressly provided under this License.  Any
attempt otherwise to copy, modify, sublicense, link with, or
distribute the Library is void, and will automatically terminate your
rights under this License.  However, parties who have received copies,
or rights, from you under this License will not have their licenses
terminated so long as such parties remain in full compliance.

  9. You are not required to accept this License, since you have not
signed it.  However, nothing else grants you permission to modify or
distribute the Library or its derivative works.  These actions are
prohibited by law if you do not accept this License.  Therefore, by
modifying or distributing the Library (or any work based on the
Library), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Library or works based on it.

  10. Each time you redistribute the Library (or any work based on the
Library), the recipient automatically receives a license from the
original licensor to copy, distribute, link with or modify the Library
subject to these terms and conditions.  You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties with
this License.

  11. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Library at all.  For example, if a patent
license would not permit royalty-free redistribution of the Library by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Library.

If any portion of this section is held invalid or unenforceable under any
particular circumstance, the balance of the section is intended to apply,
and the section as a whole is intended to apply in other circumstances.

It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system which is
implemented by public license practices.  Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.

This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.

  12. If the distribution and/or use of the Library is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Library under this License may add
an explicit geographical distribution limitation excluding those countries,
so that distribution is permitted only in or among countries not thus
excluded.  In such case, this License incorporates the limitation as if
written in the body of this License.

  13. The Free Software Foundation may publish revised and/or new
versions of the Lesser General Public License from time to time.
Such new versions will be similar in spirit to the present version,
but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number.  If the Library
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation.  If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.

  14. If you wish to incorporate parts of the Library into other free
programs whose distribution conditions are incompatible with these,
write to the author to ask for permission.  For software which is
copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this.  Our
decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.

                            NO WARRANTY

  15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
LIBRARY IS WITH YOU.  SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.

                     END OF TERMS AND CONDITIONS

           How to Apply These Terms to Your New Libraries

  If you develop a new library, and you want it to be of the greatest
possible use to the public, we recommend making it free software that
everyone can redistribute and change.  You can do so by permitting
redistribution under these terms (or, alternatively, under the terms of the
ordinary General Public License).

  To apply these terms, attach the following notices to the library.  It is
safest to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least the
"copyright" line and a pointer to where the full notice is found.

    <one line to give the library's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This library is free software; you can redistribute it and/or
    modify it under the terms of the GNU Lesser General Public
    License as published by the Free Software Foundation; either
    version 2.1 of the License, or (at your option) any later version.

    This library is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
    Lesser General Public License for more details.

    You should have received a copy of the GNU Lesser General Public
    License along with this library; if not, write to the Free Software
    Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA

Also add information on how to contact you by electronic and paper mail.

You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the library, if
necessary.  Here is a sample; alter the names:

  Yoyodyne, Inc., hereby disclaims all copyright interest in the
  library `Frob' (a library for tweaking knobs) written by James Random Hacker.

  <signature of Ty Coon>, 1 April 1990
  Ty Coon, President of Vice

That's all there is to it!

Revision History

Revision 5.3.1-10.400	2013-10-31	Rüdiger Landmann
	Rebuild with publican 4.0.0
Revision 5.3.1-10	Fri Jan 25 2013	B Long
	Updated for release 5.3.1
Revision 5.3.0-0	Wed Apr 4 2012	B Long
	Updated for release 5.3.0
Revision 5.2.0-0	Wed Jun 22 2011	David Le Sage
	Updated for release 5.2.0
Revision 5.1.0-0	Thu Mar 3 2011	Darrin Mison
	Book created from Teiid source