
5.2. Implementing the Framework

Translators may contribute cache entries to the result set cache by using the CacheDirective object. A translator wishing to participate in caching should return a CacheDirective from the ExecutionFactory.getCacheDirective method, which is called prior to execution. The command passed to getCacheDirective will already have been vetted to ensure that the results are eligible for caching; for example, update commands or commands with pushed dependent sets are not eligible for caching.
If the translator returns null for the CacheDirective, which is the default implementation, the engine will not cache the translator results beyond the current command. It is up to your custom translator or custom delegating translator to implement your desired caching policy.

Note

In special circumstances where the translator has performed its own caching, it can indicate to the engine that the results should not be cached or reused by setting the Scope to Scope.NONE.
The returned CacheDirective will be set on the ExecutionContext and is available via the ExecutionContext.getCacheDirective() method. Having ExecutionFactory.getCacheDirective called prior to execution allows the translator to be selective about which results to attempt to cache. Since there is a resource overhead in creating and storing the cached results, it may not be desirable to attempt to cache all results, particularly if it is possible to return large results that have a low usage factor. If you are unsure about whether to cache a particular command result, you may return an initial CacheDirective and then change the Scope to Scope.NONE at any time prior to the final cache entry being created; the engine will then give up creating the entry and release its resources.

Note

If you plan on modifying the CacheDirective during execution, ensure that you return a new instance from the ExecutionFactory.getCacheDirective call, rather than returning a shared instance.
The CacheDirective readAll Boolean field is used to control whether the entire result should be read if not all of the results were consumed by the engine. If readAll is false then any partial usage of the result will not result in it being added as a cache entry. Partial use is determined after any implicit or explicit limit has been applied. The other fields on the CacheDirective object map to the cache hint options.
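As a sketch of the decision logic only, using simplified stand-in types rather than the real org.teiid.translator.CacheDirective API, a translator might implement a selective caching policy like this (the row-count threshold and the five-minute ttl are illustrative assumptions):

```java
import java.util.concurrent.TimeUnit;

// Simplified stand-ins for the Teiid caching API (illustration only).
enum Scope { NONE, SESSION, USER, VDB }

class CacheDirective {
    Scope scope = Scope.SESSION;
    Long ttl;               // time-to-live in milliseconds; null = use the default rs cache ttl
    boolean readAll = true;
    boolean updatable = true;
    boolean prefersMemory = false;
}

class CachingPolicy {
    // Called before execution; returning null means the engine will not
    // cache the translator results beyond the current command.
    CacheDirective getCacheDirective(long estimatedRows) {
        if (estimatedRows > 100_000) {
            return null; // large, low-reuse results: skip the caching overhead
        }
        // Return a new instance each call so it can be safely mutated during execution.
        CacheDirective directive = new CacheDirective();
        directive.ttl = TimeUnit.MINUTES.toMillis(5);
        return directive;
    }
}
```

Returning a fresh instance per call follows the note above about not sharing a CacheDirective that may be modified during execution.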

Table 5.1. Options

  Option          Default
  scope           Session
  ttl             rs cache ttl
  readAll         true
  updatable       true
  prefersMemory   false
Teiid sends commands to your Translator in object form. These classes are all defined in the "org.teiid.language" package. These objects can be combined to represent any possible command that Teiid may send to the Translator. However, it is possible to notify Teiid that your Translator can only accept certain kinds of constructs via the capabilities defined on the "ExecutionFactory" class.
The language objects all extend from the LanguageObject interface. Language objects should be thought of as a tree where each node is a language object that has zero or more child language objects of types that are dependent on the current node.
All commands sent to your Translator are in the form of these language trees, where the root of the tree is a subclass of Command. Command has several sub-interfaces, namely:
  • QueryExpression
  • Insert
  • Update
  • Delete
  • BatchedUpdates
  • Call
An expression represents a single value in context, although in some cases that value may change as the query is evaluated. For example, a literal value, such as 5, represents an integer value. A column reference such as "table.EmployeeName" represents a column in a data source and may take on many values while the command is being evaluated.
  • Expression – base expression interface.
  • ColumnReference – represents a column in the data source.
  • Literal – represents a literal scalar value.
  • Parameter – represents a parameter with multiple values. The command should be an instance of BatchedCommand, which provides all values via getParameterValues.
  • Function – represents a scalar function with parameters that are also Expressions.
  • AggregateFunction – represents an aggregate function which can hold a single expression.
  • WindowFunction – represents a window function, which holds an AggregateFunction (also used to represent analytical functions) and a WindowSpecification.
  • ScalarSubquery – represents a subquery that returns a single value.
  • SearchedCase, SearchedWhenClause – represents a searched CASE expression. The searched CASE expression evaluates the criteria in WHEN clauses till one evaluates to TRUE, then evaluates the associated THEN clause.
  • Array – represents an array of expressions, used by the engine in multi-attribute dependent joins.
A criteria is a combination of expressions and operators that evaluates to true, false, or unknown. Criteria are most commonly used in the WHERE or HAVING clauses.
  • Condition – the base criteria interface
  • Not – used to NOT another criteria
  • AndOr – used to combine other criteria via AND or OR
  • SubqueryComparison – represents a comparison criteria with a subquery including a quantifier such as SOME or ALL
  • Comparison – represents a comparison criteria with =, >, and so on.
  • BaseInCondition – base class for an IN criteria
  • In – represents an IN criteria that has a set of expressions for values
  • SubqueryIn – represents an IN criteria that uses a subquery to produce the value set
  • IsNull – represents an IS NULL criteria
  • Exists – represents an EXISTS criteria that determines whether a subquery will return any values.
  • Like – represents a LIKE/SIMILAR TO/LIKE_REGEX criteria that compares string values.
The FROM clause contains a list of TableReferences:
  • NamedTable – represents a single Table
  • Join – has a left and right TableReference and information on the join between the items
  • DerivedTable – represents a table defined by an inline QueryExpression
A list of TableReferences is used by default in the pushdown query when no outer joins are used. If an outer join is used anywhere in the join tree, there will be a tree of joins with a single root. This latter form is the ANSI preferred style. If you want all pushdown queries containing joins to be in ANSI style, have the capability "useAnsiJoin" return true.
QueryExpression is the base for both SELECT queries and set queries. It may optionally take an OrderBy (representing a SQL ORDER BY clause), a Limit (represent a SQL LIMIT clause), or a With (represents a SQL WITH clause).
Each QueryExpression can be a Select describing the expressions (typically elements) being selected and a TableReference specifying the table or tables being selected from, along with any join information. The Select may optionally also supply a Condition (representing a SQL WHERE clause), a GroupBy (representing a SQL GROUP BY clause), and a Condition (representing a SQL HAVING clause).
A QueryExpression can also be a SetQuery that represents one of the SQL set operations (UNION, INTERSECT, EXCEPT) on two QueryExpressions. The all flag may be set to indicate UNION ALL (currently INTERSECT ALL and EXCEPT ALL are not allowed in Teiid).
A With clause contains named QueryExpressions held by WithItems that can be referenced as tables in the main QueryExpression.
Each Insert will have a single NamedTable specifying the table being inserted into. It also has a list of ColumnReferences specifying the columns of the NamedTable that are being inserted into, and an InsertValueSource, which will be either a list of Expressions (ExpressionValueSource) or a QueryExpression.
Each Update will have a single NamedTable specifying the table being updated and list of SetClause entries that specify ColumnReference and Expression pairs for the update. The Update may optionally provide a criteria Condition specifying which rows should be updated.
Each Delete will have a single NamedTable specifying the table being deleted from. It may also optionally have a criteria specifying which rows should be deleted.
Each Call has zero or more Argument objects. The Argument objects describe the input parameters, the output result set, and the output parameters.
Each BatchedUpdates has a list of Command objects (which must be either Insert, Update or Delete) that compose the batch.
This section covers utilities available when using, creating, and manipulating the language interfaces.
The Translator API contains an interface TypeFacility that defines data types and provides value translation facilities. This interface can be obtained by calling the "getTypeFacility()" method on the "ExecutionFactory" class.
The TypeFacility interface has methods that support data type transformation and detection of appropriate runtime or JDBC types. The TypeFacility.RUNTIME_TYPES and TypeFacility.RUNTIME_NAMES interfaces define constants for all Teiid runtime data types. All Expression instances define a data type based on this set of types. These constants are often needed in understanding or creating language interfaces.
In Translators that support a fuller set of capabilities (those that are generally translating to a language comparable to SQL), there is often a need to manipulate or create language interfaces to move closer to the syntax of choice. Some utilities are provided for this purpose:
Similar to the TypeFacility, you can call "getLanguageFactory()" method on the "ExecutionFactory" to get a reference to the LanguageFactory instance for your translator. This interface is a factory that can be used to create new instances of all the concrete language interface objects.
Some helpful utilities for working with Condition objects are provided in the LanguageUtil class. This class has methods to combine Conditions with AND or to break a Condition apart based on AND operators. These utilities are helpful for breaking apart a criteria into individual filters that your translator can implement.
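To illustrate the idea behind breaking a criteria apart on AND operators (analogous to what LanguageUtil provides), here is a self-contained sketch with minimal stand-in Condition classes — these are not the real org.teiid.language types:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for the Condition/AndOr language objects (illustration only).
interface Condition {}

class Comparison implements Condition {
    final String text;
    Comparison(String text) { this.text = text; }
}

class AndOr implements Condition {
    final Condition left, right;
    final boolean isAnd; // true = AND, false = OR
    AndOr(Condition left, Condition right, boolean isAnd) {
        this.left = left; this.right = right; this.isAnd = isAnd;
    }
}

class CriteriaUtil {
    // Split a criteria tree on top-level ANDs into individual filters.
    static List<Condition> separateByAnd(Condition c) {
        List<Condition> parts = new ArrayList<>();
        collect(c, parts);
        return parts;
    }

    private static void collect(Condition c, List<Condition> parts) {
        if (c instanceof AndOr && ((AndOr) c).isAnd) {
            collect(((AndOr) c).left, parts);   // recurse into each AND branch
            collect(((AndOr) c).right, parts);
        } else {
            parts.add(c);                       // OR subtrees stay intact
        }
    }
}
```

Note that an OR subtree is kept whole, since its branches cannot be applied as independent filters.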
Teiid uses a library of metadata, known as "runtime metadata", for each virtual database deployed in Teiid. The runtime metadata is a subset of the metadata defined by the models that compose the virtual database. While building your VDB in the Designer, you can define what is called an "Extension Model", which defines any number of arbitrary properties on a model and its objects. At runtime, using this runtime metadata interface, you can access those properties set at design time to define or hint at execution behavior.
A translator gets access to the RuntimeMetadata interface at the time of Execution creation. Translators can access runtime metadata by using the interfaces defined in the org.teiid.metadata package. This package defines an API representing a Schema, Table, Columns and Procedures, and ways to navigate these objects.
All of these metadata objects extend the AbstractMetadataRecord class:
  • Column - returns Column metadata record
  • Table - returns a Table metadata record
  • Procedure - returns a Procedure metadata record
  • ProcedureParameter - returns a Procedure Parameter metadata record
Once a metadata record has been obtained, you can access its metadata about that object or find other related metadata.
The RuntimeMetadata interface is passed in for the creation of an "Execution". See "createExecution" method on the "ExecutionFactory" class. It provides the ability to look up metadata records based on their fully qualified names in the VDB.
The process of getting a Table's properties is sometimes needed for translator development. For example to get the "NameInSource" property or all extension properties:
//getting the Table metadata from a Table is straightforward
Table table = runtimeMetadata.getTable("table-name");
String contextName = table.getNameInSource();
 
//The props will contain extension properties
Map<String, String> props = table.getProperties();
The API provides a language visitor framework in the org.teiid.language.visitor package. The framework provides utilities useful in navigating and extracting information from trees of language objects.
The visitor framework is a variant of the Visitor design pattern, which is documented in several popular design pattern references. The visitor pattern encompasses two primary operations: traversing the nodes of a graph (also known as iteration) and performing some action at each node of the graph. In this case, the nodes are language interface objects and the graph is really a tree rooted at some node. The provided framework allows for customization of both aspects of visiting.
The base AbstractLanguageVisitor class defines the visit methods for all leaf language interfaces that can exist in the tree. The LanguageObject interface defines an acceptVisitor() method – this method will call back on the visit method of the visitor to complete the contract. A base class with empty visit methods is provided as AbstractLanguageVisitor. The AbstractLanguageVisitor is just a visitor shell – it performs no actions when visiting nodes and does not provide any iteration.
The HierarchyVisitor provides the basic code for walking a language object tree. The HierarchyVisitor performs no action as it walks the tree – it just encapsulates the knowledge of how to walk it. If your translator wants to provide a custom iteration that walks the objects in a special order (to exclude nodes, include nodes multiple times, conditionally include nodes, etc) then you must either extend HierarchyVisitor or build your own iteration visitor. In general, that is not necessary.
The DelegatingHierarchyVisitor is a special subclass of the HierarchyVisitor that provides the ability to perform a different visitor’s processing before and after iteration. This allows users of this class to implement either pre- or post-order processing based on the HierarchyVisitor. Two helper methods are provided on DelegatingHierarchyVisitor to aid in executing pre- and post-order visitors.
The SQLStringVisitor is a special visitor that can traverse a tree of language interfaces and output the equivalent Teiid SQL. This visitor can be used to print language objects for debugging and logging. The SQLStringVisitor does not use the HierarchyVisitor described in the last section; it provides both iteration and processing type functionality in a single custom visitor.
The CollectorVisitor is a handy utility to collect all language objects of a certain type in a tree. Some additional helper methods exist to do common tasks such as retrieving all elements in a tree, retrieving all groups in a tree, and so on.
Writing your own visitor can be quite easy if you use the provided facilities. If the normal method of iterating the language tree is sufficient, then follow these steps:

Procedure 5.3. Write a Visitor

  1. Create a subclass of AbstractLanguageVisitor. Override any visit methods needed for your processing. For instance, if you wanted to count the number of elements in the tree, you need only override the visit(ColumnReference) method. Collect any state in local variables and provide accessor methods for that state.
  2. Decide whether to use pre-order or post-order iteration. Note that visitation order is based upon syntax ordering of SQL clauses - not processing order.
  3. Write code to execute your visitor using the utility methods on DelegatingHierarchyVisitor:
    // Get object tree
    LanguageObject objectTree = …
     
    // Create your visitor initialize as necessary
    MyVisitor visitor = new MyVisitor();
     
    // Call the visitor using pre-order visitation
    DelegatingHierarchyVisitor.preOrderVisit(visitor, objectTree);
     
    // Retrieve state collected while visiting
    int count = visitor.getCount();
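As a concrete, simplified illustration of the steps above, the following stand-alone sketch mimics the visitor contract with stand-in types. The real AbstractLanguageVisitor has many more visit methods, and iteration would normally come from HierarchyVisitor; here, for brevity, iteration is folded into the visitor itself:

```java
import java.util.List;

// Minimal stand-ins for the language-object and visitor contracts (illustration only).
interface Visitor { void visit(ColumnReference ref); void visit(Function func); }
interface LanguageObject { void acceptVisitor(Visitor v); }

class ColumnReference implements LanguageObject {
    final String name;
    ColumnReference(String name) { this.name = name; }
    public void acceptVisitor(Visitor v) { v.visit(this); }
}

class Function implements LanguageObject {
    final List<LanguageObject> args;
    Function(List<LanguageObject> args) { this.args = args; }
    public void acceptVisitor(Visitor v) { v.visit(this); }
}

// Step 1: override only the visit method you care about and collect state locally.
class ColumnCounter implements Visitor {
    private int count;
    public void visit(ColumnReference ref) { count++; }
    public void visit(Function func) {
        for (LanguageObject arg : func.args) {
            arg.acceptVisitor(this); // walk into the children
        }
    }
    public int getCount() { return count; } // step 1: accessor for collected state
}
```

Running the counter over a tree such as concat(col1, substring(col2)) would visit both column references and report a count of 2.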
    
The extended "ExecutionFactory" must implement the getConnection() method to allow the Connector Manager to obtain a connection.
Once the Connector Manager has obtained a connection, it will use that connection only for the lifetime of the request. When the request has completed, the closeConnection() method is called on the "ExecutionFactory". You must also override this method to properly close the connection.
In cases where a connection is stateful and expensive to create, connections should be pooled. If the resource adapter is a JEE JCA connector, then pooling is automatically provided by the JBoss EAP container. If your resource adapter does not implement JEE JCA, then connection pooling semantics are left for you to define on your own.
Dependent joins are a technique used in federation to reduce the cost of cross source joins. Join values from one side of a join are made available to the other side, which reduces the number of tuples needed to perform the join. Translators may indicate support for dependent join pushdown via the supportsDependentJoin and supportsFullDependentJoin capabilities. The handling of pushdown dependent join queries can be complicated.
The more simplistic mode of dependent join pushdown is to push only the key (equi-join) values to effectively evaluate a semi-join - the full join will still be processed by the engine after the retrieval. The ordering (if present) and all of the non-dependent criteria constructs on the pushdown command must be honored. The dependent criteria, which will be a Comparison with a Parameter (possibly in Array form), may be ignored in part or in total to retrieve a superset of the tuples requested.
Pushdown key dependent join queries will be instances of Select with the relevant dependent values available via Select.getDependentValues(). A dependent value tuple list is associated to Parameters by id via the Parameter.getDependentValueId() identifier. The dependent tuple list provides rows that are referenced by the column positions (available via Parameter.getValueIndex()). Care should be taken with the tuple values: they may or may not be ordered, but they will be unique with respect to all of the Parameter references against the given dependent value tuple list.
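For example, a translator handling key dependent join pushdown typically reduces the dependent value tuple list to the distinct key values at a Parameter's value index, so they can be pushed to the source as an IN predicate. A minimal sketch, with plain collections standing in for the Teiid dependent value structures:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

class DependentJoinHelper {
    // Collect the distinct key values referenced by a Parameter's valueIndex
    // from the dependent value tuple list. Retrieving a superset of the
    // requested tuples is acceptable, so duplicates are simply dropped.
    static Set<Object> keyValues(List<? extends List<?>> dependentTuples, int valueIndex) {
        Set<Object> keys = new LinkedHashSet<>();
        for (List<?> tuple : dependentTuples) {
            keys.add(tuple.get(valueIndex));
        }
        return keys;
    }
}
```

The resulting set could then be rendered as `WHERE key IN (...)` in the source query, with the engine still performing the full join afterwards.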
In some scenarios, typically with small independent data sets or extensive processing above the join that can be pushed to the source, it is advantageous for the source to handle the dependent join pushdown. This feature is marked as supported by the supportsFullDependentJoin capability. Here the source is expected to process the command exactly as specified - the dependent join is not optional.
Full pushdown dependent join queries will be instances of QueryExpression with the relevant dependent values available via special common table definitions using QueryExpression.getWith(). The independent side of a full pushdown join will appear as a common table WithItem with a dependent value tuple list available via WithItem.getDependentValues(). The dependent value tuples will positionally match the columns defined by WithItem.getColumns(). The dependent value tuple list is not guaranteed to be in any particular order.
The Teiid query engine uses the "ExecutionFactory" class to obtain the "Execution" interface for the command it is executing. The actual queries themselves are sent to translators in the form of a set of objects, which are further described in Command Language. Translators are allowed to support any subset of the available execution modes.

Table 5.2. Execution Modes

  Execution Interface   Command Interface                        Description
  ResultSetExecution    QueryExpression                          A query corresponding to a SQL SELECT or set query statement.
  UpdateExecution       Insert, Update, Delete, BatchedUpdates   An insert, update, or delete, corresponding to a SQL INSERT, UPDATE, or DELETE command.
  ProcedureExecution    Call                                     A procedure execution that may return a result set and/or output values.
All of the execution interfaces extend the base Execution interface that defines how executions are cancelled and closed. ProcedureExecution also extends ResultSetExecution, since procedures may also return result sets.
The org.teiid.translator.ExecutionContext provides a considerable amount of information related to the current execution. An ExecutionContext instance is made available to each Execution. Specific usage is highlighted in this guide where applicable, but you may use any informational getter method as desired. Example usage would include calling ExecutionContext.getRequestId(), ExecutionContext.getSession(), etc. for logging purposes.
An org.teiid.CommandContext is available via the ExecutionContext.getCommandContext() method. The CommandContext contains information about the current user query, including the VDB, the ability to add client warnings - addWarning, or handle generated keys - isReturnAutoGeneratedKeys, returnGeneratedKeys, and getGeneratedKeys.
To see if the user query expects generated keys to be returned, consult the CommandContext.isReturnAutoGeneratedKeys() method. If you wish to return generated keys, you must first create a GeneratedKeys instance to hold the keys with the returnGeneratedKeys method passing the column names and types of the key columns. Only one GeneratedKeys may be associated with the CommandContext at any given time.
The Teiid source meta-hint is used to provide hints directly to source executions via user or transformation queries. See the reference for more on source hints. If specified and applicable, the general and source specific hint will be supplied via the ExecutionContext methods getGeneralHint and getSourceHint. See the source for the OracleExecutionFactory for an example of how this source hint information can be utilized.
Typically most commands executed against translators are QueryExpression. While the command is being executed, the translator provides results via the ResultSetExecution's "next" method. The "next" method should return null to indicate the end of results. Note: the expected batch size can be obtained from the ExecutionContext.getBatchSize() method and used as a hint in fetching results from the EIS.
Each execution returns the update count(s) expected by the update command. If possible BatchedUpdates should be executed atomically. The ExecutionContext.isTransactional() method can be used to determine if the execution is already under a transaction.
Procedure commands correspond to the execution of a stored procedure or some other functional construct. A procedure takes zero or more input values and can return a result set and zero or more output values. Examples of procedure execution would be a stored procedure in a relational database or a call to a web service.
If a result set is expected when a procedure is executed, all rows from it will be retrieved via the ResultSetExecution interface first. Then, if any output values are expected, they will be retrieved via the getOutputParameterValues() method.
In some scenarios, a translator needs to execute asynchronously and allow the executing thread to perform other work. To allow asynchronous execution, you should throw a DataNotAvailableException during a retrieval method, rather than explicitly waiting or sleeping for the results. The DataNotAvailableException may take a delay parameter or a Date in its constructor to indicate when to poll next for results. Any non-negative delay value indicates the time in milliseconds until the next polling should be performed. The DataNotAvailableException.NO_POLLING exception (or any DataNotAvailableException with a negative delay) can be thrown to indicate that the execution will call ExecutionContext.dataAvailable() to indicate processing should resume.

Important

A DataNotAvailableException should not be thrown by the execute method, as that can result in the execute method being called multiple times.

Important

Since the execution and the associated connection are not closed until the work has completed, care should be taken if using asynchronous executions that hold a lot of state.
A positive retry delay is not a guarantee of when the translator will be polled next. If the DataNotAvailableException is consumed while the engine thinks more work can be performed or there are other shorter delays issued from other translators, then the plan may be re-queued earlier than expected. You should simply rethrow a DataNotAvailableException if your execution is not yet ready. Alternatively the DataNotAvailableException may be marked as strict, which does provide a guarantee that the Execution will not be called until the delay has expired or the given Date has been reached. Using the Date constructor makes the DataNotAvailableException automatically strict. Due to engine thread pool contention, platform time resolution, etc. a strict DataNotAvailableException is not a real-time guarantee of when the next poll for results will occur, only that it will not occur before then.
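The non-blocking pattern described above can be sketched as follows. DataNotAvailableException here is a simplified stand-in for org.teiid.translator.DataNotAvailableException, and the background fetch mechanism is an assumption of the sketch:

```java
import java.util.List;

// Simplified stand-in for org.teiid.translator.DataNotAvailableException (illustration only).
class DataNotAvailableException extends RuntimeException {
    final long retryDelayMillis;
    DataNotAvailableException(long retryDelayMillis) { this.retryDelayMillis = retryDelayMillis; }
}

class AsyncResultSetExecution {
    private volatile List<String> rows; // filled in later by a background fetch (assumed)
    private int index;

    // next() must not block: throw instead of sleeping, and rethrow if re-polled too early.
    String next() {
        if (rows == null) {
            throw new DataNotAvailableException(100); // ask to be polled again in ~100 ms
        }
        return index < rows.size() ? rows.get(index++) : null; // null signals end of results
    }

    void resultsArrived(List<String> fetched) { this.rows = fetched; }
}
```

The engine reacts to the thrown exception by re-queuing the plan, so the executing thread is free to perform other work in the meantime.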

Important

If your ExecutionFactory returns only asynch executions that perform minimal work, then consider having ExecutionFactory.isForkable return false so that the engine knows not to spawn a separate thread for accessing your Execution.
A translator may return instances of ReusableExecution for the expected Execution objects. There can be one ReusableExecution per query-executing node in the processing plan. The lifecycle of a ReusableExecution is different from that of a normal Execution. After a normal creation/execute/close cycle, ReusableExecution.reset is called for the next execution cycle. This may occur indefinitely depending on how many times a processing node executes its query. The behavior of the close method is no different than for a regular Execution: it may not be called until the end of the statement if lobs are detected, and any connection associated with the Execution will also be closed. When the user command is finished, the ReusableExecution.dispose() method will be called.
In general, ReusableExecutions are most useful for continuous query execution and will also make use of the ExecutionContext.dataAvailable() method for asynchronous executions. See the Client Developer's Guide for executing continuous statements. In continuous mode the user query will be continuously re-executed. A ReusableExecution allows the same Execution object to be associated with the processing plan for a given processing node for the lifetime of the user query. This can simplify asynch resource management, such as establishing queue listeners. Returning a null result from the next() method of a ReusableExecution, just as with normal Executions, indicates that the current pushdown command results have ended. Once the reset() method has been called, the next set of results should be returned again, terminated with a null result.
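The execute/next/reset/dispose rhythm can be sketched with stand-in types (the real org.teiid.translator.ReusableExecution interface differs; this only shows the lifecycle shape):

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

// Stand-in sketch of a reusable execution's lifecycle (not the real Teiid interface).
class MyReusableExecution {
    private Iterator<String> rows = Collections.emptyIterator();

    void execute(List<String> pushdownResults) { this.rows = pushdownResults.iterator(); }

    String next() { return rows.hasNext() ? rows.next() : null; } // null = end of current results

    void reset() { this.rows = Collections.emptyIterator(); }     // prepare for the next cycle

    void dispose() { /* release long-lived resources, e.g. queue listeners */ }
}
```

The same instance is driven through execute/next.../reset repeatedly for the lifetime of the user query, with dispose() called once at the very end.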
Non-batched Insert, Update, and Delete commands may have multi-valued Parameter objects if the capabilities show support for BulkUpdate. Commands with multi-valued Parameters represent multiple executions of the same command with different values. As with BatchedUpdates, bulk operations should be executed atomically if possible.
All normal command executions end with the calling of close() on the Execution object. Your implementation of this method should do the appropriate clean-up work for all state created in the Execution object.
Commands submitted to Teiid may be aborted in several scenarios:
  • Client cancellation via the JDBC API (or other client APIs)
  • Administrative cancellation
  • Clean-up during session termination
  • Clean-up if a query fails during processing
Unlike the other execution methods, which are handled in a single-threaded manner, calls to cancel happen asynchronously with respect to the execution thread.
Your connector implementation may choose to do nothing in response to this cancellation message. In this instance, Teiid will call close() on the execution object after current processing has completed. Implementing the cancel() method allows for faster termination of queries being processed and may allow the underlying data source to terminate its operations faster as well.
The main class in the translator implementation is ExecutionFactory. A base class is provided in the Teiid API, so a custom translator must extend org.teiid.translator.ExecutionFactory to connect and query an enterprise data source. This extended class must provide a no-arg constructor that can be constructed using Java reflection libraries. This Execution Factory needs to define/override the following elements.
package org.teiid.translator.custom;
 
@Translator(name="custom", description="Connect to My EIS")
public class CustomExecutionFactory extends ExecutionFactory<MyConnectionFactory, MyConnection> {
 
    public CustomExecutionFactory() {
    }
}                
Define the annotation @Translator on the extended "ExecutionFactory" class. This annotation defines the name, which is used as the identifier during deployment, and the description of your translator. This name is what you will use in the VDB and elsewhere in the configuration to refer to this translator.
ConnectionFactory defines the "ConnectionFactory" interface that is defined in the resource adapter. This is defined as part of the class definition of the extended "ExecutionFactory" class.
Connection defines the "Connection" interface that is defined in the resource adapter. This is also defined as part of the class definition of the extended "ExecutionFactory" class.
If the translator requires external configuration that lets users alter its behavior, define an attribute variable in the class along with "get" and "set" methods for that attribute. Also, annotate each "get" method with the @TranslatorProperty annotation and provide the metadata about the property.
For example, suppose you need a property called "foo". By providing the annotation on the property, the Teiid tooling can automatically interrogate it and provide a graphical way to configure your Translator while designing your VDB:
private String foo = "blah";
@TranslatorProperty(display="Foo property", description="description about Foo")
public String getFoo()
{
   return foo;
}
 
public void setFoo(String value)
{
   this.foo = value;
}
The @TranslatorProperty defines the following metadata that you can set about your property:
  • display: Display name of the property
  • description: Description about the property
  • required: The property is a required property
  • advanced: This is an advanced property. A default value must be provided. A property cannot be both "advanced" and "required" at the same time.
  • masked: The tools should mask the property and not show it in plain text; used for passwords
Only java primitive (int, boolean), primitive object wrapper (java.lang.Integer), or Enum types are supported as Translator properties. Complex objects are not supported. The default value will be derived from calling the getter method, if available, on a newly constructed instance. All properties should have a default value. If there is no applicable default, then the property should be marked in the annotation as required. Initialization will fail if a required property value is not provided.
Override and implement the start method (be sure to call "super.start()") if your translator needs to do any initializing before it is used by the Teiid engine. This method will be called by Teiid, once after all the configuration properties set above are injected into the class.
Extended Translator Capabilities are various methods, typically beginning with the signature "supports", on the "ExecutionFactory" class. These methods need to be overridden to describe the execution capabilities of the Translator.
Based on types of executions you are supporting, the following methods need to be overridden to provide implementations for their respective return interfaces:
  • createResultSetExecution – Override if you are performing read-based operations that return rows of results, for example, SELECT.
  • createUpdateExecution – Override if you are performing write-based operations, for example, INSERT, UPDATE and DELETE.
  • createProcedureExecution – Override if you are performing procedure-based operations, for example, stored procedures. This also works well for non-relational sources.
You can choose to implement all the execution modes or just what you need.
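For example, a read-only translator might override only createResultSetExecution. The following is a sketch assuming the standard ExecutionFactory method signature; MyResultSetExecution is a hypothetical class implementing the ResultSetExecution interface:

```java
@Override
public ResultSetExecution createResultSetExecution(QueryExpression command,
        ExecutionContext executionContext, RuntimeMetadata metadata,
        Connection connection) throws TranslatorException {
    // MyResultSetExecution is a hypothetical class that iterates
    // source results and returns one row per call to next()
    return new MyResultSetExecution(command, executionContext, connection);
}
```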
Override and implement the method getMetadataProcessor(), if you want to expose the metadata about the source for use in Dynamic VDBs. This defines the tables, column names, procedures, parameters, etc. for use in the query engine. This method is used by Designer tooling when the Teiid Connection importer is used. A sample MetadataProcessor may look like this:
public class MyMetadataProcessor implements MetadataProcessor<Connection> {
 
     public void process(MetadataFactory mf, Connection conn) {
            Object somedata = conn.getSomeMetadata();
 
            Table table = mf.addTable(tableName);
            Column col1 = mf.addColumn("col1", TypeFacility.RUNTIME_NAMES.STRING, table);
            Column col2 = mf.addColumn("col2", TypeFacility.RUNTIME_NAMES.STRING, table);
     }
}
If your MetadataProcessor needs external properties during the import process, you can define them on the MetadataProcessor. For example, an import property called "Column Name Pattern", which can be used to filter which columns are defined on the table, can be defined in code like this:
@TranslatorProperty(display="Column Name Pattern", category=PropertyType.IMPORT, description="Pattern to derive column names")
public String getColumnNamePattern() {
    return columnNamePattern;
}
 
public void setColumnNamePattern(String columnNamePattern) {
    this.columnNamePattern = columnNamePattern;
}
Note the category type. The configuration property defined in the previous section is different from this one. Configuration properties define the runtime behavior of the translator, whereas "IMPORT" properties define the metadata import behavior and help control what metadata is exposed by your translator.
These properties can be automatically injected through "import" properties set through Designer when using the "Teiid Connection" importer or the properties can be defined under the model construct in the vdb.xml file, like this:
<vdb name="myvdb" version="1">
   <model name="legacydata" type="PHYSICAL">
      <property name="importer.ColumnNamePattern" value="col*"/>
      ....
      <source name = .../>
   </model>
</vdb>
There may be times when, while implementing a custom translator, the built-in metadata about your schema is not enough to process an incoming query because of semantic differences with your source. To address this, Teiid provides a mechanism called "Extension Metadata", which lets you define custom properties and add them to metadata objects (table, procedure, function, column, index, and so on). For example, suppose that in a custom translator a table represents a file on disk. This is how you could define such a custom metadata property:
public class MyMetadataProcessor implements MetadataProcessor<Connection> {
     public static final String NAMESPACE = "{http://my.company.corp}";
 
      @ExtensionMetadataProperty(applicable={Table.class}, datatype=String.class, display="File name", description="File Name", required=true)
     public static final String FILE_PROP = NAMESPACE+"FILE";
 
     public void process(MetadataFactory mf, Connection conn) {
            Object somedata = conn.getSomeMetadata();
 
            Table table = mf.addTable(tableName);
            table.setProperty(FILE_PROP, somedata.getFileName());
 
            Column col1 = mf.addColumn("col1", TypeFacility.RUNTIME_NAMES.STRING, table);
            Column col2 = mf.addColumn("col2", TypeFacility.RUNTIME_NAMES.STRING, table);
         
     }
}
The @ExtensionMetadataProperty defines the following metadata that you can define about your property:
  • applicable: The metadata objects this property applies to, given as an array of metadata classes such as Table.class or Column.class.
  • datatype: The java class indicating the data type
  • display: Display name of the property
  • description: Description about the property
  • required: Indicates if the property is a required property
When you define an extension metadata property as above, you can obtain the value of that property at runtime. For example, if you receive a query object containing 'SELECT * FROM MyTable', MyTable will be represented by an object called "NamedTable":
for (TableReference tr:query.getFrom()) {
    NamedTable t = (NamedTable) tr;
    Table table = t.getMetadataObject();
    String file = table.getProperty(FILE_PROP);
    // ...
}
Now that you have access to the file name you set during construction of the Table schema object, you can use this value however you see fit to execute your query. With the combination of built-in metadata properties and extension metadata properties, you can design and execute queries against a variety of sources.
Teiid provides the org.teiid.logging.LogManager class for logging purposes. Create a logging context and use the LogManager to log your messages. These will be automatically sent to the main Teiid logs. You can edit the "jboss-log4j.xml" file inside the "conf" directory of the JBoss EAP profile to add your custom context. Teiid uses Log4J as its underlying logging system.
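A minimal logging call might look like the following sketch; the "org.my.translator" context name is an arbitrary choice for your own translator:

```java
import org.teiid.logging.LogManager;
import org.teiid.logging.MessageLevel;

// Log under a custom context so messages can be filtered in the Teiid logs
LogManager.logInfo("org.my.translator", "Translator initialized");

// Guard expensive message construction behind a level check
if (LogManager.isMessageToBeRecorded("org.my.translator", MessageLevel.DETAIL)) {
    LogManager.logDetail("org.my.translator", "Detailed translator state follows...");
}
```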
If you need to raise or propagate an exception, use the org.teiid.translator.TranslatorException class.
Teiid supports three large object runtime data types: blob, clob, and xml. A blob is a "binary large object", a clob is a "character large object", and xml is an "XML document". Columns modeled as blob, clob, or xml are treated similarly by the translator framework to support memory-safe streaming.
Teiid allows a Translator to return a large object through the Teiid translator API by just returning a reference to the actual large object. Access to that LOB will be streamed as appropriate rather than retrieved all at once. This is useful for several reasons:
  • Reduces memory usage when returning the result set to the user.
  • Improves performance by passing less data in the result set.
  • Allows access to large objects when needed rather than assuming that users will always use the large object data.
  • Allows the passing of arbitrarily large data values.
These benefits can only truly be gained if the Translator itself does not materialize an entire large object all at once. For example, the Java JDBC API supports a streaming interface for blob and clob data.
The Translator API automatically handles large objects (Blob/Clob/SQLXML) through the creation of special purpose wrapper objects when it retrieves results.
Once the wrapped object is returned, streaming of the LOB is automatically supported. These LOB objects can then, for example, appear in client results, be used in user-defined functions, or be sent to other translators.
An execution is usually closed, and the underlying connection closed or released, as soon as all rows for that execution have been retrieved. However, LOB objects may need to be read after the initial retrieval of results. When LOBs are detected, this default closing behavior is prevented by setting a flag via the ExecutionContext.keepExecutionAlive method.
When this flag is set, the execution object is only closed when the user's statement is closed:
executionContext.keepExecutionAlive(true);
LOBs will be passed to the Translator in the language objects as a Literal containing a java.sql.Blob, java.sql.Clob, or java.sql.SQLXML. You can use these interfaces to retrieve the data in the large object and use it for an insert or update.
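For example, a clob value can be streamed to the source rather than materialized in memory. The following sketch uses a hypothetical writeClob helper that your UpdateExecution might call for each incoming value:

```java
// Hypothetical helper called from an UpdateExecution; streams the clob
// contents rather than materializing the entire value in memory.
void writeClob(Literal literal) throws Exception {
    Object value = literal.getValue();
    if (value instanceof java.sql.Clob) {
        java.sql.Clob clob = (java.sql.Clob) value;
        try (java.io.Reader reader = clob.getCharacterStream()) {
            // write the characters from reader to the source system
        }
    }
}
```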
The ExecutionFactory class defines all the methods that describe the capabilities of a Translator. These are used by the Connector Manager to determine what kinds of commands the translator is capable of executing. The base ExecutionFactory class implements all the basic capabilities methods with defaults indicating that your translator supports no capabilities. Your extended ExecutionFactory class must override the necessary methods to specify which capabilities your translator supports. Consult the debug log of query planning (set showplan debug) to see if the pushdown you desire requires additional capabilities.
Note capabilities are determined and cached for the lifetime of the translator. Capabilities based on connection/user are not supported.
These capabilities can be specified in the ExecutionFactory class.
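For example, a translator that can push down equality comparisons, IN predicates, and sorting might override the corresponding "supports" methods in its ExecutionFactory subclass; this is a sketch of the pattern:

```java
@Override
public boolean supportsCompareCriteriaEquals() {
    return true; // allow WHERE col = value to be pushed to the source
}

@Override
public boolean supportsInCriteria() {
    return true; // allow WHERE col IN (...) to be pushed to the source
}

@Override
public boolean supportsOrderBy() {
    return true; // allow ORDER BY to be pushed to the source
}
```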