Administration and Configuration Guide

Red Hat JBoss Data Virtualization 6.3

This guide is for administrators.

Red Hat Customer Content Services

Abstract

Read this guide to learn how to administer Red Hat JBoss Data Virtualization.

Part I. Introduction

Chapter 1. Read Me

1.1. Back Up Your Data

Warning

Red Hat recommends that you back up your system settings and data before undertaking any of the configuration tasks mentioned in this book.

1.2. Variable Name: EAP_HOME

EAP_HOME refers to the root directory of the Red Hat JBoss Enterprise Application Platform installation on which JBoss Data Virtualization has been deployed.

1.3. Variable Name: MODE

MODE will either be standalone or domain depending on whether JBoss Data Virtualization is running in standalone or domain mode. Substitute one of these whenever you see MODE in a file path in this documentation. (You need to set this variable yourself, based on where the product has been installed in your directory structure.)
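For example, a path written in this guide as EAP_HOME/MODE/configuration expands to one of the following, depending on the mode in use:
EAP_HOME/standalone/configuration
EAP_HOME/domain/configuration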

1.4. Red Hat Documentation Site

Red Hat's official documentation site is available at https://access.redhat.com/site/documentation/. There you will find the latest version of every book, including this one.

Part II. Introduction to Administering JBoss Data Virtualization

Chapter 2. Administration Tools

2.1. Administration Tools Overview

The following tools are available for you to use to administer Red Hat JBoss Data Virtualization:
Management Console
Management Command Line Interface (CLI)
AdminShell
Admin API
JBoss Operations Network

2.2. Management Console

2.2.1. Management Console and JBoss Data Virtualization

Red Hat JBoss Data Virtualization provides a plug-in for the web-based Management Console. This plug-in provides a web interface that allows you to configure and monitor Data Virtualization services deployed on a running JBoss EAP instance.

Note

For more information about general use of the Management Console, not specific to JBoss Data Virtualization, see the Red Hat JBoss Enterprise Application Platform Administration and Configuration Guide.

2.2.2. Log in to the Management Console

Prerequisites

  • JBoss EAP 6 must be running.
  • You must have already created a user with permissions to access the Console.
  1. Launch your web browser and go to this address: http://localhost:9990/console/App.html

    Note

    Port 9990 is predefined as the Management Console socket binding.
  2. Enter your username and password to log in to the Management Console.

    Figure 2.1. Log in screen for the Management Console

Result

Once logged in, you are redirected to the following address and the Management Console landing page appears: http://localhost:9990/console/App.html#home

2.2.3. Management Console - Configuration tab

The Configuration tab contains both general and Data Virtualization-specific configuration properties. Click Teiid in the left-hand navigational tree to view Teiid configuration. These are the sub-categories:

Figure 2.2. Configuration tab

  • Query Engine - view and configure core Teiid engine properties.
  • Translators - view and remove the Translators configured in Teiid.
  • Transports - view, add and remove transports to the Teiid engine.
  • Audit Logs - turn on or off audit logging for Teiid.
Use the Configuration tab to change any aspect of Teiid's configuration. The configuration properties are organized across several tabs. Click the Need Help link on each tab to see a description of each field.

Note

Some properties require you to restart the server before changes you have made will take effect.

2.2.4. Management Console - Runtime tab

The Runtime tab shows runtime information about the JBoss EAP server and the Teiid subsystem. You can view runtime information about Teiid by clicking Virtual Databases in the left-hand navigational tree.

Figure 2.3. Runtime tab

All the VDBs deployed on the server are shown in the top-level table. When you select and highlight a VDB, more details about that VDB are displayed in the tabs below. Each tab groups related functionality.
  • Summary: This panel shows details about the selected VDB, any properties associated with it, and any other VDBs it imports. It is designed to give a quick overview of the VDB status.
  • Models: This panel shows all the models defined in a given VDB, along with each model's translator name and source connection JNDI name. It also shows the type of each model and whether it is multi-source. When a particular model is selected, the panel shows all the properties defined for that model and any errors associated with it. If your VDB is not in the active status, review and fix these errors to resolve the deployment issues.
    The DDL button in the Models panel shows the schema for the given model. Note that this does not work for XML-based models designed using Teiid Designer. The tool lets you edit a JNDI name by double-clicking it and modifying it. This is useful if you want to change the JNDI name in a given environment.
  • Overrides: This panel shows all the overridden translators and their properties.
  • Teiid: This screen shows runtime Teiid metrics.
  • Caching: This panel shows caching statistics for the result set cache, indicating how effectively the cache is being used. It also shows all the internal materialized views in the VDB and their load status, including when they were loaded. It provides options to invalidate a specific view, or all the views in a VDB, so that their contents are refreshed or reloaded from the source. This panel also provides a UI to flush the entire result set cache or prepared plan cache for the selected VDB.
  • Data Roles: This panel shows all the policies defined in the VDB, either with Teiid Designer or hand coded in the vdb.xml file. For each selected policy, it lists that policy's permissions, showing what kind of authorizations a user has, and shows the enterprise role assignments mapped to that policy. You can also add or remove an enterprise role from the policy using this UI.
  • Requests: This panel shows all the requests running against the selected VDB at the time of selection. Click refresh to get more up-to-date requests. The top table shows the requests users have submitted to the Teiid engine; when one of those requests is selected, the bottom table shows all the source-level queries submitted to the physical sources by the Teiid engine. You can also use this UI to submit a cancel request for a user-level query. Because cancel is an asynchronous operation, it is not guaranteed to succeed; the query may already have finished by the time the cancel is submitted.
  • Sessions: This panel shows all the active sessions that are connected to the selected VDB. It shows their connection properties and also gives an option to terminate either a selected session or all the sessions.

2.2.5. Management Console - Administration tab

Use the Administration tab to configure roles and groups. You can configure the roles that a user is included in or excluded from, and you can also configure the groups to be included in or excluded from a role. Only users in the SuperUser or Administrator roles can perform this configuration.

Figure 2.4. Administration tab

For more information related to user management, see the Red Hat JBoss EAP Administration and Configuration Guide.

2.3. Management CLI

2.3.1. Launch the Management CLI

Procedure 2.1. Launch CLI in Linux or Microsoft Windows Server

    • Launch the CLI in Linux

      Run the EAP_HOME/bin/jboss-cli.sh file by entering the following at a command line:
      $ EAP_HOME/bin/jboss-cli.sh
    • Launch the CLI in Microsoft Windows Server

      Run the EAP_HOME\bin\jboss-cli.bat file by double-clicking it, or by entering the following at a command line:
      C:\>EAP_HOME\bin\jboss-cli.bat

2.3.2. Quit the Management CLI

From the Management CLI, enter the quit command:
[domain@localhost:9999 /] quit

2.3.3. Connect to a Managed Server Instance Using the Management CLI

Procedure 2.2. Connect to a Managed Server Instance

  • Run the connect command

    From the Management CLI, enter the connect command:
    [disconnected /] connect
    Connected to domain controller at localhost:9999
    • Alternatively, to connect to a managed server when starting the Management CLI on a Linux system, use the --connect parameter:
      $ EAP_HOME/bin/jboss-cli.sh --connect
    • The --connect parameter can be used to specify the host and port of the server. To connect to the address 192.168.0.1 with the port value 9999 the following would apply:
      $ EAP_HOME/bin/jboss-cli.sh --connect --controller=192.168.0.1:9999

2.3.4. Obtain Help with the Management CLI

Summary

If you need to learn a CLI command or are unsure how to proceed, the Management CLI features a help dialog with general and context-sensitive options. (Note that the help commands that depend on the operation context require an established connection to either a standalone or domain controller. These commands do not appear in the listing until the connection has been established.)

  1. For general help

    From the Management CLI, enter the help command:
    [standalone@localhost:9999 /] help
  2. Obtain context-sensitive help

    From the Management CLI, enter the help --commands command:
    [standalone@localhost:9999 /] help --commands
  3. For a more detailed description of a specific command, enter the command, followed by --help.
    [standalone@localhost:9999 /] deploy --help
Result

The CLI help information is displayed.

2.4. AdminShell

2.4.1. AdminShell Features

AdminShell provides the following features:
Administration
AdminShell can be used to connect to a JBoss Data Virtualization instance in order to perform various administrative tasks.
Data Access
AdminShell can be used to connect to a virtual database (VDB) and run SQL commands to query VDB data and view results.
Migration
AdminShell can be used to develop scripts that will move VDBs and associated components from one development environment to another. (Users can test and automate migration scripts before executing them in production deployments.)
Testing
The built-in JUnit Test Framework allows users to write regression tests to check system health and data integrity. The written tests validate system functionality automatically, removing the need for manual verification by QA Personnel.

2.4.2. AdminShell Scripting

Use the Groovy language to write AdminShell scripts to automate tasks. AdminShell provides a custom library of Groovy functions to help administer JBoss Data Virtualization. These custom functions can be used in isolation or alongside other Groovy code.
Here are some basic syntax rules to become familiar with:
  • All commands and functions are case sensitive.
  • Groovy commands do not need to end with a semicolon; it is optional.
  • A function will take zero or more parameters as input.
    • Function parameters are provided within parentheses.
    • String parameters must be provided within single or double quotes.
    connectAsAdmin("localhost", "9999", "user", "password", "conn1")
    
  • Other Java classes and methods can be accessed and invoked from within scripts as long as required libraries are already in the class path. (JAR files added to the lib directory will be included in the class path automatically.)
    import my.package.*;
    myObject = new MyClass();
    myObject.doSomething();
    

Note

You should disconnect from JBoss Data Virtualization before exiting AdminShell by entering the disconnect() command.

2.4.3. AdminShell Help

All of the custom administrative methods available in AdminShell can be listed by using the adminHelp() method (note that only the custom AdminShell methods will be shown):
adminHelp()
If you require a specific method's definition and input parameters, use adminHelp("METHOD"):
adminHelp("deploy")

/*
*Deploy a VDB from file
*/
void deploy(
String /* file name */)
throws AdminException
throws FileNotFoundException
Use the sqlHelp() method to list all SQL extension methods:
sqlHelp()
If you want to obtain a specific method's definition and input parameters, use sqlHelp("METHOD").

2.4.4. AdminShell Basic Commands

Table 2.1. AdminShell Basic Commands

Command Description
adminHelp();
Displays all of the custom AdminShell methods available.
adminHelp("METHOD");
Displays the definition and input parameters for the AdminShell method supplied.
sqlHelp();
Displays all of the custom SQL AdminShell methods available.
sqlHelp("METHOD");
Displays the definition and input parameters for the SQL AdminShell method supplied.
println "STRING";
Print something to console.
sql = connect();
Get an extended Groovy SQL connection using the default connection specified in the connection.properties file.
sql.execute("SQL");
Run any SQL command.
connectAsAdmin();
Connects as an admin user using the default connection specified in the connection.properties file. This command does not require a VDB name. Note that this is an admin connection, not a JDBC connection, so you cannot issue SQL commands using this connection. Note that if SSL is being used, you would need to adjust the connection URL and the client SSL settings as necessary (typically only for 2-way SSL).
println getConnectionName();
Returns the active connection name.
useConnection("cNAME");
Switches to using the given connection.
disconnect();
Disconnects the active connection.
disconnectAll();
Disconnects all connections (both active and suspended).
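The following short script strings several of these commands together; the query is purely illustrative, both connections fall back to the defaults in connection.properties, and the final close() call is the standard groovy.sql.Sql method rather than an AdminShell-specific command:
// Administrative connection using the defaults in connection.properties
connectAsAdmin();
println getConnectionName();
disconnect();

// Extended Groovy SQL connection, also using the defaults
sql = connect();
sql.eachRow("select * from sys.tables") { println it }
sql.close();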

2.4.5. Execute a Script in Non-interactive AdminShell

Procedure 2.3. Execute a Script in Non-interactive AdminShell

  1. Open a Command Line Terminal

  2. Run the Script

    Run the ./adminshell.sh load [-Dparam=value] PATH/FILENAME command.

Note

Optional properties passed in using -D can be accessed from within the script file by using System.getProperty().
value = System.getProperty("param")
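For example, assuming a hypothetical script named deploy_vdb.groovy that expects a vdbPath property, the invocation and the corresponding lookup inside the script would look like this:
./adminshell.sh load -DvdbPath=/tmp/enterprise.vdb deploy_vdb.groovy

// inside deploy_vdb.groovy
connectAsAdmin();
deploy(System.getProperty("vdbPath"));
disconnect();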

Important

Any connections established within the script should be disconnected before the script finishes.

2.4.6. Launch the Interactive AdminShell

Procedure 2.4. Launch the Interactive AdminShell

  1. Navigate to the adminshell Directory

    1. Open a command line terminal.
    2. Navigate to the AdminShell directory: cd /EAP_HOME/dataVirtualization/teiid-adminshell/.
  2. Unzip the AdminShell: unzip teiid-VERSION-adminshell-dist.zip
  3. Launch the adminshell Script

    Run the ./adminshell.sh command.
    The log output is as follows:
    ======================================================================
    
      Teiid AdminShell Bootstrap Environment
    
      TEIID_HOME  = /EAP_HOME/dataVirtualization/teiid-adminshell-VERSION
      CLASSPATH   = /EAP_HOME/dataVirtualization/teiid-adminshell-VERSION/lib/patches/*:/EAP_HOME/dataVirtualization/teiid-adminshell-VERSION/lib/teiid-adminshell-VERSION.jar:/EAP_HOME/dataVirtualization/teiid-adminshell-VERSION/lib/*
      JAVA        = /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java
    
    ======================================================================
    
    ===> [import static org.teiid.adminshell.AdminShell.*; import static org.teiid.adminshell.GroovySqlExtensions.*; import org.teiid.adminapi.*;]
    Groovy Shell (1.7.2, JVM: 1.7.0_25)
    Type 'help' or '\h' for help.
    -------------------------------------------------------------------------------
    groovy:000>
    
Result

The interactive AdminShell is running. At this point you can execute individual commands line by line.

Note

You can exit the interactive AdminShell at any time by entering the exit command.

2.4.7. Helpful Tips for the Interactive AdminShell

Table 2.2. Interactive AdminShell Commands

Command Description
↑ ('up arrow')
Displays previous commands executed, starting with the most recent.
help
Displays additional commands provided specifically for use within the interactive shell.
Shell Behavior

The interactive shell uses a special interpreter and displays behavior different from what would be expected from running a Groovy script:

  • def statements do not define a variable in the context of the shell. For example, do not use def x = 1; use x = 1.
  • The shell cannot parse Groovy classes that use annotations.

2.4.8. Save a Script in Interactive AdminShell

Procedure 2.5. Save a Script in Interactive AdminShell

  1. Open the Interactive AdminShell

    1. Open a command line terminal.
    2. Navigate to /EAP_HOME/dataVirtualization/teiid-adminshell-VERSION/.
    3. Run the ./adminshell.sh command.
  2. Start Recording to a File

    Enter the record start PATH/FILENAME command.
  3. Enter Desired Commands to Record

    Enter a series of commands for which you want to record the input/output.
  4. Stop Recording to the File

    Enter the record stop command.
Result

All input and output between the moments you issue the record start and record stop commands are captured in PATH/FILENAME.

Note

Since both input and output are recorded, the file will need some editing before it can be executed as a script file.

2.4.9. Execute a Script in Interactive AdminShell

Procedure 2.6. Execute a Script in Interactive AdminShell

  1. Open the Interactive AdminShell

    1. Open a command line terminal.
    2. Navigate to /EAP_HOME/dataVirtualization/teiid-adminshell-VERSION/.
    3. Run the ./adminshell.sh command.
  2. Run the Script

    • Run the Script using the Interactive Shell load Command

      Run the load PATH/FILENAME command.
    • Run the Script using the Groovy evaluate Command

      Run the evaluate("PATH/FILENAME" as File) command.

Note

You can also execute a script by entering it line by line after opening the interactive shell.

2.4.10. Launch the AdminShell GUI

Procedure 2.7. Launch the AdminShell GUI

  1. Navigate to the adminshell Directory

    1. Open a command line terminal.
    2. Navigate to /EAP_HOME/dataVirtualization/teiid-adminshell-VERSION/.
  2. Launch the adminshell-console Script

    Run the ./adminshell-console.sh command.
Result

The AdminShell GUI is displayed.

Note

You can exit the AdminShell GUI at any time by clicking File → Exit.

2.4.11. Save a Script in AdminShell GUI

Procedure 2.8. Save a Script in AdminShell GUI

  1. Navigate to the adminshell Directory

    1. Open a command line terminal.
    2. Navigate to /EAP_HOME/dataVirtualization/teiid-adminshell-VERSION/.
  2. Type a Script

    Type your script in the upper script window of the AdminShell GUI. You may find it useful to test the script as you are preparing it by executing it via Script → Run.
  3. Save the Script

    1. Select File → Save As.
    2. Choose a location and filename for the script and click on Save.

2.4.12. Execute a Script in AdminShell GUI

Procedure 2.9. Execute a Script in AdminShell GUI

  1. Open the AdminShell GUI

    1. Open a command line terminal.
    2. Navigate to /EAP_HOME/dataVirtualization/teiid-adminshell-VERSION/.
    3. Run the ./adminshell-console.sh command.
  2. Prepare the Script

    • Type a New Script

      Type your script in the upper script window of the AdminShell GUI.
    • Load a Saved Script

      1. Select File → Open.
      2. Locate the desired script file and click on Open.
        The script will appear in the upper script window of the AdminShell GUI.
  3. Run the Script

    Select Script → Run.

2.4.13. AdminShell Connection Properties

The EAP_HOME/dataVirtualization/teiid-adminshell-VERSION/connection.properties file provides the default connection properties used by AdminShell to connect to a JBoss Data Virtualization instance:
jdbc.user=user
jdbc.password=user
jdbc.url=jdbc:teiid:admin@mm://localhost:31000;

admin.host=localhost
admin.port=9999
admin.user=admin
admin.password=admin
A call to connect() or connectAsAdmin() without any input parameters will connect to JBoss Data Virtualization using the settings defined in this properties file. connect() uses the properties prefixed with jdbc and connectAsAdmin() uses the properties prefixed with admin. Alternatively, you can pass parameters to the connect() or connectAsAdmin() methods to connect using properties of your choice:
connect("URL", "USER", "PASSWORD")

Warning

Do not leave passwords stored in clear text. The example above is for demonstration purposes only.
If you want to store your password in the file, take the necessary measures to secure it. Otherwise, do not use this feature at all; instead, enter your password interactively or in some other secure way.

2.4.14. Multiple Connections in AdminShell

Using AdminShell, users can manage more than one connection to one or more JBoss Data Virtualization instances. For example, a user might want to have one connection to a development server and another to an integration server, simultaneously.
Every time a new connection is made, it is assigned a unique name and becomes the active connection. If there is already an active connection, it is suspended (not closed).
The getConnectionName() method returns the name of the active connection. The connection name can be assigned to a variable cName by the following command:
cName = getConnectionName();
The name of the current connection is required in order to switch from one to another. To change the active connection, use the useConnection() command, supplying the name (or a variable with the name assigned) of the connection you wish to use:
useConnection(cName);
Example

The following example demonstrates how to use two connections and switch between them:

// Create a connection
connectAsAdmin();

//Assign the connection name to conn1
conn1 = getConnectionName();

deploy("file.vdb")

// Create a second connection (which is now the active connection)
connectAsAdmin();

//Assign the new connection name to conn2
conn2 = getConnectionName();

deploy("file.vdb")

// Switch the active connection back to conn1
useConnection(conn1);

// Close the active connection (conn1)
disconnect();

2.4.15. Example Scripts

Example 2.1. Deploying a VDB

connectAsAdmin();
deploy("/path/to/<name>.vdb");

// check to validate the deployment
VDB vdb = getVDB("<name>", 1);
if (vdb != null){
     print (vdb.getName()+"."+vdb.getVersion()+" is deployed");
}
else {
     print ("<name>.vdb failed to deploy");
}

Example 2.2. Create a Datasource (Oracle)

connectAsAdmin();

// first deploy the JDBC jar file for Oracle
deploy("ojdbc6.jar");

props = new Properties();
props.setProperty("connection-url","jdbc:oracle:thin:@<host>:1521:<sid>");
props.setProperty("user-name", "scott");
props.setProperty("password", "tiger");

createDataSource("oracleDS", "ojdbc6.jar", props);

Example 2.3. Execute SQL Query against Teiid

sql = connect("jdbc:teiid:<vdb>@mm://<host>:31000", "user", "user");

// select
sql.eachRow("select * from sys.tables") { println "${it}" }

// update, insert, delete
sql.execute(<sql command>);

2.4.16. AdminShell FAQ

Q: Why won't the adminhelp command work in the GUI tool?
Q: Are there any pre-built scripts available?
Q: What is the difference between connectAsAdmin() and connect()?
Q: What does getAdmin() do? Why do I need it?
Q: Is IDE support available for writing the scripts?
Q: Can I use AdminShell methods in other environments?
Q: Is debugging support available?
Q:
Why won't the adminhelp command work in the GUI tool?
A:
The AdminShell GUI environment does not understand shell commands such as load, help, and adminhelp, since they are not directly supported by Groovy. In the GUI you should use the equivalent functional form; for example, instead of using adminhelp use adminHelp().
Q:
Are there any pre-built scripts available?
A:
Not currently.
Q:
What is the difference between connectAsAdmin() and connect()?
A:
The connectAsAdmin() method creates a contextual connection to the AdminAPI of JBoss Data Virtualization. The connect() method returns an extension of the Groovy SQL object to be used for SQL calls to JBoss Data Virtualization.
Q:
What does getAdmin() do? Why do I need it?
A:
The getAdmin() method returns the current contextual connection object that was created when you connected with connectAsAdmin(). This object implements the interface org.teiid.adminapi.Admin . The AdminShell commands provided are wrappers around this API. Advanced users can use this API directly if the provided wrapper commands do not meet their needs.
Q:
Is IDE support available for writing the scripts?
A:
The AdminShell GUI tool is a light-weight IDE. Full IDE support is available for Groovy, but requires manual manipulation of the class path and script imports.
Q:
Can I use AdminShell methods in other environments?
A:
The AdminShell methods (including the named contextual connections, AdminAPI wrapper, and help system) have no direct dependencies on Groovy and can be used in other scripting languages.
To use the AdminShell methods in another language, simply import the static methods and Admin classes into your script. You will also need to ensure that the EAP_HOME/dataVirtualization/teiid-adminshell-VERSION/lib/teiid-VERSION.jar and EAP_HOME/dataVirtualization/teiid-adminshell-VERSION/lib/teiid-adminshell-VERSION.jar are in your class path.
The following snippet shows import statements that would work in Java, BeanShell, Groovy and others:
import static org.teiid.adminshell.AdminShell.*;
import org.teiid.adminapi.*;
Q:
Is debugging support available?
A:
The Interactive AdminShell and GUI have built-in support for inspection of the current state; however, line based debugging is beyond the scope of this document.

2.5. JBoss Operations Network

2.5.1. Installing JBoss Agent Plug-in Packs

JBoss Operations Network has additional agent plug-ins to manage other JBoss products. Although these are JBoss ON resource plug-ins, they are included in separate packages and require a separate subscription to download them.
  1. Download the plug-in JAR files from the Customer Support Portal.
    1. Select JBoss ON for PRODUCT.
    2. Select Download.
  2. Extract the plug-in JAR files into the JON_HOME/plugins directory.
  3. Have the JBoss ON server update its plug-ins. This can be done through the JBoss ON GUI or by restarting the server.
  4. To load the plug-ins through the GUI:
    1. Open the Administration tab.
    2. In the Configuration area on the left, select the Agent Plug-ins link.
    3. At the bottom of the list of loaded agent plug-ins, click the SCAN FOR UPDATES button.
  5. Have the agents reload their plug-ins to load the new plug-ins. This can be done from the agent's command prompt using the plugins command:
    > plugins update

    Note

    This can also be done in the JBoss ON GUI by scheduling an update plugins operation for an agent or a group of agents. Alternatively, restart the agents.

2.6. Dashboard Builder

2.6.1. Data Sources

Red Hat JBoss Dashboard Builder can be connected to an external database, either through the container's JNDI or directly using a JDBC driver to access the database. Connections to databases can be configured in the Showcase workspace on the External Connections page. After you have established the connection to the database, you need to create a data provider that collects the data from the database and allows you to visualize it as an indicator in the dashboard area of a page.
When connecting to CSV files to acquire data, the connection is established directly through the data provider.
Note that Red Hat JBoss Dashboard Builder makes use of its own local internal database to store its local data. This database is read-only for Dashboard Builder, but is accessible from outside.

2.6.2. Connecting to data sources

You can connect either to a JNDI data source, that is, a data source set up and accessible from the application container, or directly to the data source as a custom data source, if the application container has the correct JDBC driver deployed.
To connect to an external data source, do the following:
  1. Make sure the data source is up and running and that the application server has access to the data source. (Check the driver, the login credentials, and so on. In Red Hat JBoss EAP 6, you can do so in the Management Console under Subsystems → Connector → Datasources.)
  2. In Dashboard Builder, on the Tree Menu (by default located in the lateral menu of the Showcase perspective), go to Administration → External connections.
  3. On the displayed External Connection panel, click the New DataSource button.
  4. Select the data source type (JNDI or Custom DataSource) and provide the respective data source parameters below.

2.6.3. Creating data providers

To create a new data provider, do the following:
  1. In the Tree Menu (the panel in the lateral menu of the Showcase workspace), click Administration → Data providers.
  2. In the Data Providers panel, click the Create new data provider button.
  3. In the updated Data Providers panel, select the data provider type in the Type dropdown menu, depending on the source you want the data provider to operate on.
  4. Define the data provider parameters:
    Data provider over a CSV file
    • Name: user-friendly name and its locale
    • CSV file URL: the url of the file (for example, file:///home/me/example.csv)
    • Data separator: the symbol used as separator in the CSV file (the default value is semicolon; if using comma as the separator sign, make sure to adapt the number format if applicable)
    • Quoting symbol: the symbol used for quotes (the default value is the double-quotes symbol; note that the symbol may vary depending on the locale)
    • Escaping symbol: the symbol used for escaping the following symbol in order to keep its literal value
    • Date format: date and time format
    • Number format: the format of numbers as resolved to thousands and decimals
    Data provider over a database (SQL query)
    • Name: user-friendly name and its locale
    • Data source: the data source to query (the default value is local, which allows you to query the Dashboard Builder database)
    • Query: query that returns the required data
  5. Click Attempt data load to verify the parameters are correct.
  6. Click Save.
  7. In the table with the detected data, define the data type and if necessary provide a user-friendly name for the data. Click Save.
The data provider can now be visualized in an indicator on a page of your choice.
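For example, a query for an SQL-type data provider might aggregate rows from a hypothetical expense_reports table (the table and column names here are illustrative only):
SELECT department, SUM(amount) AS total_amount
FROM expense_reports
GROUP BY department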

2.6.4. Dashboard Builder Workspace

2.6.4.1. Creating a workspace

To create a new workspace, do the following:
  1. Click the Create workspace button on the top menu.
    The management console with the Workspace node expanded and workspace management area with workspace details on the right is displayed.
  2. In the Create workspace table on the right, set the workspace parameters:
    • Name: workspace name and its locale
    • Title: workspace title and its locale
    • Skin: skin to be applied on the workspace resources
    • Envelope: envelope to be applied on the workspace resources
  3. Click Create workspace.
  4. Optionally, click the workspace name in the tree menu on the left and in the area with workspace properties on the right define additional workspace parameters:
    • URL: the workspace URL
    • User home search: the home page setting
      If set to Role assigned page, the home page configured in the page permissions is applied. Hence, you can set a different home page for every role. (If you set it to Current page, all users will use the current home page as their home page.)

2.6.4.2. Creating Pages

To create a new page, do the following:
  1. Make sure you are in the correct workspace.
  2. Next to the Page dropdown box in the top menu, click the Create new page button.
  3. The management console with the Pages node expanded and page management area with page details on the right is displayed.
  4. In the Create new page table on the right, set the page parameters:
    • Name: page name and its locale
    • Parent page: parent page of the new page
    • Skin: skin to be applied on the page
    • Envelope: envelope to be applied on the page
    • Page layout: layout of the page
  5. Click Create new page.
  6. Optionally, click the page name in the tree menu on the left and in the area with page properties on the right define additional page parameters:
    • URL: the page URL
    • Visible page: visibility of the page
    • Spacing between regions and panels

2.6.4.3. Defining Page permissions

Although users are usually authorized using the authorization method set up for the underlying application container (on Red Hat JBoss EAP, the "other" security domain by default), Red Hat JBoss Dashboard Builder has its own role-based access control (RBAC) management tool to facilitate permission management on an individual page or multiple pages.
To define permissions on a page or all workspace pages for a role, do the following:
  1. On the top menu, click the General configuration button; the management console is displayed.
  2. Under the Workspace node on the left, locate the page or the Pages node.
  3. Under the page/pages node, click the Page permissions node.
  4. In the Page permissions area on the right, delete any previously defined permission definitions if applicable and define the rights for the required role:
    1. In the Permission assignation table, locate the Select role dropdown menu and pick the respective role.
    2. In the Actions column of the table, enable or disable individual permissions.
  5. Click Save.

2.6.4.4. Panels

A panel is a GUI widget, which can be placed on a page. There are three main types of panels:
Dashboard panels
are the primary BAM panels and include the following:
  • Data provider manager: a panel with a list of available data providers and data provider management options
  • Filter and Drill-down: a panel that displays all KPIs and their values to facilitate filtering in indicators on the given page defined over a data provider
  • HTML Editor panel: a panel with static content
  • Key Performance Indicator (indicator): a panel that visualizes the data of a data provider
Navigation panels
are panels that provide navigation functions and include the following:
  • Breadcrumb: a panel with the full page hierarchy pointing to the current page
  • Language menu: a panel with available locales (by default in the top center)
  • Logout panel: a panel with the name of the currently logged-in user and the logout button
  • Page menu custom: a panel with vertically arranged links to all pages in the workspace (the list of pages can be adjusted) and general controls for the HTML source of the page
  • Page menu vertical: a panel with vertically arranged links to all pages in the workspace (the list of pages can be adjusted)
  • Page menu horizontal: a panel with horizontally arranged links to all pages in the workspace (the list of pages can be adjusted)
  • Tree menu: a panel with the links to essential features such as Administration, Home (on the Home page of the Showcase workspace displayed on the left, in the lateral menu)
  • Workspace menu custom: a panel with links to available workspaces (the list of workspaces can be adjusted) and general controls for the HTML source of the workspace
  • Workspace menu horizontal: a horizontal panel with links to available workspaces (the list of workspaces can be adjusted)
  • Workspace menu vertical: a vertical panel with links to available workspaces (the list of workspaces can be adjusted)
System panels
are panels that provide access to system settings and administration facilities and include the following:
  • Data source manager: a panel for management of external data sources
  • Export dashboards: a panel for exporting dashboards
  • Export/Import workspaces: a panel for exporting and importing of workspaces

2.6.4.5. Adding panels

To add an existing panel to a page or to create a new panel, do the following:
  1. Make sure the respective page is open (in the Page dropdown menu of the top menu select the page).
  2. In the top menu, click the Create a new panel in current page button.
  3. In the displayed dialog box, expand the panel type you want to add (Dashboard, Navigation, or System) and click the panel you wish to add.
  4. From the Components menu on the left, drag and drop the name of an existing panel instance or the Create panel item into the required location on the page.
    If inserting a new indicator, the Panel view with the graph settings will appear. Define the graph details and close the dialog.
    If adding an instance of an already existing indicator, you might not be able to use it, if it is linked to the KPIs on the particular original page. In such a case, create a new panel.
  5. If applicable, edit the content of the newly added panel.

2.6.5. Dashboard Builder Filters

Filters work in the following way:
  • You can define "shared" properties across several data set providers. A shared property is a property with the same id number in at least two different data providers.
  • If you build a dashboard which refers to two or more data providers containing shared properties and try to filter by a shared property, then any of the KPIs containing this property will be filtered.
  • Shared properties can be useful for implementing "join"-like filter behaviour. This allows you to filter several KPIs belonging to different data providers simultaneously.
  • To disable the join-like behaviour, adjust the property ids on the Data Provider Column Definition screen. (Ensure the property ids are unique and do not clash between data provider definitions)

Part III. User Management

Chapter 3. User Accounts

3.1. User Accounts

The following account types are available for you to use in JBoss Data Virtualization:
JBoss EAP Management User
You must have a JBoss EAP Management User in order to administer your JBoss Data Virtualization installation via the Management Console, Management CLI, JBoss Developer Studio, and Admin API.
JBoss Data Virtualization User
JBoss Data Virtualization users can access the virtual databases (VDBs). You can define what permissions each of these users has to dictate what they can do with these databases.
Hierarchical Database User
Hierarchical database users can access the provided hierarchical database. You can define what permissions each of these users has to dictate what they can do with this database.

Note

User/role names and details are independent of any other accounts such as operating system accounts. They relate only to JBoss EAP and JBoss Data Virtualization.

3.2. Data Roles

All authenticated users have access to a VDB. To restrict access, configure data roles. Set these in the Teiid Designer or the dynamic VDB's META-INF/vdb.xml file.
As part of the data role definition, you can map them to JAAS roles specified in <mapped-role-name> tags. You can set up JAAS roles using the web console.
How these JAAS roles are associated with users depends on which particular JAAS login module you use. For instance, the default UsersRolesLoginModule associates users with JAAS roles in plain text files.
For more information about data roles, see Red Hat JBoss Data Virtualization Development Guide: Reference Material.
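For example, a minimal data role in a dynamic VDB's vdb.xml might look like the following sketch; the role, model, and mapped role names are illustrative, not taken from this guide:
<data-role name="ReadOnly" any-authenticated="false">
    <description>Read-only access to the Accounts model</description>
    <permission>
        <resource-name>Accounts</resource-name>
        <allow-read>true</allow-read>
    </permission>
    <mapped-role-name>dv-readers</mapped-role-name>
</data-role>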

Important

Do not use "admin" or "user" as JAAS role names as these are reserved specifically for Dashboard Builder permissions.

3.3. Adding a JBoss EAP Management User

3.3.1. Add the User for the Management Interfaces

The following procedure documents how to create the initial administrative user, in the event such a user is not created by the chosen installation method. This initial administrative user can use the web-based Management Console and remote instances of the Management CLI to configure and administer JBoss EAP 6 from remote systems.

Procedure 3.1. Create the Initial Administrative User for the Remote Management Interfaces

  1. Run the add-user.sh or add-user.bat script.

    Change to the EAP_HOME/bin/ directory. Invoke the appropriate script for your operating system.
    Red Hat Enterprise Linux
    [user@host bin]$ ./add-user.sh
    Microsoft Windows Server
    C:\bin>  add-user.bat
  2. Choose to add a Management user.

    Press ENTER to select the default option a to add a Management user.
    This user is added to the ManagementRealm and is authorized to perform management operations using the web-based Management Console or command-line based Management CLI. The other choice, b, adds a user to the ApplicationRealm, and provides no particular permissions. That realm is provided for use with applications.
  3. Enter the desired username and password.

    When prompted, enter the username and password. You will be prompted to confirm the password.
  4. Enter group information.

    Add the group or groups to which the user belongs. If the user belongs to multiple groups, enter a comma-separated list. Leave it blank if you do not want the user to belong to any groups.
  5. Review the information and confirm.

    You are prompted to confirm the information. If you are satisfied, type yes.
  6. Choose whether the user represents a remote JBoss EAP 6 server instance.

    Besides administrators, the other type of user which occasionally needs to be added to JBoss EAP 6 in the ManagementRealm is a user representing another instance of JBoss EAP 6, which must be able to authenticate to join a cluster as a member. The next prompt allows you to designate your added user for this purpose. If you select yes, you will be given a hashed secret value, representing the user's password, which would need to be added to a different configuration file. For the purposes of this task, answer no to this question.
  7. Enter additional users.

    You can enter additional users if desired, by repeating the procedure. You can also add them at any time on a running system. Instead of choosing the default security realm, you can add users to other realms to fine-tune their authorizations.
  8. Create users non-interactively.

    You can create users non-interactively, by passing in each parameter at the command line. This approach is not recommended on shared systems, because the passwords will be visible in log and history files. The syntax for the command, using the management realm, is:
    [user@host bin]$ ./add-user.sh username password
    To use the application realm, use the -a parameter.
    [user@host bin]$ ./add-user.sh -a username password
  9. You can suppress the normal output of the add-user script by passing the --silent parameter. This applies only if the minimum parameters username and password have been specified. Error messages will still be shown.
Result

Any users you add are activated within the security realms you have specified. Users active within the ManagementRealm realm are able to manage JBoss EAP 6 from remote systems.

3.3.2. Default User Security Configuration

Introduction

All management interfaces in JBoss EAP 6 are secured by default. This security takes two different forms:

  • Local interfaces are secured by a SASL contract between local clients and the server they connect to. This security mechanism is based on the client's ability to access the local filesystem. This is because access to the local filesystem would allow the client to add a user or otherwise change the configuration to thwart other security mechanisms. This adheres to the principle that if physical access to the filesystem is achieved, other security mechanisms are superfluous. The mechanism happens in four steps:

    Note

    HTTP access is considered to be remote, even if you connect to the localhost using HTTP.
    1. The client sends a message to the server which includes a request to authenticate with the local SASL mechanism.
    2. The server generates a one-time token, writes it to a unique file, and sends a message to the client with the full path of the file.
    3. The client reads the token from the file and sends it to the server, verifying that it has local access to the filesystem.
    4. The server verifies the token and then deletes the file.
  • Remote clients, including local HTTP clients, use realm-based security. The default realm with the permissions to configure the JBoss EAP 6 instance remotely using the management interfaces is ManagementRealm. A script is provided which allows you to add users to this realm (or realms you create). For more information on adding users, see the User Management chapter of the JBoss EAP 6 Administration and Configuration Guide. For each user, the username and a hashed password are stored in a file.
    Managed domain
    EAP_HOME/domain/configuration/mgmt-users.properties
    Standalone server
    EAP_HOME/standalone/configuration/mgmt-users.properties
    Even though the contents of the mgmt-users.properties are masked, the file must still be treated as a sensitive file. It is recommended that it be set to the file mode of 600, which gives no access other than read and write access by the file owner.

3.3.3. Adding a JBoss Data Virtualization User

JBoss Data Virtualization's default configuration uses hashed files to define users and their roles. Follow these steps to add a new user for this default configuration.
  1. Navigate to the bin directory.
  2. Launch the ./add-user.sh script.
  3. Select option b) Application User (application-users.properties), and add the username and password you desire.

Important

The "admin" and "user" role names are reserved specifically for Dashboard Builder permissions. The odata group is needed for OData access.

3.4. Adding a Modeshape User

3.4.1. Create a Modeshape Publishing User

To use Modeshape publishing, you need to ensure that at least one user has the connect and readwrite roles or the connect and admin roles.
By default, all the roles are assigned to all anonymous-roles users.

Part IV. Configuring the Product

Chapter 4. Virtual Databases

4.1. Virtual Databases and JBoss Data Virtualization

JBoss Data Virtualization uses virtual databases (VDBs) to facilitate an enterprise data integration solution.

4.2. Virtual Database Deployment

A virtual database (VDB) must be deployed before it can be accessed by client applications.
A VDB can be deployed by the following means:
File Deployment
File deployment is recommended for quick deployment during development, when the server is running locally on the developer's workstation.
Management Console
Deployment via the web-based Management Console is recommended for a simple way of deploying a VDB to a remote server.
Management Command Line Interface
Deployment via the EAP Management Command Line Interface (CLI) is another simple option for deployment.
AdminShell
Deployment via the AdminShell is recommended for more advanced deployments such as when you want to automate the deployment of artifacts in your environment.
Admin API
Deployment via the Admin API is recommended for more advanced deployments such as when you want to deploy a VDB from within other applications.

Note

A VDB can also be deployed within the Teiid Designer. See the Red Hat JBoss Data Virtualization User Guide.

Important

Removal of a VDB cleans up all of its resources automatically; however, existing sessions will not be terminated automatically.

Warning

When deploying a VDB, overwriting a VDB with one of the same name will terminate all connections to the old VDB unless VDB versioning is used.
Red Hat recommends the use of VDB versioning on production systems.

Warning

You can locally name VDB artifacts as you wish, but the runtime names of your deployed VDB artifacts must have an extension of *.vdb for a zip file or *-vdb.xml for an xml file. Failure to name your file correctly will result in a deployment failure as the Teiid subsystem will not know how to handle it.

4.3. Deploy a VDB via File Deployment

Prerequisites

  • Red Hat JBoss Data Virtualization must be installed.

Procedure 4.1. Deploy a VDB via File Deployment

  1. Copy your VDB into the deploy directory

    Copy your VDB file into the EAP_HOME/standalone/deployments directory.
  2. Create a marker file

    Create an empty marker file of the same name with extension .dodeploy in the same directory. For example, if your VDB name is enterprise.vdb, then the marker file name must be enterprise.vdb.dodeploy.
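    Assuming the same enterprise.vdb example and a standalone server, the two steps reduce to the following commands:
    cp enterprise.vdb EAP_HOME/standalone/deployments/
    touch EAP_HOME/standalone/deployments/enterprise.vdb.dodeploy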

Note

This only works in standalone mode. For domain mode, you must use one of the other available methods.

4.4. Deploy a VDB via Management Console

Prerequisites

  • Red Hat JBoss Data Virtualization must be installed.
  • The JBoss Enterprise Application Platform (EAP) server must be running.
  • You must have a JBoss EAP Management User registered.

Procedure 4.2. Deploy a VDB via Management Console

  1. Launch the console in a Web browser

    Open http://localhost:9990/console/ in a web browser.
  2. Authenticate to the console

    Enter your JBoss EAP administrator username and password when prompted.
  3. Open the Deployments panel

    In the Runtime view, select Server → Manage Deployments.
  4. Add the virtual database

    1. Select the Add button.
    2. Select Choose File and choose the VDB file you want to deploy.
    3. Select Next to review the deployment names then select Save.
    4. Select En/Disable to enable the VDB.

4.5. Deploy a VDB via CLI

Prerequisites

  • Red Hat JBoss Data Virtualization must be installed.
  • The JBoss Enterprise Application Platform (EAP) server must be running.

Procedure 4.3. Deploy a VDB via CLI

  1. Open the command line interface

    Run the EAP_HOME/bin/jboss-cli.sh command.
  2. Connect to the server

    Run the connect command.
  3. Deploy the virtual database

    If in standalone mode, run deploy PATH/DATABASE.vdb.
    If in domain mode, run deploy --all-server-groups PATH/DATABASE.vdb.

Note

In domain mode, you must deploy either to a particular server group or to all available server groups.

4.6. Deploy a VDB via AdminShell

Prerequisites

  • Red Hat JBoss Data Virtualization must be installed.
  • The JBoss Enterprise Application Platform (EAP) server must be running.

Procedure 4.4. Deploy a VDB via AdminShell

  1. Open the interactive AdminShell interface

    Run the ./adminshell.sh command.
  2. Open a connection

    Within the interactive AdminShell, run the connectAsAdmin() command.
  3. Deploy your virtual database

    Run the deploy("PATH/DATABASE.vdb") command.
  4. Close the connection

    Run the disconnect() command.
  5. Exit the interactive shell

    1. Enter the exit command to leave the interactive shell.

Note

In domain mode, when deploying using AdminShell, the VDB is deployed to all available servers.
A VDB can also be deployed via the AdminShell console or using a script via the non-interactive AdminShell. For more information on using these, refer to topics on the Administration Shell.

4.7. Deploy a VDB via Admin API

You can deploy a VDB using the deploy method provided by the Admin interface within the Admin API package (org.teiid.adminapi).
Javadocs for Red Hat JBoss Data Virtualization can be found on the Red Hat Customer Portal.
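The following Groovy sketch illustrates a programmatic deployment; it assumes the AdminFactory helper and the deploy(String, InputStream) method from org.teiid.adminapi, and the host, credentials, and file path are placeholders:
import org.teiid.adminapi.Admin
import org.teiid.adminapi.AdminFactory

// Open an admin connection to the management port (placeholder credentials)
Admin admin = AdminFactory.getInstance().createAdmin("localhost", 9999, "admin", "admin-password".toCharArray())
try {
    // The runtime name must end in .vdb (zip) or -vdb.xml (XML)
    new File("/path/to/enterprise.vdb").withInputStream { stream ->
        admin.deploy("enterprise.vdb", stream)
    }
} finally {
    admin.close()
}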

Note

In domain mode, when deploying using the Admin API, the VDB is deployed to all available servers.

4.8. VDB Dependencies

When deploying a virtual database (VDB) in JBoss Data Virtualization, you also have to provide dependency libraries and configuration settings for accessing the physical data sources used by your VDB. (You can identify all dependent physical data sources by looking in META-INF/vdb.xml within the EAP_HOME/MODE/deployments/DATABASE.vdb file.)
For example, if you are trying to integrate Oracle and file sources in your VDB, then you are responsible for providing both the JDBC driver for the Oracle source, and any necessary documents and configuration files that are needed by the file translator.
Data source instances may be shared between multiple VDBs and applications. Consider sharing connections to sources that are heavy-weight and resource-constrained.
Once you have deployed the VDB and its dependencies, client applications can connect using the JDBC API. If there are any errors in the deployment, the connection attempt will fail and a message will be logged. Use the Management Console (or check the log files) to identify any errors and correct them so you can proceed. See Red Hat JBoss Data Virtualization Development Guide: Server Development for information on how to use JDBC to connect to your VDB.
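As a quick check once a VDB and its dependencies are deployed, you can open a JDBC connection from a Groovy (or Java) client; the VDB name, host, and credentials below are placeholders, and 31000 is the default JDBC transport port used elsewhere in this guide:
import java.sql.DriverManager

Class.forName("org.teiid.jdbc.TeiidDriver")   // Teiid JDBC driver class
conn = DriverManager.getConnection("jdbc:teiid:enterprise@mm://localhost:31000", "user", "password")
rs = conn.createStatement().executeQuery("select * from sys.tables")
while (rs.next()) { println rs.getString("Name") }
conn.close()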

Warning

Some data source configuration files may contain passwords or other sensitive information. For instructions on how to avoid storing passwords in plaintext, refer to the JBoss Enterprise Application Platform Security Guide.

4.9. Data Source Deployment

4.9.1. Accumulo Data Sources

Accumulo data sources use a Data Virtualization-specific JCA connector that is deployed into JBoss EAP during installation. There are many ways to create an Accumulo data source: using the CLI, AdminShell, the admin-console, and so forth. The example shown below uses the CLI tool, as this works in both Standalone and Domain modes.
Execute the following command using the CLI once you have connected to the server. Ensure you provide the correct URL and user credentials. Add any additional properties required by the connector by duplicating the "connection-definitions" command below. Edit the JNDI name to match the JNDI name you used in the VDB:
batch 
/subsystem=resource-adapters/resource-adapter=accumulo/connection-definitions=teiid:add(jndi-name=java:/accumulo-ds, class-name=org.teiid.resource.adapter.accumulo.AccumuloManagedConnectionFactory, enabled=true, use-java-context=true) 
/subsystem=resource-adapters/resource-adapter=accumulo/connection-definitions=teiid/config-properties=ZooKeeperServerList:add(value=localhost:2181) 
/subsystem=resource-adapters/resource-adapter=accumulo/connection-definitions=teiid/config-properties=Username:add(value=user) 
/subsystem=resource-adapters/resource-adapter=accumulo/connection-definitions=teiid/config-properties=Password:add(value=password) 
/subsystem=resource-adapters/resource-adapter=accumulo/connection-definitions=teiid/config-properties=InstanceName:add(value=instancename) 
/subsystem=resource-adapters/resource-adapter=accumulo/connection-definitions=teiid/config-properties=Roles:add(value=public) 
/subsystem=resource-adapters/resource-adapter=accumulo:activate 
runbatch 
These are the properties defined in the RAR file:

Table 4.1.  Properties

Property Description Required? Default?
ZooKeeperServerList
A comma-separated list of ZooKeeper server locations. Each location can contain an optional port, in the format host:port.
True.
None.
Username
Connection User's Name
True.
None.
Password
Connection User's password
True.
None.
InstanceName
Accumulo instance name
True.
None.
Roles
Optional visibility roles for the user; supply multiple roles as a comma-separated list.
False.
None.
To find out all the properties that are supported by this Accumulo Connector execute this command in the CLI:
/subsystem=teiid:read-rar-description(rar-name=accumulo)

4.9.2. Amazon SimpleDB Data Sources

SimpleDB data sources use a Teiid-specific JCA connector that is deployed into EAP during installation. There are many ways to create a SimpleDB data source: using the CLI, AdminShell, the admin-console, and so forth. The example shown below uses the CLI tool, as this works in both Standalone and Domain modes.
Execute the following command using the CLI once you have connected to the server. Make sure you provide the correct access keys. Add any additional properties required by the connector by duplicating the "connection-definitions" command below. Edit the JNDI name to match the JNDI name you used in the VDB.
batch 
/subsystem=resource-adapters/resource-adapter=simpledb/connection-definitions=simpledbDS:add(jndi-name=java:/simpledbDS, class-name=org.teiid.resource.adapter.simpledb.SimpleDBManagedConnectionFactory, enabled=true, use-java-context=true) 
/subsystem=resource-adapters/resource-adapter=simpledb/connection-definitions=simpledbDS/config-properties=AccessKey:add(value=xxx) 
/subsystem=resource-adapters/resource-adapter=simpledb/connection-definitions=simpledbDS/config-properties=SecretAccessKey:add(value=xxx) 
/subsystem=resource-adapters/resource-adapter=simpledb:activate 
runbatch 
To find out all the properties that are supported by this SimpleDB Connector execute the following command in the CLI:
/subsystem=teiid:read-rar-description(rar-name=simpledb)

4.9.3. Cassandra Data Sources

Cassandra data sources use a Data Virtualization-specific JCA connector that is deployed into EAP during installation. There are many ways to create a Cassandra data source, using CLI, AdminShell, admin-console, and so forth. The example shown below uses the CLI tool, as this works in both Standalone and Domain modes.
Execute the following command using the CLI once you have connected to the server. Make sure you provide the correct URL and user credentials. Add any additional properties required by the connector by duplicating the "connection-definitions" command below. Edit the JNDI name to match the JNDI name you used in the VDB:
 
batch 
/subsystem=resource-adapters/resource-adapter=cassandra/connection-definitions=cassandraDS:add(jndi-name=java:/cassandraDS, class-name=org.teiid.resource.adapter.cassandra.CassandraManagedConnectionFactory, enabled=true, use-java-context=true) 
/subsystem=resource-adapters/resource-adapter=cassandra/connection-definitions=cassandraDS/config-properties=Address:add(value=127.0.0.1) 
/subsystem=resource-adapters/resource-adapter=cassandra/connection-definitions=cassandraDS/config-properties=Keyspace:add(value=my-keyspace) 
/subsystem=resource-adapters/resource-adapter=cassandra:activate 
runbatch
To find out all the properties that are supported by this Cassandra Connector execute the following command in the CLI:
/subsystem=teiid:read-rar-description(rar-name=cassandra)

4.9.4. File Data Source

File data sources use a Teiid specific JCA connector that is deployed into EAP during installation. There are many ways to create the file data source, using CLI, AdminShell, admin-console, and so forth. The example shown below uses the CLI tool, as this works in both Standalone and Domain modes.
Execute the following command using the CLI once you have connected to the server. Make sure you provide the correct directory name and other properties below. Add any additional properties required by the connector by duplicating the "connection-definitions" command below. Edit the JNDI name to match the JNDI name you used in the VDB:
batch 
/subsystem=resource-adapters/resource-adapter=file/connection-definitions=fileDS:add(jndi-name=java:/fileDS, class-name=org.teiid.resource.adapter.file.FileManagedConnectionFactory, enabled=true, use-java-context=true) 
/subsystem=resource-adapters/resource-adapter=file/connection-definitions=fileDS/config-properties=Parentdirectory:add(value=/home/rareddy/testing/) 
/subsystem=resource-adapters/resource-adapter=file/connection-definitions=fileDS/config-properties=AllowParentPaths:add(value=true) 
/subsystem=resource-adapters/resource-adapter=file:activate 
runbatch 
To find out all the properties that are supported by this File Connector execute the following command in the CLI:
/subsystem=teiid:read-rar-description(rar-name=file)

4.9.5. Google Spreadsheet Data Sources

You can authenticate your Google account in either of two ways: ClientLogin (which requires login and password) or OAuth (more complicated as it requires the user to supply a refresh token).
The Google JCA connector is named teiid-connector-google.rar. The examples include a sample google.xml file. The JCA connector has a number of config-properties that facilitate authentication. The JCA connector connects to one spreadsheet:

Table 4.2. Configuration Properties

Property Description
AuthMethod
Method to access Google. This property can be set to OAuth2. If you do so, it is necessary to provide RefreshToken.
SpreadsheetName
Required property giving the name of the spreadsheet that is the data source for this connector.
BatchSize
Maximum number of rows that can be fetched at a time. Defaults to 4096.
To obtain an OAuth Refresh Token, go to Google's authentication site:
https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive+https%3A%2F%2Fspreadsheets.google.com%2Ffeeds&redirect_uri=urn:ietf:wg:oauth:2.0:oob&response_type=code&client_id=217138521084.apps.googleusercontent.com
Next, copy the authorization code into the following POST request and run it on the command line. (The refresh token will be in the response.)
curl --data-urlencode code=<AUTH_CODE> \
--data-urlencode client_id=217138521084.apps.googleusercontent.com \
--data-urlencode client_secret=gXQ6-lOkEjE1lVcz7giB4Poy \
--data-urlencode redirect_uri=urn:ietf:wg:oauth:2.0:oob \
--data-urlencode grant_type=authorization_code https://accounts.google.com/o/oauth2/token	
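Once you have a refresh token, you can create the Google Spreadsheet data source with the CLI, following the same pattern as the other connectors in this chapter. The following is a sketch only: the resource adapter name (google), the managed connection factory class name, and the placeholder values are assumptions you should verify against the sample google.xml file mentioned above before use.
batch 
/subsystem=resource-adapters/resource-adapter=google/connection-definitions=googleDS:add(jndi-name=java:/googleDS, class-name=org.teiid.resource.adapter.google.SpreadsheetManagedConnectionFactory, enabled=true, use-java-context=true) 
/subsystem=resource-adapters/resource-adapter=google/connection-definitions=googleDS/config-properties=AuthMethod:add(value=OAuth2) 
/subsystem=resource-adapters/resource-adapter=google/connection-definitions=googleDS/config-properties=RefreshToken:add(value={refresh_token}) 
/subsystem=resource-adapters/resource-adapter=google/connection-definitions=googleDS/config-properties=SpreadsheetName:add(value={spreadsheet_name}) 
/subsystem=resource-adapters/resource-adapter=google:activate 
runbatch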

4.9.6. Infinispan Data Sources

Infinispan data sources use a Teiid specific JCA connector that is deployed into EAP during installation. This connector can be configured to support the following modes of Infinispan caches:

Table 4.3. Registry Properties

Cache Type How to Obtain This Cache
Local Cache
JNDI
Local Cache
Configuration file
Remote Cache
JNDI
Remote Cache
Specify one or more host ports.
Remote Cache
Specify one or more Hot Rod client property files.
You can configure these properties for this connector:

Table 4.4. Registry Properties

Property Name Required Property Template Description
CacheTypeMap
Yes
cacheName:className[;pkFieldName] [,cacheName:className[;pkFieldName]..]
Map the root Java Object class name to the cache, and identify which attribute is the primary key to the cache.
module
No
Nil
Specify the JBoss AS module that contains the cache classes that were defined in CacheTypeMap
CacheJndiName
No
Nil
JNDI name to find the CacheContainer
RemoteServerList
No
host:port[;host:port….]
Specify the host and ports that will be clustered together to access the caches defined in CacheTypeMap
ConfigurationFileNameForLocalCache
No
Nil
The Infinispan Configuration xml file for configuring a local cache.
HotRodClientPropertiesFile
No
Nil
The HotRod properties file for configuring a connection to a remote cache
There are many ways to create the data source, using CLI, AdminShell, admin-console and so forth. The example shown below uses the CLI tool, as this works in both Standalone and Domain modes:
Execute the following command using the CLI once you have connected to the server. Make sure you provide the correct directory name and other properties below. Add any additional properties required by the connector by duplicating the "connection-definitions" command below. Edit the JNDI name to match the JNDI name you used in the VDB:
batch 
/subsystem=resource-adapters/resource-adapter=infinispan/connection-definitions=infinispanDS:add(jndi-name=java:/infinispanDS, class-name=org.teiid.resource.adapter.infinispan.InfinispanManagedConnectionFactory, enabled=true, use-java-context=true) 
/subsystem=resource-adapters/resource-adapter=infinispan/connection-definitions=infinispanDS/config-properties=CacheTypeMap:add(value=trades:org.somewhere.Trade;tradeId) 
/subsystem=resource-adapters/resource-adapter=infinispan/connection-definitions=infinispanDS/config-properties=Module:add(value=org.somewhere) 
/subsystem=resource-adapters/resource-adapter=infinispan/connection-definitions=infinispanDS/config-properties=CacheJndiName:add(value=java:/myCache)
runbatch 
To find out all the properties that are supported by this Infinispan Connector execute the following command in the CLI:
/subsystem=teiid:read-rar-description(rar-name=infinispan)

Important

Avoid Classloading Issues: If you are using a servlet or other type of web application to create the DefaultCacheManager for the cache, be sure not to include the Infinispan jar dependencies in the application, but add their module dependencies instead.
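For example, one way to declare the module dependency from a web application (a sketch, assuming the standard EAP jboss-deployment-structure.xml mechanism and that your installation provides the cache classes through the org.infinispan module) is:
<jboss-deployment-structure>
    <deployment>
        <dependencies>
            <!-- reference the module instead of bundling Infinispan jars in WEB-INF/lib -->
            <module name="org.infinispan"/>
        </dependencies>
    </deployment>
</jboss-deployment-structure>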

4.9.7. Infinispan-DSL Data Sources

You must install the Red Hat JBoss Data Grid modules in order to use this resource adapter. Note that this translator uses different JBoss Data Grid modules for remote and local modes. Please refer to the JBoss Data Grid documentation at https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Data_Grid/6.4/html/Infinispan_Query_Guide/chap-The_Infinispan_Query_DSL.html for more information about this. This JBoss Data Grid guide contains some basic information on using the Infinispan Query DSL. The jar required for library mode is infinispan-embedded-query, and in remote client-server mode the .jar is infinispan-query, which is installed by default with jboss-datagrid-server.
Infinispan-DSL data sources use a Teiid-specific JCA connector that is deployed into EAP during installation. This connector can be configured to support the following modes of Infinispan caches that will be accessed using the Hot Rod client:

Table 4.5. Properties

Cache Type Property Name How to Obtain Cache
Remote Cache
CacheJndiName
JNDI
Remote Cache
RemoteServerList
Server list: specify one or more host ports
Remote Cache
HotRodClientPropertiesFile
HotRod client properties file
This requires JDG 6.3 or later, configured to use Google Protobuf for object serialization. This enables Teiid to query the cache as if it were a local cache configured with Lucene indexes.
You can configure the following properties for this connector:

Table 4.6. Properties

Property Required? Property Template Description
CacheTypeMap
Yes
cacheName:className[;pkFieldName] [,cacheName:className[;pkFieldName]..]
Map the root Java Object class name to the cache, and identify which attribute is the primary key to the cache.
ProtobinFile
Yes
Nil
Path to the Google Protobin file that's packaged in a jar (such as /quickstart/addressbook.protobin)
MessageMarshallers
Yes
marshaller [,marshaller,..]
Contains class names mapped to their respective message marshallers, in the format class:marshaller[,class:marshaller,..], that are to be registered for serialization
MessageDescriptor
Yes
Nil
Message descriptor class name for the root object in cache
module
No
Nil
Specify the JBoss EAP module that contains the cache classes that need to be loaded.
CacheJndiName
No
Nil
JNDI name to find the CacheContainer
RemoteServerList
No
host:port[;host:port….]
Specify the host and ports that will be clustered together to access the caches.
HotRodClientPropertiesFile
No
Nil
The HotRod properties file for configuring a connection to a remote cache
Here is the XML code for a resource-adapter that is used to connect to the JDG remote-query quick start:
<resource-adapter id="infinispanRemQS">
     <module slot="main" id="org.jboss.teiid.resource-adapter.infinispan.dsl"/>
     <connection-definitions>
         <connection-definition class-name="org.teiid.resource.adapter.infinispan.dsl.InfinispanManagedConnectionFactory" jndi-name="java:/infinispanRemote" enabled="true" use-java-context="true" pool-name="infinispanDS">
             <config-property name="CacheTypeMap">
                 addressbook:org.jboss.as.quickstarts.datagrid.hotrod.query.domain.Person;id
             </config-property>
             <config-property name="ProtobinFile">
                  /quickstart/addressbook.protobin
             </config-property>
             <config-property name="MessageDescriptor">
                 quickstart.Person
             </config-property>
             <config-property name="Module">
                 com.client.quickstart.pojos
             </config-property>
             <config-property name="MessageMarshallers">
                 org.jboss.as.quickstarts.datagrid.hotrod.query.domain.Person:org.jboss.as.quickstarts.datagrid.hotrod.query.marshallers.PersonMarshaller,org.jboss.as.quickstarts.datagrid.hotrod.query.domain.PhoneNumber:org.jboss.as.quickstarts.datagrid.hotrod.query.marshallers.PhoneNumberMarshaller,org.jboss.as.quickstarts.datagrid.hotrod.query.domain.PhoneType:org.jboss.as.quickstarts.datagrid.hotrod.query.marshallers.PhoneTypeMarshaller
             </config-property>
             <config-property name="RemoteServerList">
                 127.0.0.1:11322
             </config-property>
         </connection-definition>
     </connection-definitions>
 </resource-adapter>

4.9.8. JDG HotRod Data Sources

Red Hat JBoss Data Grid HotRod data sources use a Red Hat JBoss Data Virtualization-specific JCA connector that is deployed upon installation. You can configure this to support the following cache types using the Hot Rod client:

Table 4.7. Cache Types

Cache Type Property Name Obtain Cache By
Remote Cache CacheJndiName Using JNDI
Remote Cache RemoteServerList Server list; specify one or more host:port entries
Remote Cache HotRodClientPropertiesFile HotRod client properties file
  • (option 1) JDG 6.2 or later - this requires you to provide a protobuf definition file and a pojo marshaller for the pojo to be cached.
  • (option 2) JDG 6.6 or later - this can be used when the pojo has protobuf annotations, which trigger the creation of the protobuf definition and pojo marshaller by JDG.
The pojo class is the object that will be used to store the data in the cache. It should be built as follows:
  • To take advantage of cache indexing, annotate the class. See JDG Protobuf Annotations at https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Data_Grid/6.6/html-single/Infinispan_Query_Guide/index.html#Custom_Fields_Indexing_with_Protobuf
  • The class should be packaged into a jar so that it can be deployed as a module.
To configure this pojo, deploy the pojo jar as a module in the Red Hat JBoss EAP server. Then define the "lib" property in the -vdb.xml and assign the correct module name. You can do so using this template:
<property name ="lib" value ="{pojo_module_name}"></property>
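The pojo jar itself can be deployed as a module with the CLI module command. This is a sketch only: the module name matches the Module value used in the materialization example later in this section, while the jar path and dependency list are hypothetical and must reflect your own classes:
module add --name=com.client.quickstart.addressbook.pojos --resources=/path/to/addressbook-pojos.jar --dependencies=javax.api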
These are required for reading and writing to and from the cache:

Table 4.8. Required Properties

Property Name Property Template Description
CacheTypeMap cacheName:className[;pkFieldName[:cacheKeyJavaType]] For the indicated cache, map the root Java Object class name. Optionally, but required for updates, identify which class attribute is the primary key to the cache. Identify primary key java type when different than class attribute type.
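For example, the resource adapter samples later in this section map the addressbook cache to the Person quick start class, using id as the primary key:
addressbook:org.jboss.as.quickstarts.datagrid.hotrod.query.domain.Person;id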
Set these properties to define how the RemoteCacheManager will be created/accessed:

Table 4.9. Configuration Properties

Property Name Required? Property Template Description
CacheJndiName No NA This is the JNDI name to find the CacheContainer.
RemoteServerList No host:port[;host:port…] Specify the host and ports that will be clustered together to access the caches.
HotRodClientPropertiesFile No NA The HotRod properties file for configuring a connection to a remote cache.
The following properties are required when the pojo is not annotated:

Table 4.10. Configuration Properties

Property Name Required? Property Template Description
ProtobinFile Yes NA Path to the Google Protobin file that’s packaged in a jar (ex: /quickstart/addressbook.protobin)
MessageMarshallers Yes marshaller[,marshaller,..] Contains class names mapped to their respective message marshallers, in the format class:marshaller[,class:marshaller,..], that are to be registered for serialization.
MessageDescriptor Yes NA Message descriptor class name for the root object in cache.
These are the additional properties that need to be configured if you are using the Remote Cache for external materialization:

Table 4.11. Configuration Properties

Property Name Required? Description
StagingCacheName Yes Cache name for the staging cache used in materialization
AliasCacheName Yes Cache name for the alias cache used in tracking aliasing of the caches used in materialization
There are many ways to create the data source, including using CLI, AdminShell, admin-console and so forth. The first example shows a resource-adapter's XML code. This code is used to connect to the JDG remote-query quick start:
	<resource-adapter id="infinispanRemQS">
                    <module slot="main" id="org.jboss.teiid.resource-adapter.infinispan.hotrod"/>
                    <connection-definitions>
                        <connection-definition class-name="org.teiid.resource.adapter.infinispan.hotrod.InfinispanManagedConnectionFactory" jndi-name="java:/infinispanRemote" enabled="true" use-java-context="true" pool-name="infinispanDS">
                            <config-property name="CacheTypeMap">
                                addressbook:org.jboss.as.quickstarts.datagrid.hotrod.query.domain.Person;id
                            </config-property>
                            <config-property name="ProtobinFile">
                                 /quickstart/addressbook.protobin
                            </config-property>
                            <config-property name="MessageDescriptor">
                                quickstart.Person
                            </config-property>
                            <config-property name="Module">
                                com.client.quickstart.pojos
                            </config-property>
                            <config-property name="MessageMarshallers">
                                org.jboss.as.quickstarts.datagrid.hotrod.query.domain.Person:org.jboss.as.quickstarts.datagrid.hotrod.query.marshallers.PersonMarshaller,org.jboss.as.quickstarts.datagrid.hotrod.query.domain.PhoneNumber:org.jboss.as.quickstarts.datagrid.hotrod.query.marshallers.PhoneNumberMarshaller,org.jboss.as.quickstarts.datagrid.hotrod.query.domain.PhoneType:org.jboss.as.quickstarts.datagrid.hotrod.query.marshallers.PhoneTypeMarshaller
                            </config-property>
                            <config-property name="RemoteServerList">
                                127.0.0.1:11322
                            </config-property>
                        </connection-definition>
                    </connection-definitions>
                </resource-adapter>
	
This example shows how to configure it for external materialization:
                <resource-adapter id="infinispanRemQSDSL">
                    <module slot="main" id="org.jboss.teiid.resource-adapter.infinispan.hotrod"/>
                    <connection-definitions>
                        <connection-definition class-name="org.teiid.resource.adapter.infinispan.hotrod.InfinispanManagedConnectionFactory" jndi-name="java:/infinispanRemoteDSL" enabled="true" use-java-context="true" pool-name="infinispanRemoteDSL">
                            <config-property name="CacheTypeMap">
                                addressbook_indexed:org.jboss.as.quickstarts.datagrid.hotrod.query.domain.Person;id
                            </config-property>
                            <config-property name="StagingCacheName">
                                addressbook_indexed_mat
                            </config-property>
                            <config-property name="AliasCacheName">
                                aliasCache
                            </config-property>
                            <config-property name="Module">
                                com.client.quickstart.addressbook.pojos
                            </config-property>
                            <config-property name="RemoteServerList">
                                127.0.0.1:11322
                            </config-property>
                        </connection-definition>
                    </connection-definitions>
                </resource-adapter>
	

4.9.9. JDBC Data Sources

4.9.9.1. JDBC Data Sources

Here is an example showing how to configure an Oracle data source. The process is nearly identical regardless of the database vendor; typically only the JDBC driver jar and configuration details such as the connection URL and user credentials change.
There are configuration templates for all the data sources in the EAP_HOME/docs/teiid/datasources directory. A complete description of how to add a data source to JBoss EAP is also provided there. Here are two different ways to create a datasource:
First, deploy the required JDBC jar file. For example, if you are trying to create an Oracle data source, first deploy the "ojdbc6.jar" file: deploy /path/to/ojdbc6.jar

Note

If JBoss EAP is running in standalone mode, you can also manually copy the "ojdbc6.jar" file to the EAP_HOME/standalone/deployments directory to deploy it automatically without using the CLI tool.
Now create a data source using this driver. There are many ways to create the datasource using CLI, AdminShell or admin-console. The example shown below uses the CLI tool, as this works in both Standalone and Domain modes.
Execute the following command using the CLI once you have connected to the server. Make sure you provide the correct URL and user credentials, and edit the JNDI name to match the JNDI name you used in the VDB:
/subsystem=datasources/data-source=oracle-ds:add(jndi-name=java:/OracleDS, driver-name=ojdbc6.jar, connection-url=jdbc:oracle:thin:@{host}:1521:orcl, user-name={user}, password={password}) 
/subsystem=datasources/data-source=oracle-ds:enable

4.9.9.2. Configuring JDBC Data Sources

For JBoss Data Virtualization to connect to JDBC data sources, you must configure the data source in the JBoss EAP instance with the following two steps:
  1. Install (or deploy) the JDBC JAR driver file.
  2. Create (or configure) it as a data source in JBoss EAP.
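Using the Oracle example from the previous section, these two steps correspond to the following CLI commands (a sketch; adjust the jar path, connection URL, and credentials for your environment):
deploy /path/to/ojdbc6.jar 
/subsystem=datasources/data-source=oracle-ds:add(jndi-name=java:/OracleDS, driver-name=ojdbc6.jar, connection-url=jdbc:oracle:thin:@{host}:1521:orcl, user-name={user}, password={password}) 
/subsystem=datasources/data-source=oracle-ds:enable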

Note

See the Red Hat JBoss Enterprise Application Platform Administration and Configuration Guide for more comprehensive information about configuring data sources.

4.9.9.3. Example Configuration

Example configuration, as it would appear in the server configuration file, can be found in the EAP_HOME/docs/teiid/datasources/ directory.

Note

This configuration will need to be adjusted with file paths and properties appropriate to your installation. The JNDI name must be the same JNDI name used in the VDB.

4.9.9.4. Install a JDBC Driver with the Management CLI

Procedure 4.5. Install a JDBC Driver with the Management CLI

  1. Start the Management CLI: EAP_HOME/bin/jboss-cli.sh
  2. Enter the connect command.
  3. Enter the deploy PATH/FILE.jar command: deploy ojdbc6.jar
  4. Enter the quit command.

4.9.9.5. Install a JDBC Driver with the Management Console

Summary

Before your application can connect to a JDBC datasource, your datasource vendor's JDBC drivers need to be installed in a location where JBoss EAP 6 can use them. JBoss EAP 6 allows you to deploy these drivers like any other deployment. This means that you can deploy them across multiple servers in a server group, if you use a managed domain.

Prerequisites

Before performing this task, you need to meet the following prerequisites:

  • Download the JDBC driver from your database vendor.

Note

Any JDBC 4-compliant driver is automatically recognized and installed in the system by name and version. A JDBC JAR is identified using the Java service provider mechanism. Such JARs contain a META-INF/services/java.sql.Driver file, which lists the names of the Driver classes in that JAR.

Procedure 4.6. Modify the JDBC Driver JAR

If the JDBC driver JAR is not JDBC 4-compliant, it can be made deployable using the following method.
  1. Change to, or create, an empty temporary directory.
  2. Create a META-INF subdirectory.
  3. Create a META-INF/services subdirectory.
  4. Create a META-INF/services/java.sql.Driver file, which contains one line indicating the fully-qualified class name of the JDBC driver.
  5. Use the JAR command-line tool to update the JAR like this:
    jar -uf jdbc-driver.jar META-INF/services/java.sql.Driver
  6. If you use a managed domain, deploy the JAR file to a server group. Otherwise, deploy it to your server. See the Deploy with the Management Console section of the Administration and Configuration Guide for JBoss EAP 6 for more information.
Result:

The JDBC driver is deployed, and is available for your applications to use.
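As an illustration, steps 2 through 5 of the procedure above can be run from a shell as follows. This is a sketch: com.example.jdbc.Driver is a hypothetical class name, so substitute your driver's fully-qualified class and adjust the jar path.
mkdir -p META-INF/services
# hypothetical driver class name; use your driver's fully-qualified class
echo "com.example.jdbc.Driver" > META-INF/services/java.sql.Driver
jar -uf /path/to/jdbc-driver.jar META-INF/services/java.sql.Driver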

4.9.9.6. Create a Non-XA Datasource with the Management Interfaces

Summary

This topic covers the steps required to create a non-XA datasource, using either the Management Console or the Management CLI.

Prerequisites

  • The JBoss EAP 6 server must be running.

Note

Prior to version 10.2 of the Oracle datasource, the <no-tx-separate-pools/> parameter was required, as mixing non-transactional and transactional connections would result in an error. This parameter may no longer be required for certain applications.

Note

To prevent issues such as duplication of driver listing, selected driver not available in a profile, or driver not displayed if a server for the profile is not running, in JBoss EAP 6.4 onwards, only JDBC drivers that are installed as modules and correctly referenced from profiles are detectable while creating a datasource using the Management Console in domain mode.

Procedure 4.7. Create a Datasource using either the Management CLI or the Management Console

    • Management CLI

      1. Launch the CLI tool and connect to your server.
      2. Run the following Management CLI command to create a non-XA datasource, configuring the variables as appropriate:

        Note

        The value for DRIVER_NAME depends on the number of classes listed in the /META-INF/services/java.sql.Driver file located in the JDBC driver JAR. If there is only one class, the value is the name of the JAR. If there are multiple classes, the value is the name of the JAR + driverClassName + "_" + majorVersion + "_" + minorVersion. If the driver name is specified incorrectly, the following error is logged:
        JBAS014775:    New missing/unsatisfied dependencies
        For example, the DRIVER_NAME value required for the MySQL 5.1.31 driver is mysql-connector-java-5.1.31-bin.jarcom.mysql.jdbc.Driver_5_1.
        data-source add --name=DATASOURCE_NAME --jndi-name=JNDI_NAME --driver-name=DRIVER_NAME  --connection-url=CONNECTION_URL
      3. Enable the datasource:
        data-source enable --name=DATASOURCE_NAME
    • Management Console

      1. Login to the Management Console.
      2. Navigate to the Datasources panel in the Management Console

        1. Select the Configuration tab from the top of the console.
        2. For Domain mode only, select a profile from the drop-down box in the top left.
        3. Expand the Subsystems menu on the left of the console, then expand the Connector menu.
        4. Select Datasources from the menu on the left of the console.
      3. Create a new datasource

        1. Click Add at the top of the Datasources panel.
        2. Enter the new datasource attributes in the Create Datasource wizard and proceed with the Next button.
        3. Enter the JDBC driver details in the Create Datasource wizard and click Next to continue.
        4. Enter the connection settings in the Create Datasource wizard.
        5. Click the Test Connection button to test the connection to the datasource and verify the settings are correct.
        6. Click Done to finish
Result

The non-XA datasource has been added to the server. It is now visible in either the standalone.xml or domain.xml file, as well as the management interfaces.
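As a concrete illustration of the CLI path, the following creates and enables a non-XA datasource for the MySQL driver named in the note above (a sketch; the datasource name, JNDI name, host, database name, and credentials are placeholders to adapt):
data-source add --name=MySqlDS --jndi-name=java:/MySqlDS --driver-name=mysql-connector-java-5.1.31-bin.jarcom.mysql.jdbc.Driver_5_1 --connection-url=jdbc:mysql://{host}:3306/{db-name} --user-name={user} --password={password}
data-source enable --name=MySqlDS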

4.9.9.7. Create an XA Datasource with the Management Interfaces

Summary

This topic covers the steps required to create an XA datasource, using either the Management Console or the Management CLI.

Note

Prior to version 10.2 of the Oracle datasource, the <no-tx-separate-pools/> parameter was required, as mixing non-transactional and transactional connections would result in an error. This parameter may no longer be required for certain applications.

Procedure 4.8. Create an XA Datasource, Using Either the Management CLI or the Management Console

    • Management CLI

      1. Run the following Management CLI command to create an XA datasource, configuring the variables as appropriate:

        Note

        The value for DRIVER_NAME depends on the number of classes listed in the /META-INF/services/java.sql.Driver file located in the JDBC driver JAR. If there is only one class, the value is the name of the JAR. If there are multiple classes, the value is the name of the JAR + driverClassName + "_" + majorVersion + "_" + minorVersion. If the driver name is specified incorrectly, the following error is logged:
        JBAS014775:    New missing/unsatisfied dependencies
        For example, the DRIVER_NAME value required for the MySQL 5.1.31 driver is mysql-connector-java-5.1.31-bin.jarcom.mysql.jdbc.Driver_5_1.
        xa-data-source add --name=XA_DATASOURCE_NAME --jndi-name=JNDI_NAME --driver-name=DRIVER_NAME --xa-datasource-class=XA_DATASOURCE_CLASS
      2. Configure the XA datasource properties

        1. Set the server name

          Run the following command to configure the server name for the host:
          /subsystem=datasources/xa-data-source=XA_DATASOURCE_NAME/xa-datasource-properties=ServerName:add(value=HOSTNAME)
        2. Set the database name

          Run the following command to configure the database name:
          /subsystem=datasources/xa-data-source=XA_DATASOURCE_NAME/xa-datasource-properties=DatabaseName:add(value=DATABASE_NAME)
      3. Enable the datasource:
        xa-data-source enable --name=XA_DATASOURCE_NAME
    • Management Console

      1. Navigate to the Datasources panel in the Management Console

        1. Select the Configuration tab from the top of the console.
        2. For Domain mode only, select a profile from the drop-down box at the top left.
        3. Expand the Subsystems menu on the left of the console, then expand the Connector menu.
        4. Select Datasources.
      2. Select the XA Datasource tab.
      3. Create a new XA datasource

        1. Click Add.
        2. Enter the new XA datasource attributes in the Create XA Datasource wizard and click Next.
        3. Enter the JDBC driver details in the Create XA Datasource wizard and click Next.
        4. Enter the XA properties and click Next.
        5. Enter the connection settings in the Create XA Datasource wizard.
        6. Click the Test Connection button to test the connection to the XA datasource and verify the settings are correct.
        7. Click Done to finish
Result

The XA datasource has been added to the server. It is now visible in either the standalone.xml or domain.xml file, as well as the management interfaces.
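For example, an XA datasource for the same MySQL driver might be created as follows (a sketch; com.mysql.jdbc.jdbc2.optional.MysqlXADataSource is the Connector/J 5.x XA datasource class, and the names, host, and database are placeholders to adapt):
xa-data-source add --name=MySqlXADS --jndi-name=java:/MySqlXADS --driver-name=mysql-connector-java-5.1.31-bin.jarcom.mysql.jdbc.Driver_5_1 --xa-datasource-class=com.mysql.jdbc.jdbc2.optional.MysqlXADataSource
/subsystem=datasources/xa-data-source=MySqlXADS/xa-datasource-properties=ServerName:add(value={host})
/subsystem=datasources/xa-data-source=MySqlXADS/xa-datasource-properties=DatabaseName:add(value={db-name})
xa-data-source enable --name=MySqlXADS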

4.9.9.8. Datasource Parameters

Table 4.12. Datasource parameters common to non-XA and XA datasources

Parameter Description
jndi-name The unique JNDI name for the datasource.
pool-name The name of the management pool for the datasource.
enabled Whether or not the datasource is enabled.
use-java-context
Whether to bind the datasource to global JNDI.
spy
Enable spy functionality on the JDBC layer. This logs all JDBC traffic to the datasource. Note that the logging category jboss.jdbc.spy must also be set to the log level DEBUG in the logging subsystem.
use-ccm Enable the cached connection manager.
new-connection-sql A SQL statement which executes when the connection is added to the connection pool.
transaction-isolation
One of the following:
  • TRANSACTION_READ_UNCOMMITTED
  • TRANSACTION_READ_COMMITTED
  • TRANSACTION_REPEATABLE_READ
  • TRANSACTION_SERIALIZABLE
  • TRANSACTION_NONE
url-selector-strategy-class-name A class that implements interface org.jboss.jca.adapters.jdbc.URLSelectorStrategy.
security
Contains child elements which are security settings. See Table 4.17, “Security parameters”.
validation
Contains child elements which are validation settings. See Table 4.18, “Validation parameters”.
timeout
Contains child elements which are timeout settings. See Table 4.19, “Timeout parameters”.
statement
Contains child elements which are statement settings. See Table 4.20, “Statement parameters”.

Table 4.13. Non-XA datasource parameters

Parameter Description
jta Enable JTA integration for non-XA datasources. Does not apply to XA datasources.
connection-url The JDBC driver connection URL.
driver-class The fully-qualified name of the JDBC driver class.
connection-property
Arbitrary connection properties passed to the method Driver.connect(url,props). Each connection-property specifies a string name/value pair. The property name comes from the name, and the value comes from the element content.
pool
Contains child elements which are pooling settings. See Table 4.15, “Pool parameters common to non-XA and XA datasources”.
url-delimiter
The delimiter for URLs in a connection-url for High Availability (HA) clustered databases.

Table 4.14. XA datasource parameters

Parameter Description
xa-datasource-property
A property to assign to implementation class XADataSource. Specified by name=value. If a setter method exists, in the format setName, the property is set by calling a setter method in the format of setName(value).
xa-datasource-class
The fully-qualified name of the implementation class javax.sql.XADataSource.
driver
A unique reference to the class loader module which contains the JDBC driver. The accepted format is driverName#majorVersion.minorVersion.
xa-pool
Contains child elements which are pooling settings. See Table 4.15, “Pool parameters common to non-XA and XA datasources” and Table 4.16, “XA pool parameters”.
recovery
Contains child elements which are recovery settings. See Table 4.21, “Recovery parameters”.

Table 4.15. Pool parameters common to non-XA and XA datasources

Parameter Description
min-pool-size The minimum number of connections a pool holds.
max-pool-size The maximum number of connections a pool can hold.
prefill Whether to try to prefill the connection pool. An empty element denotes a true value. The default is false.
use-strict-min Whether the idle connection scan should strictly stop marking for closure of any further connections, once the min-pool-size has been reached. The default value is false.
flush-strategy
Whether the pool is flushed in the case of an error. Valid values are:
  • FailingConnectionOnly
  • IdleConnections
  • EntirePool
The default is FailingConnectionOnly.
allow-multiple-users Specifies if multiple users will access the datasource through the getConnection(user, password) method, and whether the internal pool type accounts for this behavior.

Table 4.16. XA pool parameters

Parameter Description
is-same-rm-override Whether the javax.transaction.xa.XAResource.isSameRM(XAResource) method returns true or false.
interleaving Whether to enable interleaving for XA connection factories.
no-tx-separate-pools
Whether to create separate sub-pools for each context. This is required for Oracle datasources, which do not allow XA connections to be used both inside and outside of a JTA transaction.
Using this option will cause your total pool size to be twice max-pool-size, because two actual pools will be created.
pad-xid Whether to pad the Xid.
wrap-xa-resource
Whether to wrap the XAResource in an org.jboss.tm.XAResourceWrapper instance.

Table 4.17. Security parameters

Parameter Description
user-name The username to use to create a new connection.
password The password to use to create a new connection.
security-domain Contains the name of a JAAS security-manager which handles authentication. This name correlates to the application-policy/name attribute of the JAAS login configuration.
reauth-plugin Defines a reauthentication plug-in to use to reauthenticate physical connections.

Table 4.18. Validation parameters

Parameter Description
valid-connection-checker
An implementation of interface org.jboss.jca.adapters.jdbc.ValidConnectionChecker which provides a SQLException isValidConnection(Connection e) method to validate a connection. An exception means the connection is destroyed. This overrides the parameter check-valid-connection-sql if it is present.
check-valid-connection-sql An SQL statement to check validity of a pool connection. This may be called when a managed connection is taken from a pool for use.
validate-on-match
Indicates whether connection level validation is performed when a connection factory attempts to match a managed connection for a given set.
Specifying "true" for validate-on-match is typically not done in conjunction with specifying "true" for background-validation. Validate-on-match is needed when a client must have a connection validated prior to use. This parameter is false by default.
background-validation
Specifies that connections are validated on a background thread. Background validation is a performance optimization when not used with validate-on-match. If validate-on-match is true, using background-validation could result in redundant checks. Background validation does leave open the opportunity for a bad connection to be given to the client for use (a connection goes bad between the time of the validation scan and prior to being handed to the client), so the client application must account for this possibility.
background-validation-millis The amount of time, in milliseconds, that background validation runs.
use-fast-fail
If true, fail a connection allocation on the first attempt, if the connection is invalid. Defaults to false.
stale-connection-checker
An instance of org.jboss.jca.adapters.jdbc.StaleConnectionChecker which provides a Boolean isStaleConnection(SQLException e) method. If this method returns true, the exception is wrapped in an org.jboss.jca.adapters.jdbc.StaleConnectionException, which is a subclass of SQLException.
exception-sorter
An instance of org.jboss.jca.adapters.jdbc.ExceptionSorter which provides a Boolean isExceptionFatal(SQLException e) method. This method validates whether an exception is broadcast to all instances of javax.resource.spi.ConnectionEventListener as a connectionErrorOccurred message.

Table 4.19. Timeout parameters

Parameter Description
use-try-lock Uses tryLock() instead of lock(). This attempts to obtain the lock for the configured number of seconds, before timing out, rather than failing immediately if the lock is unavailable. Defaults to 60 seconds. As an example, to set a timeout of 5 minutes, set <use-try-lock>300</use-try-lock>.
blocking-timeout-millis The maximum time, in milliseconds, to block while waiting for a connection. After this time is exceeded, an exception is thrown. This blocks only while waiting for a permit for a connection, and does not throw an exception if creating a new connection takes a long time. Defaults to 30000, which is 30 seconds.
idle-timeout-minutes
The maximum time, in minutes, before an idle connection is closed. The actual maximum time depends upon the idleRemover scan time, which is half of the smallest idle-timeout-minutes of any pool.
set-tx-query-timeout
Whether to set the query timeout based on the time remaining until transaction timeout. Any configured query timeout is used if no transaction exists. Defaults to false.
query-timeout Timeout for queries, in seconds. The default is no timeout.
allocation-retry The number of times to retry allocating a connection before throwing an exception. The default is 0, so an exception is thrown upon the first failure.
allocation-retry-wait-millis
How long, in milliseconds, to wait before retrying to allocate a connection. The default is 5000, which is 5 seconds.
xa-resource-timeout
If non-zero, this value is passed to method XAResource.setTransactionTimeout.

Table 4.20. Statement parameters

Parameter Description
track-statements
Whether to check for unclosed statements when a connection is returned to a pool and a statement is returned to the prepared statement cache. If false, statements are not tracked.

Valid values

  • true: statements and result sets are tracked, and a warning is issued if they are not closed.
  • false: neither statements nor result sets are tracked.
  • nowarn: statements are tracked but no warning is issued. This is the default.
prepared-statement-cache-size The number of prepared statements per connection, in a Least Recently Used (LRU) cache.
share-prepared-statements
Whether asking for the same statement twice without closing it uses the same underlying prepared statement. The default is false.

Table 4.21. Recovery parameters

Parameter Description
recover-credential A username/password pair or security domain to use for recovery.
recover-plugin
An implementation of the org.jboss.jca.core.spi.recovery.RecoveryPlugin class, to be used for recovery.

4.9.9.9. JDBC Driver Download Locations

The following table gives the standard download locations for JDBC drivers of common databases used with JBoss EAP 6.

Note

These links point to third-party websites which are not controlled or actively monitored by Red Hat. For the most up-to-date drivers for your database, check your database vendor's documentation and website.

4.9.10. LDAP Data Sources

LDAP data sources use a Data Virtualization-specific JCA connector which is deployed into EAP during installation. There are many ways to create the ldap data source, using CLI, AdminShell, admin-console etc. The example shown below uses the CLI tool, as this works in both Standalone and Domain modes.
Execute the following command using the CLI once you have connected to the server. Make sure you provide the correct URL and user credentials. Add any additional properties required by the connector by duplicating the "connection-definitions" command below. Edit the JNDI name to match the JNDI name you used in the VDB:
batch 
/subsystem=resource-adapters/resource-adapter=ldap/connection-definitions=ldapDS:add(jndi-name=java:/ldapDS, class-name=org.teiid.resource.adapter.ldap.LDAPManagedConnectionFactory, enabled=true, use-java-context=true) 
/subsystem=resource-adapters/resource-adapter=ldap/connection-definitions=ldapDS/config-properties=LdapUrl:add(value=ldap://ldapServer:389) 
/subsystem=resource-adapters/resource-adapter=ldap/connection-definitions=ldapDS/config-properties=LdapAdminUserDN:add(value={cn=???,ou=???,dc=???}) 
/subsystem=resource-adapters/resource-adapter=ldap/connection-definitions=ldapDS/config-properties=LdapAdminUserPassword:add(value={pass}) 
/subsystem=resource-adapters/resource-adapter=ldap/connection-definitions=ldapDS/config-properties=LdapTxnTimeoutInMillis:add(value=-1) 
/subsystem=resource-adapters/resource-adapter=ldap:activate 
runbatch
To find out all the properties that are supported by this LDAP Connector execute the following command in the CLI:
/subsystem=teiid:read-rar-description(rar-name=ldap)

4.9.11. MongoDB Data Sources

The MongoDB data sources use a Data Virtualization-specific JCA connector that is deployed into EAP during installation. There are many ways to create a MongoDB data source, using CLI, AdminShell, admin-console and so forth. The example shown below uses the CLI tool, as this works in both Standalone and Domain modes.
Execute the following command using the CLI once you have connected to the server. Make sure you provide the correct URL and user credentials. Add any additional properties required by the connector by duplicating the "connection-definitions" command below. Edit the JNDI name to match the JNDI name you used in the VDB:
batch 
/subsystem=resource-adapters/resource-adapter=mongodb/connection-definitions=mongodbDS:add(jndi-name="java:/mongoDS", class-name=org.teiid.resource.adapter.mongodb.MongoDBManagedConnectionFactory, enabled=true, use-java-context=true) 
/subsystem=resource-adapters/resource-adapter=mongodb/connection-definitions=mongodbDS/config-properties=RemoteServerList:add(value="{host}:27017") 
/subsystem=resource-adapters/resource-adapter=mongodb/connection-definitions=mongodbDS/config-properties=Database:add(value="{db-name}") 
/subsystem=resource-adapters/resource-adapter=mongodb:activate 
runbatch
These are the properties that are defined in the RAR file:

Table 4.23. Properties

Property Description Required? Default
RemoteServerList
A comma-separated list of server locations. Each location can contain an optional port, of the format host:port
False.
Not applicable.
Username
Connection User's Name
False.
None.
Password
Connection User's password
False.
None.
Database
MongoDB database name
True.
None.
To find out all the properties that are supported by this MongoDB Connector execute the following command in the CLI:
/subsystem=teiid:read-rar-description(rar-name=mongodb)

4.9.12. Apache Phoenix Data Source

The following example shows how to set up a Phoenix data source, which is needed before you can use the Apache HBase Translator.
There are configuration templates for Phoenix data sources in the EAP_HOME/docs/teiid/datasources directory. A complete description of how to add a data source to Red Hat JBoss EAP is also provided there.
Configuring a Phoenix data source is nearly identical to configuring a JDBC data source. The first step is deploying the Phoenix driver jar. Use the following CLI commands to deploy the Phoenix driver:
module add --name=org.apache.phoenix --resources=/path/to/phoenix-[version]-client.jar --dependencies=javax.api,sun.jdk,org.apache.log4j,javax.transaction.api 
/subsystem=datasources/jdbc-driver=phoenix:add(driver-name=phoenix,driver-module-name=org.apache.phoenix,driver-class-name=org.apache.phoenix.jdbc.PhoenixDriver)
The driver jar can be downloaded from the Apache Phoenix website.
Next, create the data source based on the driver deployed above, which again mirrors creating a JDBC data source. Use the following CLI command to create the data source:
/subsystem=datasources/data-source=phoenixDS:add(jndi-name=java:/phoenixDS, driver-name=phoenix, connection-url=jdbc:phoenix:{zookeeper quorum server}, enabled=true, use-java-context=true, user-name={user}, password={password})
Ensure the URL, driver, and other properties are configured correctly:
  • The JNDI name needs to match the JNDI name you used in the VDB.
  • The driver name needs to match the driver you deployed in the steps above.
  • The URL needs to match the HBase ZooKeeper quorum server. The format is jdbc:phoenix[:<zookeeper quorum>[:<port number>][:<root node>]], for example jdbc:phoenix:127.0.0.1:2181.
  • user-name/password - the user credentials for the Phoenix connection.
Mapping a Phoenix table to an existing HBase table takes two steps. The first step is installing phoenix-[version]-server.jar to the classpath of every HBase region server. An easy way to do this is to copy it into the HBase lib. (For more details see the Phoenix documentation.)
Next, you must execute the DDL to map a Phoenix table to an existing HBase table. The DDL can either be executed via the Phoenix command line or via JDBC.
The following is an example of mapping an existing HBase Customer table. The Customer table has two column families, customer and sales; the customer family has the column qualifiers name and city, and the sales family has the column qualifiers product and amount. Map this table to Phoenix via DDL:
 
CREATE TABLE IF NOT EXISTS "Customer"("ROW_ID" VARCHAR PRIMARY KEY, "customer"."city" VARCHAR, "customer"."name" VARCHAR, "sales"."amount" VARCHAR, "sales"."product" VARCHAR)
For more about mapping Phoenix table to an existing HBase table please refer to the Phoenix documentation.
Creating a new Phoenix table is just like mapping to an existing HBase table. Phoenix will create any metadata (table, column families) that do not exist. Similar to the above example the DDL to create the Phoenix/HBase Customer table would be:
CREATE TABLE IF NOT EXISTS "Customer"("ROW_ID" VARCHAR PRIMARY KEY, "customer"."city" VARCHAR, "customer"."name" VARCHAR, "sales"."amount" VARCHAR, "sales"."product" VARCHAR)

4.9.13. Salesforce Data Sources

Salesforce data sources use a Data Virtualization-specific JCA connector that is deployed into EAP during installation. There are many ways to create the Salesforce data source, using CLI, AdminShell, admin-console, and so forth. The example shown below uses the CLI tool, as this works in both Standalone and Domain modes.
Execute the following command using the CLI once you have connected to the server. Make sure you provide the correct URL and user credentials. Add any additional properties required by the connector by duplicating the "connection-definitions" command below. Edit the JNDI name to match the JNDI name you used in the VDB:
batch 
/subsystem=resource-adapters/resource-adapter=salesforce/connection-definitions=sfDS:add(jndi-name=java:/sfDS, class-name=org.teiid.resource.adapter.salesforce.SalesForceManagedConnectionFactory, enabled=true, use-java-context=true) 
/subsystem=resource-adapters/resource-adapter=salesforce/connection-definitions=sfDS/config-properties=URL:add(value=https://www.salesforce.com/services/Soap/u/22.0) 
/subsystem=resource-adapters/resource-adapter=salesforce/connection-definitions=sfDS/config-properties=username:add(value={user}) 
/subsystem=resource-adapters/resource-adapter=salesforce/connection-definitions=sfDS/config-properties=password:add(value={password}) 
/subsystem=resource-adapters/resource-adapter=salesforce:activate 
runbatch
To find out all the properties that are supported by this Salesforce Connector execute the following command in the CLI:
/subsystem=teiid:read-rar-description(rar-name=salesforce)

4.9.14. Solr Data Sources

Solr data sources use a Data Virtualization-specific JCA connector that is deployed into EAP during installation. There are many ways to create a Solr data source, using CLI, AdminShell, admin-console, etc. The example shown below uses the CLI tool, as this works in both Standalone and Domain modes.
Execute the following command using the CLI once you have connected to the server. Make sure you provide the correct URL and user credentials. Add any additional properties required by the connector by duplicating the "connection-definitions" command below. Edit the JNDI name to match the JNDI name you used in the VDB:
batch 
/subsystem=resource-adapters/resource-adapter=solr/connection-definitions=solrDS:add(jndi-name=java:/solrDS, class-name=org.teiid.resource.adapter.solr.SolrManagedConnectionFactory, enabled=true, use-java-context=true) 
/subsystem=resource-adapters/resource-adapter=solr/connection-definitions=solrDS/config-properties=url:add(value=http://localhost:8983/solr/) 
/subsystem=resource-adapters/resource-adapter=solr/connection-definitions=solrDS/config-properties=CoreName:add(value=collection1) 
/subsystem=resource-adapters/resource-adapter=solr:activate 
runbatch
To find out all the properties that are supported by this Solr Connector execute the following command in the CLI:
/subsystem=teiid:read-rar-description(rar-name=solr)

4.9.15. Integrate Apache Solr with Red Hat JBoss Data Virtualization

In this example, you will learn how to move some data from a CSV file into Apache Solr using Red Hat JBoss Data Virtualization. You will then issue queries to Apache Solr to obtain the contents of a search using Data Virtualization. Once the search results are available in Red Hat JBoss Data Virtualization, you can further enhance this data by integrating it with other data sources.

Prerequisites

  1. Create a temporary directory and copy "/solr/example/solr/collection1" into it.
  2. Rename "/solr/example/solr/collection1" to "/solr/example/solr/babynames".
  3. Delete the contents of "/solr/example/solr/babynames/data" directory (if there are any).
  4. Delete the "/solr/example/solr/babynames/core.properties" file.
  5. Open the "/solr/example/solr/babynames/conf/schema.xml" file in a text editor and delete all of the elements under "fields" and "copyFields".
  6. Add this under "fields":
        <field indexed="true" name="_version_" stored="true" type="long"/>  
        <field indexed="true" name="_root_" stored="false" type="string"/>  
        <field indexed="true" name="id" stored="true" type="string" required="true" multiValued="false" />  
        <field indexed="true" name="state" stored="true" type="string" omitNorms="true"/>  
        <field indexed="true" name="name" required="true" stored="true" type="string"/>  
        <field indexed="true" name="gender" stored="true" type="string" omitNorms="true"/>  
        <field indexed="true" name="birthyear" stored="true" type="int"/>  
        <field indexed="true" name="totalcount" stored="true" type="int"/>  
        <field indexed="true" name="text" stored="false" type="text_general" multiValued="true"/>  
    
    These fields represent the schema for the data that is present in the baby names CSV document from above. The schema.xml file defines the internal document structure for Solr.
  7. Save and close the file.
  8. Start Apache Solr by issuing these commands:
    cd "/solr/example"  
    java -jar start.jar  
    
  9. On the left-hand side, click on Core Admin. (If you see Unload, click it too.)
  10. Click Add Core, and supply "babies" as the name and "babynames" as instanceDir.
    Solr is now configured with a core that can store and search documents.
You now have a choice of building a VDB or a Dynamic VDB. First of all, you will learn how to make a VDB using Teiid Designer.
  1. Start Teiid Designer and switch to the Teiid Designer perspective.
  2. Ensure you have configured Red Hat JBoss Data Virtualization correctly and that you have a connection to it.
  3. Create a new "Teiid Model Project" and call it "BabyNames".
  4. Choose "File - import", then choose "Teiid Designer/File Source Flat - Source and View Model".
  5. Create a file-based model.
  6. You can preview the data by right-clicking on the "babies" table, and selecting "Modeling - Preview Data". Make sure you can do this successfully.
  7. Choose "File - import", then choose "Teiid Designer/Teiid Connection - Source Model" and click Next.
  8. In the data source selection dialog box, click "New".
  9. Ensure the Create Data Source dialog has "solr-ds" set as the name, "solr" as the driver, and "AllowCompression" set to true.
  10. Click Ok and then click Next.
  11. Make sure your "translator" is set to "solr", and click "Next".
  12. The dialog will show the DDL for the model (below), with the "babies" table. Click Next until you reach the Finish button.
    	CREATE FOREIGN TABLE babies (  
             id string OPTIONS (SEARCHABLE 'Searchable'),  
             birthyear integer OPTIONS (SEARCHABLE 'Searchable'),  
             name string OPTIONS (SEARCHABLE 'Searchable'),  
             state string OPTIONS (SEARCHABLE 'Searchable'),  
             gender string OPTIONS (SEARCHABLE 'Searchable'),  
            totalcount integer OPTIONS (SEARCHABLE 'Searchable')  
        ) OPTIONS (UPDATABLE TRUE);  
    
  13. Now build a VDB called "babynames" containing all of the models, and deploy it to the server by right-clicking it.
The other option mentioned above is to build a dynamic VDB. (Skip this procedure if you built a regular VDB instead.)
  1. Open the standalone.xml in your text editor.
  2. Add this XML under the "resource-adapter" section:
    <!-- Data Source for Flat File -->		
    			    <resource-adapter id="file-ds">  
                            <module slot="main" id="org.jboss.teiid.resource-adapter.file"/>  
                            <transaction-support>NoTransaction</transaction-support>  
                            <connection-definitions>  
                                <connection-definition class-name="org.teiid.resource.adapter.file.FileManagedConnectionFactory" jndi-name="java:/file-ds" enabled="true" pool-name="file-ds">  
                                    <config-property name="ParentDirectory">  
                                        /babynames  
                                    </config-property>  
                                </connection-definition>  
                            </connection-definitions>  
                        </resource-adapter>  
    
    <!-- Data Source for Solr --> 
                        <resource-adapter id="solr-ds">  
                            <module slot="main" id="org.jboss.teiid.resource-adapter.solr"/>  
                            <transaction-support>XATransaction</transaction-support>  
                            <connection-definitions>  
                                <connection-definition class-name="org.teiid.resource.adapter.solr.SolrManagedConnectionFactory" jndi-name="java:/solr-ds" enabled="true" pool-name="solr-ds">  
                                    <config-property name="CoreName">  
                                        babies  
                                    </config-property>  
                                    <config-property name="url">  
                                        http://localhost:8983/solr/babies  
                                    </config-property>  
                                </connection-definition>  
                            </connection-definitions>  
                        </resource-adapter>  
    
  3. Start Red Hat JBoss Data Virtualization.
  4. Create this dynamic VDB file and save it as "babynames-vdb.xml":
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>  
    <vdb name="babynames" version="1">  
        <model name="solr">  
            <source name="solr" translator-name="solr" connection-jndi-name="java:/solr-ds"/>  
        </model>  
        <model name="file_source">  
            <source name="file" translator-name="file" connection-jndi-name="java:/file-ds"/>  
        </model>  
      
        <model name = "file" visible = "true" type = "VIRTUAL" >  
             <metadata type = "DDL"><![CDATA[ 
                  CREATE VIEW babies ( 
                    name varchar, 
                    state varchar, 
                    gender varchar, 
                    birthyear integer, 
                    totalcount integer 
                    ) AS 
                      SELECT A.name, A.state, A.gender, A.birthyear,  A.totalcount FROM 
               (EXEC file_source.getTextFiles('VA.TXT')) AS f, 
                      TEXTTABLE(f.file COLUMNS state string, gender string, birthyear integer, name string, totalcount integer) AS A; 
          ]]>
             </metadata>  
        </model>  
    </vdb> 
    
  5. Deploy the VDB using the console or the CLI.
In order to test your integration, follow these steps.
  1. Make sure your VDB is active by checking the Management Console or your log files, or by using the web console at http://localhost:9990/console/App.html#vdb-runtime
  2. Use a tool like JBDS Data Tools or Squirrel to execute test queries. First check the file source and then the Solr source (there should be no records in Solr at first):
        select * from file.babies  
    
        select * from solr.babies  
    
  3. Insert the file source records into Solr:
        insert into solr.babies (id, name, state, gender, birthyear, totalcount)  
          select f.name, f.name, f.state, f.gender, f.birthyear, f.totalcount from file.babies as f   
    
    You should see 131638 rows affected.
  4. You will now be able to issue SQL commands like these:
     select * from solr.babies  
    select * from solr.babies where name = 'Mary'  
    select * from solr.babies where name like '*ary'  
    select * from solr.babies where birthyear > 1910 and birthyear < 1920
    
  5. If you have other models such as RDBMS, Salesforce or Web Service sources, you can join tables between those sources and Solr, as sketched below. You can also issue SQL commands such as INSERT, UPDATE and DELETE against the Solr source to manage the documents in the store.
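For example, a cross-source join between the flat-file view and the Solr table built in this walkthrough might look like the sketch below (the table and column names are the ones defined in the models above; a join against an RDBMS or Salesforce model would follow the same pattern):
     -- join the file-based view with the Solr-backed table on shared columns
     SELECT s.name, s.state, s.birthyear, s.totalcount
     FROM solr.babies AS s
     JOIN file.babies AS f
       ON s.name = f.name AND s.birthyear = f.birthyear
     WHERE s.birthyear BETWEEN 1910 AND 1920;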

4.9.16. Web Service Data Sources

Web service data sources use a Data Virtualization-specific JCA connector that is deployed into EAP during installation. There are many ways to create the web service data source, using the CLI, AdminShell, the Management Console and so forth. The example shown below uses the CLI tool, as this works in both Standalone and Domain modes.
Execute the following commands using the CLI once you have connected to the server. Make sure you provide the correct endpoint and other properties below. Add any additional properties required by the connector by duplicating the "config-properties" command below. Edit the JNDI name to match the JNDI name you used in the VDB:
batch 
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS:add(jndi-name=java:/wsDS, class-name=org.teiid.resource.adapter.ws.WSManagedConnectionFactory, enabled=true, use-java-context=true) 
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=EndPoint:add(value={end_point}) 
/subsystem=resource-adapters/resource-adapter=webservice:activate 
runbatch
To find out all the properties that are supported by this Web Service Connector, execute the following command in the CLI:
/subsystem=teiid:read-rar-description(rar-name=webservice)
The Web Service Data Source supports specifying a WSDL using the Wsdl property. If the Wsdl property is set, then the ServiceName, EndPointName, and NamespaceUri properties should also be set. The Wsdl property may be a URL or a file location for the WSDL to use. You must restart the server after changing these properties.
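For example, assuming the wsDS connection definition created above, these WSDL-related properties can be added with further config-properties commands (the values in braces are placeholders for your service):
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=Wsdl:add(value={wsdl_url})
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=ServiceName:add(value={service_name})
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=EndPointName:add(value={end_point_name})
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=NamespaceUri:add(value={namespace_uri})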

Table 4.24. Registry Properties

Property Application Required? Default? Description
EndPoint
HTTP and SOAP.
True
Not applicable.
URL for HTTP and service endpoint for SOAP.
SecurityType
HTTP and SOAP.
False
none
Type of authentication to use with the web service. Allowed values are "None", "HTTPBasic", "WSSecurity" and "Kerberos".
AuthUserName
HTTP and SOAP.
False
Not applicable.
Name value for authentication, used in HTTPBasic and WsSecurity.
AuthPassword
HTTP and SOAP.
False
Not applicable.
Password value for authentication, used in HTTPBasic and WsSecurity.
ConfigFile
HTTP and SOAP.
False
Not applicable.
CXF client configuration File or URL.
ConfigName
HTTP and SOAP.
False
Not applicable.
Note that this property is deprecated.
EndPointName
HTTP and SOAP.
False
Teiid
Local part of the end point QName to use with this connection, needs to match one defined in CXF file.
ServiceName
SOAP
False
Not applicable.
Local part of the service QName to use with this connection.
NamespaceUri
SOAP.
False
http://teiid.org
Namespace URI of the service QName to use with this connection.
RequestTimeout
HTTP and SOAP.
False
Not applicable.
Timeout for request.
ConnectTimeout
HTTP and SOAP.
False
Not applicable.
Timeout for connection.
Wsdl
SOAP.
False
Not applicable.
WSDL file or URL for the web service.
Each web service data source may choose a particular CXF config file and port configuration. The ConfigFile config property specifies the Spring XML configuration file for the CXF Bus and port configuration to be used by connections. If no config file is specified then the system default configuration will be used.
Only one port configuration can be used by this data source. You may explicitly set the local name of the port QName to use via the ConfigName property. The namespace URI for the QName in your config file should match your WSDL/namespace setting on the data source or use the default of http://teiid.org. See the CXF Documentation and the sections below on WS-Security, Logging, etc. for examples of using the CXF configuration file.
<beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:http-conf="http://cxf.apache.org/transports/http/configuration"
        xsi:schemaLocation="http://cxf.apache.org/transports/http/configuration
        http://cxf.apache.org/schemas/configuration/http-conf.xsd
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">
 
    <http-conf:conduit name="{http://teiid.org}configName.http-conduit">
        <http-conf:client ConnectionTimeout="120000" ReceiveTimeout="240000"/>
    </http-conf:conduit>
</beans>
In the conduit name {http://teiid.org}configName.http-conduit, the namespace, {http://teiid.org}, may be set via the namespace datasource property. Typically that will only need to be done when also supplying the Wsdl setting. The local name is followed by .http-conduit and is based upon the configName setting, with a default value of teiid.

Note

You do not have to use the Spring configuration to set timeouts. The ConnectionTimeout and ReceiveTimeout can be set via the resource adapter connectTimeout and requestTimeout properties respectively.
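For example, assuming the wsDS connection definition shown earlier, the timeouts can be set as resource adapter config-properties (the millisecond values below are placeholders):
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=ConnectTimeout:add(value=120000)
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=RequestTimeout:add(value=240000)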
To enable the use of WS-Security, the SecurityType should be set to WSSecurity. At this time Teiid does not expect a WSDL to describe the service being used. Thus a Spring XML configuration file is not only required, it must also contain all of the relevant policy configuration. And just as with the general configuration, each data source is limited to specifying only a single port configuration to use:
batch
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS:add(jndi-name=java:/wsDS, class-name=org.teiid.resource.adapter.ws.WSManagedConnectionFactory, enabled=true, use-java-context=true)
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=ConfigFile:add(value=${jboss.server.home.dir}/standalone/configuration/xxx-jbossws-cxf.xml)
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=ConfigName:add(value=port_x)
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=SecurityType:add(value=WSSecurity)
/subsystem=resource-adapters/resource-adapter=webservice:activate
runbatch
This is the corresponding xxx-jbossws-cxf.xml file that adds a timestamp to the SOAP header:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jaxws="http://cxf.apache.org/jaxws"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
          http://www.springframework.org/schema/beans/spring-beans.xsd
          http://cxf.apache.org/jaxws
          http://cxf.apache.org/schemas/jaxws.xsd">
 
    <jaxws:client name="{http://teiid.org}port_x"
        createdFromAPI="true">
        <jaxws:outInterceptors>
            <bean/>
            <ref bean="Timestamp_Request"/>
        </jaxws:outInterceptors>
    </jaxws:client>
 
    <bean
 
        id="Timestamp_Request">
        <constructor-arg>
            <map>
                <entry key="action" value="Timestamp"/>
                </map>
        </constructor-arg>
    </bean>
 
</beans>
Note that the client port configuration is matched to the data source instance by the QName {http://teiid.org}port_x, where the namespace will match your namespace setting or the default of http://teiid.org. The configuration may contain other port configurations with different local names.
WS-Security Kerberos is only supported when the Wsdl property is defined in the resource adapter connection configuration, and only when WSDL-based procedures are used. The WSDL file must contain a WS-Policy section, which is then interpreted and enforced on the endpoint. This is what the sample CXF configuration looks like:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:http="http://cxf.apache.org/transports/http/configuration"
    xmlns:jaxws="http://cxf.apache.org/jaxws"
    xmlns:cxf="http://cxf.apache.org/core"
    xmlns:p="http://cxf.apache.org/policy"
    xmlns:sec="http://cxf.apache.org/configuration/security"
    xsi:schemaLocation="http://www.springframework.org/schema/beans           http://www.springframework.org/schema/beans/spring-beans.xsd           http://cxf.apache.org/jaxws                           http://cxf.apache.org/schemas/jaxws.xsd           http://cxf.apache.org/transports/http/configuration   http://cxf.apache.org/schemas/configuration/http-conf.xsd           http://cxf.apache.org/configuration/security          http://cxf.apache.org/schemas/configuration/security.xsd           http://cxf.apache.org/core http://cxf.apache.org/schemas/core.xsd           http://cxf.apache.org/policy http://cxf.apache.org/schemas/policy.xsd">
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"/>
    <cxf:bus>
        <cxf:features>
            <p:policies/>
            <cxf:logging/>
        </cxf:features>
    </cxf:bus>
 
    <jaxws:client name="{http://webservices.samples.jboss.org/}HelloWorldPort" createdFromAPI="true">
        <jaxws:properties>
            <entry key="ws-security.kerberos.client">
                <bean class="org.apache.cxf.ws.security.kerberos.KerberosClient">
                    <constructor-arg ref="cxf"/>
                    <property name="contextName" value="alice"/>
                    <property name="serviceName" value="bob@service.example.com"/>
                </bean>
            </entry>
        </jaxws:properties>
    </jaxws:client>
</beans>
Next, configure the security-domain in the standalone.xml file under the 'security' subsystem like this:
<security-domain name="alice" cache-type="default">
    <authentication>
        <login-module code="Kerberos" flag="required">
            <module-option name="storeKey" value="true"/>
            <module-option name="useKeyTab" value="true"/>
            <module-option name="keyTab" value="/home/alice/alice.keytab"/>
            <module-option name="principal" value="alice@EXAMPLE.COM"/>
            <module-option name="doNotPrompt" value="true"/>
            <module-option name="debug" value="true"/>
            <module-option name="refreshKrb5Config" value="true"/>
        </login-module>
    </authentication>
</security-domain>
The CXF config property may also be used to control the logging of requests and responses for specific or all ports. Logging, when enabled, will be performed at an INFO level to the org.apache.cxf.interceptor context:
batch
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS:add(jndi-name=java:/wsDS, class-name=org.teiid.resource.adapter.ws.WSManagedConnectionFactory, enabled=true, use-java-context=true)
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=ConfigFile:add(value=${jboss.server.home.dir}/standalone/configuration/xxx-jbossws-cxf.xml)
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=ConfigName:add(value=port_x)
/subsystem=resource-adapters/resource-adapter=webservice:activate
runbatch
Here is the corresponding xxx-jbossws-cxf.xml file:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jaxws="http://cxf.apache.org/jaxws"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
          http://www.springframework.org/schema/beans/spring-beans.xsd
          http://cxf.apache.org/jaxws
          http://cxf.apache.org/schemas/jaxws.xsd">
 
    <jaxws:client name="{http://teiid.org}port_y"
        createdFromAPI="true">
        <jaxws:features>
            <bean class="org.apache.cxf.feature.LoggingFeature"/>
        </jaxws:features>
    </jaxws:client>
 
</beans>
The CXF config property may also be used to control low level aspects of the HTTP transport:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:http-conf="http://cxf.apache.org/transports/http/configuration"
       xsi:schemaLocation="http://cxf.apache.org/transports/http/configuration
                           http://cxf.apache.org/schemas/configuration/http-conf.xsd
                           http://www.springframework.org/schema/beans
                             http://www.springframework.org/schema/beans/spring-beans.xsd">
 
  <http-conf:conduit name="{http://teiid.org}port_z.http-conduit">
     <!-- WARNING ! disableCNcheck=true should NOT be used in production -->
     <http-conf:tlsClientParameters disableCNcheck="true" />
 
  </http-conf:conduit>
</beans>
To use HTTPS, configure the CXF file like this (You can also configure HTTPBasic, kerberos and so forth):
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:sec="http://cxf.apache.org/configuration/security"
    xmlns:http-conf="http://cxf.apache.org/transports/http/configuration"
    xmlns:jaxws="http://java.sun.com/xml/ns/jaxws"
    xsi:schemaLocation="http://cxf.apache.org/transports/http/configuration http://cxf.apache.org/schemas/configuration/http-conf.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://cxf.apache.org/configuration/security http://cxf.apache.org/schemas/configuration/security.xsd">
  
    <http-conf:conduit name="*.http-conduit">
        <http-conf:client ConnectionTimeout="120000" ReceiveTimeout="240000"/>
        <http-conf:tlsClientParameters secureSocketProtocol="SSL">
          <sec:trustManagers>
            <sec:keyStore type="JKS" password="changeit" file="/path/to/truststore.jks"/>
          </sec:trustManagers>
        </http-conf:tlsClientParameters>
    </http-conf:conduit>
</beans>
The Kerberos support is based on SPNEGO, as described in http://cxf.apache.org/docs/client-http-transport-including-ssl-support.html#ClientHTTPTransport%28includingSSLsupport%29-SpnegoAuthentication%28Kerberos%29. There are two types of Kerberos support, the first of which is Negotiation. With this configuration, the REST service is configured with a Kerberos JAAS domain to negotiate a token, which is then used to access the web service. For this, first create a security domain in the standalone.xml file:
<security-domain name="MY_REALM" cache-type="default">
     <authentication>
         <login-module code="Kerberos" flag="required">
             <module-option name="storeKey" value="true"/>
              
             <module-option name="useKeyTab" value="true"/>
             <module-option name="keyTab" value="/home/username/service.keytab"/>
             <module-option name="principal" value="host/testserver@MY_REALM"/>
 
             <module-option name="doNotPrompt" value="true"/>
             <module-option name="debug" value="false"/>
         </login-module>
     </authentication>
</security-domain> 
Set the jboss-cxf-xxx.xml file like this:
<beans xmlns="http://www.springframework.org/schema/beans" 
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
        xmlns:sec="http://cxf.apache.org/configuration/security" 
        xmlns:http-conf="http://cxf.apache.org/transports/http/configuration" 
        xsi:schemaLocation="http://cxf.apache.org/transports/http/configuration http://cxf.apache.org/schemas/configuration/http-conf.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://cxf.apache.org/configuration/security http://cxf.apache.org/schemas/configuration/security.xsd"> 
  
       <http-conf:conduit name="*.http-conduit">
           <http-conf:authorization>
               <sec:AuthorizationType>Negotiate</sec:AuthorizationType>
               <sec:Authorization>MY_REALM</sec:Authorization>
           </http-conf:authorization>
       </http-conf:conduit>
</beans>
The resource adapter creation needs to define these properties:
<config-property name="ConfigFile">path/to/jboss-cxf-xxxx.xml</config-property>
<config-property name="ConfigName">test</config-property>

Important

Even though the above configuration sets the value of "ConfigName", the CXF framework currently does not provide an option to use it in the case of the JAX-RS client. For that reason, use "*.http-conduit", which will apply to all HTTP communication under this resource adapter.
The second method is Delegation. If the user is already logged into Teiid with Kerberos using JDBC/ODBC, or used SPNEGO in the web tier with pass-through authentication into Teiid, then there is no need to negotiate a new Kerberos token; the system can delegate the existing token. To configure delegation, set up the security domain and the jboss-cxf-xxx.xml file exactly as defined for negotiation, but remove the following line from the jboss-cxf-xxx.xml file, as no new token will be negotiated:
<sec:Authorization>MY_REALM</sec:Authorization>
Add the following properties to the web service resource adapter creation. The first indicates that Kerberos security is being used; the second defines the security domain to be used at the data source. In this case we want a security domain that passes through the logged-in user:
<config-property name="SecurityType">Kerberos</config-property>
<security> 
     <security-domain>passthrough-security</security-domain> 
</security> 
To configure the "passthrough-security" security domain you must add the "security" subsystem settings:
<security-domain name="passthrough-security" cache-type="default"> 
    <authentication> 
        <login-module code="org.teiid.jboss.PassthroughIdentityLoginModule" flag="required" module="org.jboss.teiid"> 
            <module-option name="username" value="guest"/> 
            <module-option name="password" value="guest"/> 
        </login-module> 
    </authentication> 
</security-domain> 

Note

The username and password are optional. If there is no authenticated subject available in the context, these can be used to create a simple static user; however, that user will not work with Kerberos authentication because the subject will not have the required Kerberos token.

4.10. Managing Deployed Virtual Databases Using Management Console

4.10.1. Managing Deployed Virtual Databases Using Management Console

Administrators can manage deployed virtual databases from within the Virtual Databases panel.

4.10.2. Opening the Virtual Databases Panel

  1. Log in to the JBoss Management Console.
  2. Select the Runtime tab.
  3. From the navigation tree, select Status → Subsystems → Virtual Databases.

4.11. Resource Adapters

4.11.1. Resource Adapters in JBoss Data Virtualization

With the exception of JDBC data sources, JBoss Data Virtualization provides a JCA adapter for each supported data source. These are the resource adapter identifiers, as specified in the server configuration file:
  • File Adapter - file
  • Google Spreadsheet Adapter - google
  • Red Hat JBoss Data Grid (6.1 & 6.2) Adapter - infinispan
  • LDAP Adapter - ldap
  • Salesforce Adapter - salesforce
  • Web Services Adapter - webservice
  • Mongo DB Adapter (technical preview) - mongodb

Note

A resource adapter for the JDBC translator is provided with JBoss EAP by default.

4.11.2. Configuring Resource Adapters

You can use Management Console, AdminShell, or Management CLI (or, in standalone mode, directly edit the server configuration file) to configure resource adapters for data sources required by a VDB.

Note

Note that in domain mode, you must use Management CLI, Management Console or AdminShell to configure data sources. Refer to the JBoss Enterprise Application Platform Administration and Configuration Guide for an example of how to configure resource adapters using the Management Console.

Important

If you configure a resource adapter using either a CLI script or the AdminShell, the name of the resource adapter must not be the same as that of an existing one. (Teiid Designer handles this by creating a unique name for the resource adapter.)
The reasons for this are as follows:
  • Any time a resource adapter with a duplicate name is added, the server has to be restarted. This conforms with the way Red Hat JBoss EAP's JCA system was designed, in that multiple instances of resource adapters with the same name are only recognised when the server is launched.
  • Anytime you delete a resource adapter by name, all instances with the same name will be deleted.

4.11.3. Example Configuration

Example configuration, as it would appear in the server configuration file, can be found in the EAP_HOME/docs/teiid/datasources/ directory.

Note

This configuration will need to be adjusted with file paths and properties appropriate to your installation. The JNDI name must be the same JNDI name used in the VDB.

4.11.4. Resource Adapter Properties

For a full listing of configuration properties, you can run the following command from within the Management CLI:
/subsystem=teiid:read-rar-description(rar-name=ADAPTER_ID)
ADAPTER_ID refers to the identifier of the resource adapter as specified in the server configuration file.

4.11.5. Configuring Resource Adapters Using CLI Scripts

This is the preferred method for configuring resource adapters.
JBoss Data Virtualization provides an example Management CLI script for configuring each of the provided resource adapters. A script for a particular resource adapter can be found in the EAP_HOME/docs/teiid/datasources/DATASOURCE directory.
When configuring resource adapters using the Management CLI, you must provide a properties file. Sample properties files are provided in the same directory as the sample scripts.
You can run these scripts (once JBoss Data Virtualization is running) by executing the following command:
./EAP_HOME/bin/jboss-cli.sh --connect --file=EAP_HOME/docs/teiid/datasources/DATASOURCE/SCRIPT.cli --properties=EAP_HOME/docs/teiid/datasources/DATASOURCE/SCRIPT.properties
For more information about using the Management CLI, see the JBoss Enterprise Application Platform Administration and Configuration Guide.

Note

These scripts will need to be adjusted with file paths and properties appropriate to your installation. The JNDI name must be the same JNDI name used in the VDB.

4.11.6. File Adapter Properties

The following table describes the configuration properties that can be configured for the File resource adapter:
Config property
Example
Description
ParentDirectory
 
Directory where the data files are stored.
FileMapping
file1.txt=fileX.txt,file2.txt=fileY.txt
Set FileMapping to redirect specific relative paths (case sensitive) to alternative locations. The string value specifies a map in the format key=value(,key=value)*. Optional.
AllowParentPaths
true
Set AllowParentPaths to false to disallow '..' in paths. This prevents requesting files that are not contained in the parent directory. Optional.
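As a sketch, the same properties can be set through the Management CLI, following the pattern used for the other resource adapters in this chapter (the connection definition name fileDS, its JNDI name and the directory path are placeholders; the class name is the one shown in the flat-file resource adapter example earlier in this chapter):
batch
/subsystem=resource-adapters/resource-adapter=file/connection-definitions=fileDS:add(jndi-name=java:/fileDS, class-name=org.teiid.resource.adapter.file.FileManagedConnectionFactory, enabled=true, use-java-context=true)
/subsystem=resource-adapters/resource-adapter=file/connection-definitions=fileDS/config-properties=ParentDirectory:add(value=/path/to/data/files)
/subsystem=resource-adapters/resource-adapter=file/connection-definitions=fileDS/config-properties=AllowParentPaths:add(value=false)
/subsystem=resource-adapters/resource-adapter=file:activate
runbatch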

4.11.7. Google Spreadsheet Resource Adapter Properties

4.11.7.1. Google Spreadsheet Resource Adapter Properties

The following table describes the configuration properties that can be configured for the Google Spreadsheet resource adapter:
Config property
Description
AuthMethod
This is the authentication method used to access Google. If the setting is OAuth2 it is necessary to provide a RefreshToken.
RefreshToken
This is required only if AuthMethod=OAuth2
Username
Username for the Google account. Required only if AuthMethod=ClientLogin
Password
Password for the Google account. Required only if AuthMethod=ClientLogin
SpreadsheetName
The name of the spreadsheet to which this resource adapter is connecting. Required.
BatchSize
The maximum number of rows that can be fetched at a time. Default is 4096.
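As a sketch, assuming a connection definition named googleDS has already been created for the google resource adapter (for example from the sample CLI script described in Section 4.11.5), the properties above map to config-properties commands such as these (the token and spreadsheet values are placeholders):
/subsystem=resource-adapters/resource-adapter=google/connection-definitions=googleDS/config-properties=AuthMethod:add(value=OAuth2)
/subsystem=resource-adapters/resource-adapter=google/connection-definitions=googleDS/config-properties=RefreshToken:add(value={refresh_token})
/subsystem=resource-adapters/resource-adapter=google/connection-definitions=googleDS/config-properties=SpreadsheetName:add(value={spreadsheet_name})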

4.11.7.2. Obtaining an OAuth Refresh Token

When using the Google Spreadsheet resource adapter with OAuth authentication, you will need to obtain an OAuth refresh token.

Procedure 4.9. Obtaining an OAuth Refresh Token

  1. Get an authorization code

    Click on the following link: Get Authorization Code
    Click on Allow access to allow the Teiid Google Connector to access the Google account in which the spreadsheet resides.
  2. Obtain the refresh token

    Copy the authorization code from the previous step into the code field of the following POST request and run it from the command line:
    curl --data-urlencode code=AUTH_CODE \
    --data-urlencode client_id=217138521084.apps.googleusercontent.com \
    --data-urlencode client_secret=gXQ6-lOkEjE1lVcz7giB4Poy \
    --data-urlencode redirect_uri=urn:ietf:wg:oauth:2.0:oob \
    --data-urlencode grant_type=authorization_code https://accounts.google.com/o/oauth2/token
    The refresh token will be in the response.

4.11.8. JBoss Data Grid Resource Adapter Properties

The JBoss Data Grid (JDG) resource adapter can be configured to support the following caching modes:
Cache Type
Obtain Cache By
Remote Cache
using JNDI
Remote Cache
1 or more host:port combinations specified
Remote Cache
referring to HotRod client properties file specified
The following table describes the configuration properties that can be configured for the resource adapter:
Property Name
Required
Property Template
Description
CacheTypeMap
Y
cacheName:className[;pkFieldName] [,cacheName:className[;pkFieldName]..]
This property maps the root Java class name to the cache and identifies the primary key.
module
N
This property specifies the JBoss EAP module containing the cache classes defined in CacheTypeMap.
CacheJndiName
N
This is the JNDI name used to find the CacheContainer.
RemoteServerList
N
host:port[;host:port….]
This property specifies the host (and ports) that will be clustered together to access the caches defined in CacheTypeMap.
ConfigurationFileNameForLocalCache
N
This is the XML configuration file for configuring a local cache.
HotRodClientPropertiesFile
N
This is the HotRod properties file for configuring a connection to a remote cache.
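As a sketch, for a remote cache accessed through a host:port list, the key properties above map to config-properties commands like these (this assumes a connection definition named infinispanRemoteDSL has already been created under the infinispan resource adapter; the cache name, class name and host values are placeholders):
/subsystem=resource-adapters/resource-adapter=infinispan/connection-definitions=infinispanRemoteDSL/config-properties=CacheTypeMap:add(value="PersonsCache:com.example.Person;id")
/subsystem=resource-adapters/resource-adapter=infinispan/connection-definitions=infinispanRemoteDSL/config-properties=RemoteServerList:add(value="127.0.0.1:11222")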

Note

When you use Teiid Designer to reverse-engineer the view into a pojo, a BigDecimal data type is defined in the view. Unfortunately for the Google Protobuf used for serialization, complex data types cannot be converted to either C or C++. It is therefore recommended that you use primitive data types only. (You will come across this situation if you are trying to materialize a view that contains a complex data type or if there is an existing JDG cache that contains a POJO that has complex data types.)
As the protobuf format does not support BigDecimal directly, you have three options:
  1. use all primitive data types
  2. implement a marshaller that will handle the conversion, which means the .proto file will also need to be created (see Red Hat JBoss Data Grid for the creation of files)
  3. create a view that will convert the BigDecimal to a string, then materialize that view.

4.11.9. JDG HotRod Translator

The Infinispan HotRod Translator, known by the type ispn-hotrod, can read the java objects from a remote Red Hat JBoss Data Grid Cache via the Hot Rod client using the Google Protobuf for serialization. This will enable Red Hat JBoss Data Virtualization to query the remote cache using JDG DSL.
This translator retrieves objects from a cache and transforms them into rows and columns. It allows you to perform writes to the cache and enables you to use external materialization to improve query performance.
The connector has these capabilities:
  • Compare Criteria - EQ
  • Compare Criteria Ordered - LT, GT, LE, GE - if the supportsCompareCriteriaOrdered translator override is set to true. It defaults to false.
  • And/Or Criteria
  • In Criteria
  • Like Criteria
  • Order By
  • INSERT, UPDATE, DELETE (non-transactional)
The following is not pushed down to JDG for processing, but is processed within Red Hat JBoss Data Virtualization:
  • Not (NE)
  • IsNull
It currently has these limitations:
  • support for 'Not' has been disabled.
  • boolean data type: JDG will throw an exception if no value is specified on the insert or when no default value is defined in the protobuf definition file.
  • char data type: this is not a supported Protobuf data type (https://developers.google.com/protocol-buffers/docs/proto#scalar). You would either have to handle the conversion in the protobuf marshaller or create a Red Hat JBoss Data Virtualization view that defines the column with the char data type.
  • 1-to-Many relationships currently only support Collection or Array, not Maps.
  • Write transactions are not supported by JDG when you are using the Hot Rod client.
The pojo class is the object that will be used to store the data in the cache. It should be built:
  • To take advantage of the cache's index being enabled, you should annotate the class. See JDG Indexing With Protobuf Annotations at https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Data_Grid/6.6/html-single/Infinispan_Query_Guide/index.html#Custom_Fields_Indexing_with_Protobuf
  • The class should be packaged into a jar so that it can be deployed as a module
Here is an example:
	public class Person {

   @ProtoField(number = 2, required = true)
   public String name;
   @ProtoField(number = 1, required = true)
   public int id;
   @ProtoField(number = 3)
   public String email;
   private List<PhoneNumber> phones;

   public String getName() {
      return name;
   }

   public void setName(String name) {
      this.name = name;
   }

   public int getId() {
      return id;
   }

   public void setId(int id) {
      this.id = id;
   }

   public String getEmail() {
      return email;
   }

   public void setEmail(String email) {
      this.email = email;
   }

   public List<PhoneNumber> getPhones() {
      return phones;
   }

   public void setPhones(List<PhoneNumber> phones) {
      this.phones = phones;
   }
}
	
To configure the use of the pojo, deploy the pojo jar as a module in the Red Hat JBoss EAP server and then define the "lib" property in the -vdb.xml and assign the correct module name. This can be done using the following template:
     <property name ="lib" value ="{pojo_module_name}"></property>
There are several options to defining the metadata representing your object in the cache:
  • "Recommended" Use the Teiid Connection Importer in Teiid Designer to create the physical source model based on your object cache.
  • Use Teiid Designer to manually create the physical source model based on your object cache using the below Definition Requirements.
  • A simple VDB that only defines the data source to use:
        <model name="People" type="Physical">
            <property name="importer.useFullSchemaName" value="false"/>
    
            <source name="infinispan-hotrod-connector" translator-name="ispn-hotrod" connection-jndi-name="java:/infinispanRemoteDSL" />
        </model>
    
    The metadata will be resolved by reverse engineering the defined object in the cache. This can be useful when using the Teiid Designer Teiid Connection Importer for building the physical source model(s).
  • You can also define the metadata using DDL.
    Note that this also shows a container class, PhoneNumber, as an example of a foreign key that defines the relationship.
    <vdb name="PeopleVDB" version="1">
        <model name="People" visible="true">
            <property name="importer.useFullSchemaName" value="false"/>
    
            <source name="infinispan-cache-dsl-connector" translator-name="ispn-hotrod" connection-jndi-name="java:/infinispanRemote" />
    
            <metadata type="DDL"><![CDATA[
    
     CREATE FOREIGN TABLE Person (
    	PersonObject object OPTIONS (NAMEINSOURCE 'this', UPDATABLE FALSE, SEARCHABLE 'Unsearchable', NATIVE_TYPE 'java.lang.Object'),
    	id integer NOT NULL OPTIONS (SEARCHABLE 'Searchable', NATIVE_TYPE 'int'),
    	name string OPTIONS (SEARCHABLE 'Searchable', NATIVE_TYPE 'java.lang.String'),
    	email string OPTIONS (SEARCHABLE 'Searchable', NATIVE_TYPE 'java.lang.String'),
    	CONSTRAINT PK_ID PRIMARY KEY(id)
    ) OPTIONS (NAMEINSOURCE 'PersonsCache', UPDATABLE TRUE);
    
    CREATE FOREIGN TABLE PhoneNumber (
    	number string OPTIONS (NAMEINSOURCE 'phone.number', SEARCHABLE 'Searchable', NATIVE_TYPE 'java.lang.String'),
    	type string OPTIONS (NAMEINSOURCE 'phone.type', SEARCHABLE 'Searchable', NATIVE_TYPE 'java.lang.String'),
    	id integer NOT NULL OPTIONS (SELECTABLE FALSE, UPDATABLE FALSE, SEARCHABLE 'Searchable', NATIVE_TYPE 'int'),
    	CONSTRAINT FK_PERSON FOREIGN KEY(id) REFERENCES Person (id) OPTIONS (NAMEINSOURCE 'phones')
    ) OPTIONS (NAMEINSOURCE 'PersonsCache', UPDATABLE TRUE);
    
             ]]></metadata>
        </model>
    
    </vdb>
    
  • Each Google-registered class in the cache will have a corresponding table created.
  • The table for the root class, must have a primary key defined, which must map to an attribute in the class.
    The data type for the attribute in the class must match the JDG cache key data type.
  • The table "name in source" (NIS) will be the name of the JDG cache this table/class is stored
  • The table columns will be created from the google protobuf definition, that corresponds to a registered class.
  • Columns will be identified as SEARCHABLE if either the protobuf definition for a column indicates its indexed or the pojo class has the attribute/method annotated.
  • Attributes defined as repeatable (i.e., collections, arrays, etc.) or a container class, will be supported as 1-to-* relationships, and will have corresponding registered class (if they are to be searched).
  • A 1-to-* relationship class must have a foreign key to map to the root class/table, where the name in source for the foreign key is the name of the root class method to access those child objects. Note, this is the class method, not a reference in the google protobuf definition.
  • A container/child class will have attributes where the NIS contain a period. Example: phone.number. This is because this maps to to google protobuf definition and what is expected to be used in the DSL query.
This translator supports using the cache for external materialization. However, specific configuration changes are required at the Infinispan-HotRod resource adapter and at the translator.
External materialization is enabled by the use of native queries in the BEFORE_LOAD_SCRIPT and AFTER_LOAD_SCRIPT. A translator override will need to be set to enable native queries:
SupportsNativeQueries=true
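In a dynamic VDB, this override can be expressed with a translator element; a minimal sketch, assuming an override named ispn-hotrod-override that the source model then references through its translator-name attribute:
<!-- translator override enabling native queries for external materialization -->
<translator name="ispn-hotrod-override" type="ispn-hotrod">
    <property name="SupportsNativeQueries" value="true"/>
</translator>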
You must define these properties:

Table 4.25. Configuration Properties

Script Native Query Description
teiid_rel:MATVIEW_BEFORE_LOAD_SCRIPT truncate cache Truncates the cache identified as the staging cache.
teiid_rel:MATVIEW_AFTER_LOAD_SCRIPT swap cache names Swaps the aliases for the caches, so that the primary cache points to the recently-loaded cache.
This example defines the load scripts in DDL:
..
"teiid_rel:MATVIEW_BEFORE_LOAD_SCRIPT" 'execute StockMatCache.native(''truncate cache'');',
"teiid_rel:MATVIEW_LOAD_SCRIPT" 'insert into StockMatCache.Stock (productId, symbol, price, companyName) SELECT  A.ID, S.symbol, S.price, A.COMPANY_NAME FROM Stocks.StockPrices AS S, Accounts.PRODUCT AS A WHERE S.symbol = A.SYMBOL',
"teiid_rel:MATVIEW_AFTER_LOAD_SCRIPT"  'execute StockMatCache.native(''swap cache names'');',
Native queries are used to simulate how the query would be performed in an RDBMS by renaming tables. Because JDG does not currently support renaming a cache, the native queries instead trigger the clearing of the "staging" cache and the swapping of the cache aliases.
Additionally, the execution of native queries is done through the support of direct query procedures. The procedure to be executed is called native.

Warning

This feature is turned off by default because of its security risk: it could allow someone to execute any command against the source. To enable this feature, set the Override Execution Property called SupportsDirectQueryProcedure to true.
See the Infinispan-HotRod resource adapter section for the JCA resource adapter used with this translator. It can be configured to look up the cache container via JNDI, a server list, or Hot Rod client properties.

4.11.10. LDAP Adapter Properties

The following table describes the configuration properties that can be configured for the LDAP resource adapter:
Config property
Property Template
Description
LdapUrl
LDAP Directory URL. This property is mandatory.
LdapAdminUserDN
cn=???,ou=???,dc=???
LDAP administration user DN. This property is mandatory.
LdapAdminUserPassword
LDAP administration password. This property is mandatory.
LdapTxnTimeoutInMillis
LDAP transaction timeout in milliseconds. -1 = no timeout. This property is optional.
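As a sketch, assuming a connection definition named ldapDS has already been created for the ldap resource adapter (for example from the sample CLI script described in Section 4.11.5), the properties map to config-properties commands such as these (the URL, DN and password values are placeholders):
/subsystem=resource-adapters/resource-adapter=ldap/connection-definitions=ldapDS/config-properties=LdapUrl:add(value="ldap://ldapServer:389")
/subsystem=resource-adapters/resource-adapter=ldap/connection-definitions=ldapDS/config-properties=LdapAdminUserDN:add(value="cn=admin,dc=example,dc=com")
/subsystem=resource-adapters/resource-adapter=ldap/connection-definitions=ldapDS/config-properties=LdapAdminUserPassword:add(value={password})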

4.11.11. Salesforce Adapter Properties

The following table describes the configuration properties that can be configured for the Salesforce resource adapter:
Config property
Description
URL
The URL to connect to.
username
The username.
password
The password.
requestTimeout
This is an optional property for setting timeouts, which can also be done through the CXF config.
connectTimeout
This is an optional property for setting timeouts, which can also be done through the CXF config.
configFile
Use this property to supply specific configuration for the SalesForce service. This configuration must contain config for "SforceService" service with namespace "urn:partner.soap.sforce.com".
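As a sketch, assuming a connection definition named sfDS under the salesforce resource adapter (the same names used in the CXF example in Chapter 6), the properties map to config-properties commands such as these (the URL and credential values are placeholders):
/subsystem=resource-adapters/resource-adapter=salesforce/connection-definitions=sfDS/config-properties=URL:add(value={salesforce_login_url})
/subsystem=resource-adapters/resource-adapter=salesforce/connection-definitions=sfDS/config-properties=username:add(value={username})
/subsystem=resource-adapters/resource-adapter=salesforce/connection-definitions=sfDS/config-properties=password:add(value={password})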

4.11.12. Web Services Adapter Properties

The following table describes the configuration properties that can be configured for the Web Services resource adapter:
Config property
Example
Description
EndPoint
End point URL for the web service
SecurityType
HTTPBasic
Use for http basic security.
AuthUserName
Use for http basic security.
AuthPassword
Use for http basic security.
RequestTimeout
This is an optional property for setting timeouts, which can also be done through the CXF config.
ConnectTimeout
This is an optional property for setting timeouts, which can also be done through the CXF config.
ConfigFile
Use these properties to supply specific CXF configuration for this service. This file must contain a configuration for the name defined on the "EndPointName" property.
EndPointName
WebSVC
Use with ConfigFile. These properties to supply specific CXF configuration for this service.

4.11.13. MongoDB Adapter Properties

The following table describes the configuration properties that can be configured for the MongoDB resource adapter:
Config property
Example
Description
RemoteServerList
localhost:27017
MongoDB server list in this form host:port[;host:port...]*
Database
Database name.
Username
Use this property together with Password to supply credentials.
Password
Use this property together with Username to supply credentials.
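As a sketch, assuming a connection definition named mongoDS has already been created for the mongodb resource adapter, the properties map to config-properties commands such as these (the server list and database name are placeholders):
/subsystem=resource-adapters/resource-adapter=mongodb/connection-definitions=mongoDS/config-properties=RemoteServerList:add(value="localhost:27017")
/subsystem=resource-adapters/resource-adapter=mongodb/connection-definitions=mongoDS/config-properties=Database:add(value={database_name})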

Chapter 5. Versioning

5.1. Virtual Database Versioning

Virtual database (VDB) versioning allows you to deploy multiple versions of a VDB simultaneously, and to specify which version will be used in certain scenarios.
When an application connects to JBoss Data Virtualization, you can set the desired version of a VDB using a connection property. (Refer to the Red Hat JBoss Data Virtualization User Guide for more information.)
If a specific version is set, then you can only connect to that VDB. If no version is set, the deployed VDBs are searched until the appropriate version is found. This feature helps support more dynamic migration scenarios.

5.2. Set the VDB Version

You can set the version in one of two ways: through the vdb.xml file, (which is useful for dynamic VDBs), or by specifying a naming convention in the deployment file (such as VDBNAME.VERSION.vdb). The deployer is responsible for choosing an appropriate version number. If there is already a VDB name and version combination that matches the current deployment, then connections to the previous VDB will be terminated and its cache entries will be flushed. Any new connections will then be made to the new VDB.
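For a dynamic VDB, the version is set directly on the vdb element; a minimal sketch based on the babynames VDB used earlier in this guide (the models themselves are unchanged and omitted here):
<vdb name="babynames" version="2">
    <!-- model definitions as in the original version -->
</vdb>
For a VDB archive, the equivalent is the file naming convention, for example babynames.2.vdb.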

5.3. Virtual Database Connection Type

Once your VDB is deployed, you can configure a property in it called connection type. By setting this property, you can determine what type of connections can be made to the VDB. You can set it to one of the following:
  • NONE: disallow new connections.
  • BY_VERSION: (the default setting) allow connections only if the version is specified or if this is the earliest BY_VERSION VDB and there are no VDBs marked as ANY.
  • ANY: allow connections with or without a version specified.
If you only want to migrate a few of your applications to the new version of the VDB, then set it to BY_VERSION. This ensures that only applications that know of the new version may use it.
If only a select few applications are to remain on the current VDB version, then you will need to update their connection settings to reference the current VDB by its version. The newly deployed VDB will then have its connection type set to ANY, which allows all new connections to be made against the newer version.
If you need to undertake a rollback in this scenario, then the newly-deployed VDB will, accordingly, have its connection type set to NONE or BY_VERSION.

5.4. Set the VDB Connection Type via Admin API

You can change a VDB connection type using the changeVDBConnectionType method provided by the Admin interface within the Admin API package (org.teiid.adminapi).
Javadocs for Red Hat JBoss Data Virtualization can be found on the Red Hat Customer Portal.
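A minimal sketch of such a call is shown below. It assumes the server's native management port and credentials; the class and method names are from org.teiid.adminapi, but verify the exact signatures against the Javadocs for your version:
import org.teiid.adminapi.Admin;
import org.teiid.adminapi.AdminFactory;
import org.teiid.adminapi.VDB;

public class ChangeConnectionType {
    public static void main(String[] args) throws Exception {
        // Connect to the management interface (host, port and credentials are placeholders).
        Admin admin = AdminFactory.getInstance().createAdmin(
                "localhost", 9999, "admin", "password".toCharArray());
        try {
            // Allow all new connections to use version 2 of the babynames VDB.
            admin.changeVDBConnectionType("babynames", 2, VDB.ConnectionType.ANY);
        } finally {
            admin.close();
        }
    }
}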

Chapter 6. CXF Configuration

6.1. CXF Configuration

If your configuration uses CXF, you need to specify the data source configuration file and port configuration.
The ConfigFile property, set in the server configuration file for the resource-adapter subsystem, specifies the Spring XML configuration file. (The system's default configuration is used if the configuration file is not specified.)
The EndPointName property, also set in the server configuration file for the resource-adapter subsystem, specifies the port configuration. Set this to the local part of the desired port QName.
The configuration file property is specified via this command:
/subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=ConfigFile:add(value=${jboss.server.home.dir}/standalone/configuration/xxx-jbossws-cxf.xml)

6.2. Configure CXF for a Web Service Data Source

Prerequisites

  • You must have configured a web service data source.

Procedure 6.1. Configure CXF for a Web Service Data Source

  1. Specify the Web Service CXF ConfigFile

    Run the following command from within the Management CLI, specifying the CXF configuration file for the data source:
    /subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=ConfigFile:add(value=CONFIG-FILE.xml)
  2. Specify the Web Service CXF EndPointName

    Specify the port configuration by running the following command from within the Management CLI, using the port QName (local part only) for the value:
    /subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=EndPointName:add(value=CONFIG-NAME)
  3. Create/Edit the CXF Configuration File

    Open/create the EAP_HOME/MODE/configuration/CONFIG-FILE.xml configuration file.
    <http-conf:conduit name="{NAMESPACE}CONFIG-NAME.http-conduit">
    ...
    </http-conf:conduit>

    Note

    CONFIG-NAME is the same as that specified above, with a default value of teiid. NAMESPACE is the namespace URI for the QName in your config file, which should match your WSDL/namespace setting on the data source or use the default of "http://teiid.org". The namespace may be set via the namespace datasource property. Typically that will only need to be done when also supplying the WSDL setting.
    The following is an example of a CXF file configuring timeouts:
    <beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:http-conf="http://cxf.apache.org/transports/http/configuration"
      xsi:schemaLocation="http://cxf.apache.org/transports/http/configuration
      http://cxf.apache.org/schemas/configuration/http-conf.xsd
      http://www.springframework.org/schema/beans
      http://www.springframework.org/schema/beans/spring-beans.xsd">
    
      <http-conf:conduit name="{NAMESPACE}CONFIG-NAME.http-conduit">
        <http-conf:client ConnectionTimeout="120000" ReceiveTimeout="240000"/>
      </http-conf:conduit>
    </beans>

Note

For web service data sources, CXF configuration is only applicable to non-binary calls.

6.3. Configure CXF for a Web Service Data Source: WS-Security

Prerequisites

  • The web service data source must be configured and the ConfigFile and EndPointName properties must be configured for CXF.

Procedure 6.2. Configure CXF for a Web Service Data Source: WS-Security

  1. Specify the CXF SecurityType

    Run the following command from within the Management CLI, using WSSecurity as the value for SecurityType:
    /subsystem=resource-adapters/resource-adapter=webservice/connection-definitions=wsDS/config-properties=SecurityType:add(value=WSSecurity)
  2. Modify the CXF Configuration File

    Open the CXF configuration file for the web service data source and add your desired properties.
    The following is an example of a web service data source CXF configuration file adding a timestamp to the SOAP header:
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:jaxws="http://cxf.apache.org/jaxws"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
              http://www.springframework.org/schema/beans/spring-beans.xsd
              http://cxf.apache.org/jaxws
              http://cxf.apache.org/schemas/jaxws.xsd">
    
        <jaxws:client name="{http://teiid.org}.teiid" 
            createdFromAPI="true">
            <jaxws:outInterceptors>
                <ref bean="Timestamp_Request"/>
            </jaxws:outInterceptors>
        </jaxws:client>
    
        <bean 
            id="Timestamp_Request">
            <constructor-arg>
                <map>
                    <entry key="action" value="Timestamp"/>
                </map>
            </constructor-arg>
        </bean>
        
    </beans>

Note

  • A WSDL is not expected to describe the service being used.
  • The Spring XML configuration file must contain the relevant policy configuration.
  • The client port configuration is matched to the data source instance by the CONFIG-NAME. The configuration may contain other port configurations with different local names.

References

6.4. Configure CXF for a Web Service Data Source: Logging

CXF configuration can control the logging of requests and responses for specific or all ports. Logging, when enabled, is performed at an INFO level to the org.apache.cxf.interceptor context.

Prerequisites

  • The web service data source must be configured and the ConfigFile and EndPointName properties must be configured for CXF.

Procedure 6.3. Configure CXF for a Web Service Data Source: Logging

  • Modify the CXF Configuration File

    Open the CXF configuration file for the web service data source and add your desired logging properties.
    The following is an example of a CXF configuration file for a web service data source that enables logging:
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:jaxws="http://cxf.apache.org/jaxws"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
              http://www.springframework.org/schema/beans/spring-beans.xsd
              http://cxf.apache.org/jaxws
              http://cxf.apache.org/schemas/jaxws.xsd">
    
        <jaxws:client name="{http://teiid.org}teiid" 
            createdFromAPI="true">
            <jaxws:features>
                <bean class="org.apache.cxf.feature.LoggingFeature"/>
            </jaxws:features>
        </jaxws:client>
        
    </beans>

References

6.5. Configure CXF for a Web Service Data Source: Transport Settings

CXF configuration can also control low level aspects of the HTTP transport.

Prerequisites

  • The web service data source must be configured and the ConfigFile and EndPointName properties must be configured for CXF.

Procedure 6.4. Configure CXF for a Web Service Data Source: Transport Settings

  • Open the CXF configuration file for the web service data source and add your desired transport properties.
    The following is an example of a CXF configuration file for a web service data source that disables hostname verification:
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:jaxws="http://cxf.apache.org/jaxws"
           xsi:schemaLocation="http://cxf.apache.org/transports/http/configuration
              http://cxf.apache.org/schemas/configuration/http-conf.xsd
              http://www.springframework.org/schema/beans
              http://www.springframework.org/schema/beans/spring-beans.xsd">
    
        <http-conf:conduit name="{http://teiid.org}teiid.http-conduit">
          <http-conf:tlsClientParameters disableCNcheck="true" />
        </http-conf:conduit>
        
    </beans>

Warning

disableCNcheck=true must NOT be used in production.

6.6. Configure CXF for a Web Service Data Source: SSL Support (HTTPS)

To use HTTPS, configure the CXF configuration file as shown below:
<beans xmlns="http://www.springframework.org/schema/beans" 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
    xmlns:sec="http://cxf.apache.org/configuration/security" 
    xmlns:http-conf="http://cxf.apache.org/transports/http/configuration" 
    xmlns:jaxws="http://java.sun.com/xml/ns/jaxws" 
    xsi:schemaLocation="http://cxf.apache.org/transports/http/configuration http://cxf.apache.org/schemas/configuration/http-conf.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://cxf.apache.org/configuration/security http://cxf.apache.org/schemas/configuration/security.xsd">
 
    <http-conf:conduit name="*.http-conduit">
        <http-conf:client ConnectionTimeout="120000" ReceiveTimeout="240000"/>
        <http-conf:tlsClientParameters secureSocketProtocol="SSL">
          <sec:trustManagers>
            <sec:keyStore type="JKS" password="changeit" file="/path/to/truststore.jks"/>
          </sec:trustManagers>
        </http-conf:tlsClientParameters>
    </http-conf:conduit>
</beans>
For more information about http-conduit based configuration, see http://cxf.apache.org/docs/client-http-transport-including-ssl-support.html. You can also configure services such as HTTPBasic and Kerberos.

6.7. Configure CXF for a Salesforce Data Source

Prerequisites

  • You must have configured a Salesforce data source.

Procedure 6.5. Configure CXF for a Salesforce Data Source

  1. Run the following command from within the Management CLI, specifying the CXF configuration file for the data source:
    /subsystem=resource-adapters/resource-adapter=salesforce/connection-definitions=sfDS/config-properties=ConfigFile:add(value=CONFIG-FILE.xml)
  2. Specify the port configuration by running the following command from within the Management CLI, using the port QName (local part only) for the value, in this case Soap:
    /subsystem=resource-adapters/resource-adapter=salesforce/connection-definitions=sfDS/config-properties=EndPointName:add(value=Soap)
  3. Open/create the EAP_HOME/MODE/configuration/CONFIG-FILE.xml configuration file. Set the namespace URI for the QName to {urn:partner.soap.sforce.com}, using Soap as the value for EndPointName:
    <http-conf:conduit name="{urn:partner.soap.sforce.com}Soap.http-conduit">
    ...
    </http-conf:conduit>
    The following is an example of a CXF file that configures timeout values:
    <beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:http-conf="http://cxf.apache.org/transports/http/configuration"
      xsi:schemaLocation="http://cxf.apache.org/transports/http/configuration
      http://cxf.apache.org/schemas/configuration/http-conf.xsd
      http://www.springframework.org/schema/beans
      http://www.springframework.org/schema/beans/spring-beans.xsd">
    
      <http-conf:conduit name="{urn:partner.soap.sforce.com}Soap.http-conduit">
        <http-conf:client ConnectionTimeout="120000" ReceiveTimeout="240000"/>
      </http-conf:conduit>
    </beans>

Note

CXF configuration for Salesforce data sources is only used for http bus configuration. It is not used for WS-Security. Salesforce has its own security authentication.

Chapter 7. Logging

7.1. Overview of Logging

JBoss EAP 6 provides highly configurable logging facilities for both its own internal use and for use by deployed applications. The logging subsystem is based on JBoss LogManager and it supports several third party application logging frameworks in addition to JBoss Logging.
The logging subsystem is configured using a system of log categories and log handlers. Log categories define what messages to capture, and log handlers define how to deal with those messages (write to disk, send to console etc).
Logging Profiles allow uniquely named sets of logging configuration to be created and assigned to applications independent of any other logging configuration. The configuration of logging profiles is almost identical to the main logging subsystem.

7.2. Default Log File Locations

These are the log files that get created for the default logging configurations. The default configuration writes the server log files using periodic log handlers. The server is configured to write to teiid-command.log and teiid-audit.log by default.

Table 7.1. Default Log File for a standalone server

Log File Description
EAP_HOME/standalone/log/server.log
Server Log. Contains all server log messages, including server startup messages.
EAP_HOME/standalone/log/gc.log
Garbage collection log. Contains details of all garbage collection.

Table 7.2. Default Log Files for a managed domain

Log File Description
EAP_HOME/domain/log/host-controller.log
Host Controller boot log. Contains log messages related to the startup of the host controller.
EAP_HOME/domain/log/process-controller.log
Process controller boot log. Contains log messages related to the startup of the process controller.
EAP_HOME/domain/servers/SERVERNAME/log/server.log
The server log for the named server. Contains all log messages for that server, including server startup messages.

7.3. JBoss Data Virtualization Log Categories

Within EAP log files, all information specific to JBoss Data Virtualization is prefixed with the org.teiid log category identifier.
Logs originating from third-party code and other components not related to JBoss Data Virtualization, including integrated org.jboss components, are prefixed with their own log category identifiers, not org.teiid.
The table below summarizes the categories relevant to JBoss Data Virtualization. For a complete listing, see the standalone.xml (or domain.xml for domain mode) file.

Table 7.3. JBoss Data Virtualization Log Categories

Log Category Description
com.arjuna
Third-party transaction manager. This will include information about all transactions, not just those for JBoss Data Virtualization.
org.teiid
Root category identifier for all JBoss Data Virtualization logs. Note: there are potentially more contexts used under org.teiid than those shown in this table.
org.teiid.PROCESSOR
Query processing logs. Also see org.teiid.PLANNER.
org.teiid.PLANNER
Query planning logs.
org.teiid.SECURITY
Session/Authentication events. Also see org.teiid.AUDIT_LOG.
org.teiid.TRANSPORT
Events related to the socket transport.
org.teiid.RUNTIME
Events related to work management and system start/stop.
org.teiid.CONNECTOR
Connector logs.
org.teiid.BUFFER_MGR
Buffer and storage management logs.
org.teiid.TXN_LOG
Detailed log of all transaction operations.
org.teiid.COMMAND_LOG
User command, query plan, and data source command events. See Section 7.4, “Command Logging”.
org.teiid.AUDIT_LOG
Audit events such as permission checks and authentication success/failure. See Section 7.5, “Audit Logging”.
org.teiid.ADMIN_API
Admin API logs.
org.teiid.ODBC
ODBC logs.

Note

Refer to the JBoss Enterprise Application Platform Administration and Configuration Guide for more information about logging.

7.4. Command Logging

Command logging captures user commands that have been submitted to JBoss Data Virtualization, query plan commands when query planning is performed, and data source commands that are being executed by connectors.
The user command, "START USER COMMAND", is logged when JBoss Data Virtualization starts working on the query for the first time. This does not include the time the query was waiting in the queue. A corresponding user command, "END USER COMMAND", is logged when the request is complete (that is, when the statement is closed or all the batches are retrieved). There is only one pair of these for every user query.
The query plan command, "PLAN USER COMMAND", is logged when JBoss Data Virtualization finishes the query planning process. There is no corresponding ending log entry.
Non-plan user events are logged at the INFO level.
The data source command, "START DATA SRC COMMAND", is logged when a query is sent to the data source. A corresponding data source command, "END SRC COMMAND", is logged when the execution is closed (that is, when all the rows have been read). There can be one pair for each data source query that has been executed by JBoss Data Virtualization, and there can be any number of pairs depending upon your query.
With this information being captured, the overall query execution time can be calculated. Additionally, each source query execution time can be calculated. If the overall query execution time is showing a performance issue, then look at each data source execution time to see where the issue may be.
To enable command logging to the default log location, set the logging level of the org.teiid.COMMAND_LOG context to DEBUG. You can do this with the Management CLI or through the web-based Management Console.
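For example, the following Management CLI commands are one way to do this (a sketch; if the logger is already defined in your configuration, use only the write-attribute form to change its level):
/subsystem=logging/logger=org.teiid.COMMAND_LOG:add(level=DEBUG)
/subsystem=logging/logger=org.teiid.COMMAND_LOG:write-attribute(name=level,value=DEBUG)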
To enable command logging to an alternative file location, configure a separate file handler for DEBUG logging of the org.teiid.COMMAND_LOG context. An example of this is shown below and can also be found in the standalone.xml file.
<periodic-rotating-file-handler name="COMMAND_FILE">
    <level name="DEBUG" />
    <formatter>
        <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n" />
    </formatter>
    <file relative-to="jboss.server.log.dir" path="command.log" />
    <suffix value=".yyyy-MM-dd" />
</periodic-rotating-file-handler>
 
<logger category="org.teiid.COMMAND_LOG">
    <level name="DEBUG" />
    <handlers>
        <handler name="COMMAND_FILE" />
    </handlers>
</logger>
See Red Hat JBoss Data Virtualization Development Guide: Server Development for information on developing a custom logging solution if file-based logging (or any other built-in log4j logging) is not sufficient.
The following is an example of a data source command and what one would look like when printed to the command log:
2012-02-22 16:01:53,712 DEBUG [org.teiid.COMMAND_LOG] (Worker1_QueryProcessorQueue11) START DATA SRC COMMAND: startTime=2012-02-22 16:01:53.712 
requestID=Ku4/dgtZPYk0.5 sourceCommandID=4 txID=null modelName=DTHCP translatorName=jdbc-simple sessionID=Ku4/dgtZPYk0 
principal=user@teiid-security 
sql=SELECT HCP_ADDR_XREF.HUB_ADDR_ID, CPN_PROMO_HIST.PROMO_STAT_DT FROM CPN_PROMO_HIST, HCP_ADDRESS, HCP_ADDR_XREF 
WHERE (HCP_ADDRESS.ADDR_ID = CPN_PROMO_HIST.SENT_ADDR_ID) AND (HCP_ADDRESS.ADDR_ID = HCP_ADDR_XREF.ADDR_ID) AND 
(CPN_PROMO_HIST.PROMO_STAT_CD NOT LIKE 'EMAIL%') AND (CPN_PROMO_HIST.PROMO_STAT_CD <> 'SENT_EM') AND 
(CPN_PROMO_HIST.PROMO_STAT_DT > {ts'2010-02-22 16:01:52.928'})
Note the following pieces of information:
  • modelName: the physical model for the data source against which the query is being issued.
  • translatorName: the type of translator used to communicate with the data source.
  • principal: the user account that submitted the query.
  • startTime/endTime: the time of the action, which is based on the type of command being executed.
  • sql: the command submitted to the translator for execution, which is NOT necessarily the final SQL command submitted to the actual data source. It does, however, show what the query engine decided to push down.

7.5. Audit Logging

Audit logging captures important security events, including the enforcement of permissions, and authentication success/failure.
See Red Hat JBoss Data Virtualization Development Guide: Server Development for information on developing a custom logging solution if file-based logging (or any other built-in log4j logging) is not sufficient.
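Audit logging is turned on by setting the org.teiid.AUDIT_LOG context to DEBUG in the logging subsystem. A minimal Management CLI sketch, assuming the logger is not already defined (if it is, use write-attribute on its level instead):
/subsystem=logging/logger=org.teiid.AUDIT_LOG:add(level=DEBUG)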

7.6. Enable Audit and Command Logging

You need to enable audit and command logging via the Web Console. You will also need to grant at least some users the logging role.

Note

If using a database to store the logs, then the Dashboard logging workspace can be used.
The teiidUser requires logging role permissions for this to work (you most likely configured this through the installer).

Chapter 8. Clustering

8.1. Clustering in Red Hat JBoss Data Virtualization

Clustering may be used to improve the performance of the system through:
Load Balancing
See Red Hat JBoss Data Virtualization Development Guide: Client Development for information on load balancing between multiple nodes.
Failover
See Red Hat JBoss Data Virtualization Development Guide: Client Development for information on failover with multiple nodes.
Distributed Caching
In cluster mode, result set caches and internal materialization caches are shared across the cluster automatically. Therefore no further configuration is needed.
Event Distribution
Metadata and data modifications will be distributed to all members of a cluster automatically when clustering is configured.

8.2. Enable Clustering in JBoss Data Virtualization

Ensure JBoss Data Virtualization is installed on each JBoss EAP node and that JBoss EAP has started using either the standalone-ha.xml or the standalone-full-ha.xml profile before starting the cluster. (These profiles provide high availability, or clustering, as their names imply.)
For more information, see the Red Hat JBoss Data Virtualization Installation Guide and Red Hat JBoss Enterprise Application Platform documentation.

Note

Artifacts such as VDBs cannot be deployed via file deployment when clustering is used. Use other methods of VDB deployment.
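For example, a VDB can instead be deployed to a running node with the Management CLI deploy command shown in Section 11.11, “Teiid Management CLI” (a sketch; my-vdb.xml is a placeholder file name):
deploy my-vdb.xml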

Part V. Monitoring and Performance

Chapter 9. Monitoring

9.1. Monitoring Red Hat JBoss Data Virtualization

Red Hat JBoss Data Virtualization provides information about its current operational state. This information can be useful in tuning, monitoring, and managing load and throughput. Runtime data can be accessed using administrative tools such as web-console, AdminShell or Admin API.

9.2. Query/Session Details

Name
Description
Current Sessions
Lists currently connected sessions
Current Requests
Lists currently executing requests
Current Transactions
Lists currently executing transactions
Query Plan
Retrieves the query plan for a specific request
There are administrative options for terminating sessions, queries, and transactions.
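These details map to Management CLI operations such as the following (taken from the runtime operations listed in Section 11.11, “Teiid Management CLI”; sessionid is a placeholder for an actual session identifier):
/subsystem=teiid:list-sessions()
/subsystem=teiid:list-requests()
/subsystem=teiid:list-transactions()
/subsystem=teiid:terminate-session(session=sessionid)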

9.3. Session/Query Metrics

Name
Property
Description
Comment
Session Count
sessionCount
Indicates the number of user connections currently active.
To ensure the number of sessions is not restricted at peak times, check that max-sessions-allowed (default 5000) is set accordingly and review sessions-expiration-timelimit.
Query Count
queryCount
Indicates the number of queries currently active.
 
Active Query Plan Count
ENGINE_STATISTIC.active-plans-count
Number of query plans currently being processed.
To ensure maximum throughput, see the performance tuning section on QueryEngine and threading.
Waiting Query Plan Count
ENGINE_STATISTIC.waiting-plans-count
Number of query plans currently waiting.
 
Max Waiting Query Plan Watermark
ENGINE_STATISTIC.max-waitplan-watermark
The maximum number of query plans that have been waiting at one time, since the last time the server started.
 
Long Running Queries
longRunningQueries
Lists currently executing queries that have surpassed the query threshold (query-threshold-in-seconds).
Set up an alert to warn when one or more queries are consuming resources for an extended period of time. If a query runs too long, you can cancel the request or increase the threshold.
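The ENGINE_STATISTIC values in this table (and in the following sections) can be retrieved at runtime with the following Management CLI operation:
/subsystem=teiid:engine-statistics()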

9.4. Buffer Manager Metrics

Name
Property
Description
Comment
Disk Write Count
ENGINE_STATISTIC.buffermgr-disk-write-count
Disk write count for the buffer manager.
Disk Read Count
ENGINE_STATISTIC.buffermgr-disk-read-count
Disk read count for the buffer manager.
Cache Write Count
ENGINE_STATISTIC.buffermgr-cache-write-count
Cache write count for the buffer manager.
Cache Read Count
ENGINE_STATISTIC.buffermgr-cache-read-count
Cache read count for the buffer manager.
Disk Space Used (MB)
ENGINE_STATISTIC.buffermgr-diskspace-used-mb
Indicates the amount of storage space currently used by buffer files.
Set up an alert to warn when used buffer space is at an unacceptable level, based on the setting of max-buffer-space.
Total memory in use (KB)
ENGINE_STATISTIC.total-memory-inuse-kb
Estimate of the current memory usage in kilobytes.
Total memory in use by active plans (KB)
ENGINE_STATISTIC.total-memory-inuse-active-plans-kb
Estimate of the current memory usage by active plans in kilobytes

9.5. Cache Metrics

Name
Property
Description
Prepared Plan Cache Size
PREPARED_PLAN_CACHE.total-entries
Current number of entries in cache.
Prepared Plan Cache # of Requests
PREPARED_PLAN_CACHE.request-count
Total number of requests made against cache.
Prepared Plan Cache Hit Ratio %
PREPARED_PLAN_CACHE.hit-ratio
Percentage of positive cache hits
ResultSet Cache Size
QUERY_SERVICE_RESULT_SET_CACHE.total-entries
Current number of entries in cache.
ResultSet Cache # of Requests
QUERY_SERVICE_RESULT_SET_CACHE.request-count
Total number of requests made against cache.
ResultSet Cache Hit Ratio %
QUERY_SERVICE_RESULT_SET_CACHE.hit-ratio
Percentage of positive cache hits.
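These cache metrics can be retrieved at runtime with the following Management CLI operations:
/subsystem=teiid:cache-statistics(cache-type=PREPARED_PLAN_CACHE)
/subsystem=teiid:cache-statistics(cache-type=QUERY_SERVICE_RESULT_SET_CACHE)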

Chapter 10. Performance Tuning

10.1. Memory Management Considerations

Here is some information about settings related to memory management.
Red Hat JBoss Data Virtualization uses a BufferManager to track both memory and disk usage. Configuring the BufferManager properly is one of the most important parts of ensuring high performance. In most instances though the default settings are sufficient as they will scale with the JVM and consider other properties such as the setting for max active plans. Execute the following command using the CLI to find all of the possible settings for the BufferManager:
/subsystem=teiid:read-resource
All of the properties that start with "buffer-service" can be used to configure BufferManager. Here are the CLI commands you can use to change these settings:
/subsystem=teiid:write-attribute(name=buffer-service-use-disk,value=true)
/subsystem=teiid:write-attribute(name=buffer-service-encrypt-files,value=false)
/subsystem=teiid:write-attribute(name=buffer-service-processor-batch-size,value=256)
/subsystem=teiid:write-attribute(name=buffer-service-max-open-files,value=64)
/subsystem=teiid:write-attribute(name=buffer-service-max-file-size,value=2048)
/subsystem=teiid:write-attribute(name=buffer-service-max-processing-kb,value=-1)
/subsystem=teiid:write-attribute(name=buffer-service-max-reserve-kb,value=-1)
/subsystem=teiid:write-attribute(name=buffer-service-max-buffer-space,value=51200)
/subsystem=teiid:write-attribute(name=buffer-service-max-inline-lobs,value=true)
/subsystem=teiid:write-attribute(name=buffer-service-memory-buffer-space,value=-1)
/subsystem=teiid:write-attribute(name=buffer-service-max-storage-object-size,value=8388608)
/subsystem=teiid:write-attribute(name=buffer-service-memory-buffer-off-heap,value=false)

Warning

Do not change these properties unless you understand them and the specific issue you are trying to address.
Here is an explanation of the main settings:
  • max-reserve-kb (default -1) - setting determines the total size in kilobytes of batches that can be held by the BufferManager in memory. This number does not account for persistent batches held by soft (such as index pages) or weak references. The default value of -1 will auto-calculate a typical max based upon the max heap available to the VM. The auto-calculated value assumes a 64bit architecture and will limit buffer usage to 40% of the first gigabyte of memory beyond the first 300 megabytes (which are assumed for use by the AS and other Red Hat JBoss Data Virtualization purposes) and 50% of the memory beyond that. The additional caveat here is that if the size of the memory buffer space is not specified, then it will effectively be allocated out of the max reserve space. A small adjustment is also made to the max reserve to account for batch tracking overhead.
    With default settings and an 8GB VM size, max-reserve-kb will at a maximum use: (((1024-300) * 0.4) + (7 * 1024 * 0.5)) = 3873.6 MB, or 3,966,566 KB.
  • The BufferManager automatically triggers the use of a canonical value cache if enabled when more than 25% of the reserve is in use. This can dramatically cut the memory usage in situations where similar value sets are being read through Red Hat JBoss Data Virtualization but does introduce a lookup cost. If you are processing small or highly similar datasets through Red Hat JBoss Data Virtualization and wish to conserve memory, you should consider enabling value caching.
    Memory consumption can be significantly more or less than the nominal target depending upon actual column values and whether value caching is enabled. Large non built-in type objects can exceed their default size estimate. If out of memory errors occur, then set a lower max-reserve-kb value. Also note that source LOB values are held by memory references that are not cleared when a batch is persisted. With heavy LOB usage you should ensure that buffers or other memory associated with LOB references are appropriately sized.
  • max-processing-kb (default -1) - setting determines the total size in kilobytes of batches that can be guaranteed for use by one active plan and may be in addition to the memory held based on max-reserve-kb. Typical minimum memory required by Red Hat JBoss Data Virtualization when all the active plans are active is #active-plans*max-processing-kb. The default value of -1 will auto-calculate a typical max based upon the max heap available to the VM and max active plans. The auto-calculated value assumes a 64bit architecture and will limit nominal processing batch usage to less than 10% of total memory.
    With default settings including 20 active-plans and an 8GB VM size, max-processing-kb will be: (.07 * 8 * 1024) / 20^.8 = 573.4 MB / 11 = 52.2 MB, or 53,453 KB, per plan. This implies a nominal range between 0 and 1060 MB that may be reserved, with roughly 53 MB per plan. You should be cautious in adjusting max-processing-kb on your own. Typically it will not need to be adjusted unless you are seeing situations where plans seem memory constrained with low performing large sorts.
  • max-file-size (default 2GB) - Each intermediate result buffer, temporary LOB, and temporary table is stored in its own set of buffer files, where an individual file is limited to max-file-size megabytes. Consider increasing the storage space available to all such files by increasing max-buffer-space, if your installation makes use of internal materialization, makes heavy use of SQL/XML, or processes large row counts.
  • processor-batch-size (default 256) - Specifies the target row count of a batch of the query processor. A batch is used to represent both linear data stores, such as saved results, and temporary table pages. Teiid will adjust the processor-batch-size to a working size based upon an estimate of the data width of a row relative to a nominal expectation of 2KB. The base value can be doubled or halved up to three times depending upon the data width estimation. For example a single small fixed width (such as an integer) column batch will have a working size of processor-batch-size * 8 rows. A batch with hundreds of variable width data (such as string) will have a working size of processor-batch-size / 8 rows. Any increase in the processor batch size beyond the first doubling should be accompanied with a proportional increase in the max-storage-object-size to accommodate the larger storage size of the batches.
    Additional considerations are needed if large VM sizes and/or datasets are being used. Red Hat JBoss Data Virtualization has a non-negligible amount of overhead per batch/table page on the order of 100-200 bytes. If you are dealing with datasets with billions of rows and you run into OutOfMemory issues, consider increasing the processor-batch-size to force the allocation of larger batches and table pages. A general guideline would be to double processor-batch-size for every doubling of the effective heap for Red Hat JBoss Data Virtualization beyond 4 GB - processor-batch-size = 512 for an 8 GB heap, processor-batch-size = 1024 for a 16 GB heap, etc.
  • max-storage-object-size (default 8388608, or 8MB) - The maximum size of a buffered managed object in bytes and represents the individual batch page size. If the processor-batch-size is increased and/or you are dealing with extremely wide result sets (several hundred columns), then the default setting of 8MB for the max-storage-object-size may be too low. The inline-lobs setting also can increase the size of batches containing small LOBs. The sizing for max-storage-object-size is in terms of serialized size, which will be much closer to the raw data size than the Java memory footprint estimation used for max-reserve-kb. max-storage-object-size should not be set too large relative to memory-buffer-space since it will reduce the performance of the memory buffer. The memory buffer supports only 1 concurrent writer for each max-storage-object-size of the memory-buffer-space. Note that this value does not typically need to be adjusted unless the processor-batch-size is adjusted, in which case consider adjusting it in proportion to the increase of the processor-batch-size.
    If exceptions occur related to missing batches and "TEIID30001 Max block number exceeded" is seen in the server log, then increase the max-storage-object-size to support larger storage objects. Alternatively you could make the processor-batch-size smaller.
  • memory-buffer-space (default -1) - This controls the amount of on or off heap memory allocated as byte buffers for use by the Red Hat JBoss Data Virtualization buffer manager, measured in megabytes. This setting defaults to -1, which automatically determines a setting based upon whether it is on or off heap and the value for max-reserve-kb. The memory buffer supports only 1 concurrent writer for each max-storage-object-size of the memory-buffer-space. Any additional space serves as a cache for the serialized form of batches.
    When left at the default setting the calculated memory buffer space will be approximately 40% of the max-reserve-kb size. If the memory buffer is on heap and the max-reserve-kb is automatically calculated, then the memory buffer space will be subtracted out of the effective max-reserve-kb. If the memory buffer is off heap and the max-reserve-kb is automatically calculated, then its size will be reduced slightly to allow for effectively more working memory in the VM.
  • memory-buffer-off-heap (default false) - Take advantage of the BufferManager memory buffer to access system memory without allocating it to the heap. Setting memory-buffer-off-heap to "true" will allocate the Red Hat JBoss Data Virtualization memory buffer off heap. Depending on whether your installation is dedicated to Red Hat JBoss Data Virtualization and the amount of system memory available, this may be preferable to on-heap allocation. The primary benefit is additional memory usage for Red Hat JBoss Data Virtualization without additional garbage collection tuning. This becomes especially important in situations where more than 32GB of memory is desired for the VM. Note that when using off-heap allocation, the memory must still be available to the java process and that setting the value of memory-buffer-space too high may cause the VM to swap rather than reside in memory. With large off-heap buffer sizes (greater than several gigabytes) you may also need to adjust VM settings.
For Sun VMs the relevant VM settings are MaxDirectMemorySize and UseLargePages. Here is an example:
-XX:MaxDirectMemorySize=12g -XX:+UseLargePages
Adding this to the VM process arguments would allow for an effective allocation of approximately an 11GB Red Hat JBoss Data Virtualization memory buffer (the memory-buffer-space setting) accounting for any additional direct memory that may be needed by Red Hat JBoss EAP or applications running within EAP.
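One way to apply these arguments (a sketch; the exact location depends on how you start the server) is to append them to JAVA_OPTS in the EAP_HOME/bin/standalone.conf file:
JAVA_OPTS="$JAVA_OPTS -XX:MaxDirectMemorySize=12g -XX:+UseLargePages"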

10.2. Scalability Considerations

Although you can find information about all JBoss Data Virtualization settings using the Management CLI (see Section 11.1, “JBoss Data Virtualization Settings”), this section provides some additional information about those settings related to scalability.
buffer-service-processor-batch-size
Default is 256. This property specifies the maximum row count of a batch sent internally within the query processor. Additional considerations are needed if extremely large VM sizes and/or datasets are being used. JBoss Data Virtualization has a non-negligible amount of overhead per batch/table page on the order of 100-200 bytes. Depending on the data types involved, each full batch/table page will represent a variable number of rows (a power of two multiple above or below the processor batch size).
If you are working with extremely large datasets and you run into memory issues, consider increasing the buffer-service-processor-batch-size property to force the allocation of larger batches and table pages.
buffer-service-max-storage-object-size
Default is 8388608, or 8MB. This value is the maximum size of a buffered managed object in bytes and represents the individual batch page size.
If buffer-service-processor-batch-size is increased or you are dealing with extremely wide result sets, then the default setting of 8MB for the buffer-service-max-storage-object-size may be too low. Inline LOBs also contribute to this size if the batch contains them. The sizing for buffer-service-max-storage-object-size is in terms of serialized size, which will be much closer to the raw data size than the Java memory footprint estimation used for buffer-service-max-reserve-kb.
buffer-service-max-storage-object-size should not be set too large relative to buffer-service-memory-buffer-space since it will reduce the performance of the memory buffer. The memory buffer supports only 1 concurrent writer for each buffer-service-max-storage-object-size of the buffer-service-memory-buffer-space.
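For example, if you double buffer-service-processor-batch-size, consider a proportional increase in buffer-service-max-storage-object-size. A sketch using the CLI syntax shown in Section 10.1, “Memory Management Considerations” (the values are illustrative only):
/subsystem=teiid:write-attribute(name=buffer-service-processor-batch-size,value=512)
/subsystem=teiid:write-attribute(name=buffer-service-max-storage-object-size,value=16777216)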

Note

JBoss Data Virtualization temporary tables (also used for internal materialization) can only support 2^31-1 rows per table.
buffer-service-memory-buffer-space
Default is -1. This controls the amount of on or off heap memory allocated as byte buffers for use by the JBoss Data Virtualization buffer manager. This setting defaults to -1, which automatically determines a setting based upon whether it is on or off heap and the value for buffer-service-max-reserve-kb.

Note

When left at the default setting the calculated memory buffer space will be approximately one quarter of the buffer-service-max-reserve-kb setting. If the memory buffer is on heap and the buffer-service-max-reserve-kb setting is automatically calculated, the memory buffer space will be subtracted out of the effective buffer-service-max-reserve-kb.
buffer-service-memory-buffer-off-heap
Default is false. Determines whether to take advantage of the buffer manager memory buffer to access system memory without allocating it to the heap. Setting buffer-service-memory-buffer-off-heap to true will allocate the JBoss Data Virtualization memory buffer off heap. Depending on whether your installation is dedicated to JBoss Data Virtualization and the amount of system memory available, this may be preferred over on-heap allocation.
The primary benefit of off-heap memory is additional memory usage for JBoss Data Virtualization without additional garbage collection tuning. This becomes especially important in situations where more than 32GB of memory is desired for the JVM. Note that when using off-heap allocation, the memory must still be available to the java process and that setting the value of buffer-service-memory-buffer-space too high may cause the JVM to swap rather than reside in memory. With large off-heap buffer sizes (greater than several gigabytes) you may also need to adjust JVM settings.
For Sun JVMs the relevant JVM settings are MaxDirectMemorySize and UseLargePages. For example adding:
-XX:MaxDirectMemorySize=12g -XX:+UseLargePages
to the JVM process arguments would allow for an effective allocation of an (approximately) 11GB JBoss Data Virtualization memory buffer (the buffer-service-memory-buffer-space property) accounting for any additional direct memory that may be needed by JBoss EAP or applications running within it.

10.3. Disk Usage Considerations

Although you can find information about all JBoss Data Virtualization settings using the Management CLI (see Section 11.1, “JBoss Data Virtualization Settings”), this section provides some additional information about those settings related to disk usage.
buffer-service-max-buffer-space
Default is 51200. For table page and result batches, the buffer manager will limit the number of files that are dedicated to a particular storage size. However, creation of Large Object (LOB) values (for example through SQL/XML) will typically create one file per LOB once the LOB exceeds the allowable in memory size of 8KB. In heavy usage scenarios, consider pointing the buffer directory at a partition that is routinely defragmented. By default, JBoss Data Virtualization will use up to 50GB of disk space. This is tracked in terms of the number of bytes written by JBoss Data Virtualization. For large data sets, you may need to increase the buffer-service-max-buffer-space property.

10.4. Threading Considerations

Although you can find information about all JBoss Data Virtualization settings using the Management CLI (see Section 11.1, “JBoss Data Virtualization Settings”), this section provides some additional information about those settings related to threading.
max-threads
Default is 64. The query engine has several settings that determine its thread utilization. max-threads sets the total number of threads available in the process pool for query engine work (such as processing plans, transaction control operations, and processing source queries).
You should consider increasing the maximum threads on systems with a large number of available processors and/or when it is necessary to issue non-transactional queries involving a large number of concurrent source requests.
max-active-plans
Default is 20. This value should always be smaller than max-threads. By default, thread-count-for-source-concurrency is calculated by (max-threads / max-active-plans) * 2 to determine the threads available for processing concurrent source requests for each user query. Increasing max-active-plans should be considered for workloads with a high number of long running queries and/or systems with a large number of available processors. If memory issues arise from increasing max-threads and max-active-plans, then consider decreasing the amount of heap held by the buffer manager or decreasing the processor-batch-size to limit the base number of memory rows consumed by each plan.
thread-count-for-source-concurrency
Default is 0. This value should always be smaller than max-threads. This property sets the number of concurrently executing source queries per user request. 0 indicates to use the default calculated value based on 2 * (max-threads / max-active-plans). Setting this to 1 forces serial execution of all source queries by the processing thread. Any number greater than 1 limits the maximum number of concurrently executing source requests accordingly.
Using the defaults, each user request would be allowed 6 concurrently executing source queries. If the default calculated value is not applicable to your workload, for example, if you have queries that generate more concurrent long running source queries, you should adjust this value.
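As an illustration only, the following CLI commands adjust the threading settings discussed above (the values shown are not recommendations; reload the server afterwards as described in Section 11.3, “Changing JBoss Data Virtualization Settings Using Management CLI”):
/subsystem=teiid:write-attribute(name=max-threads, value=128)
/subsystem=teiid:write-attribute(name=max-active-plans, value=40)
/subsystem=teiid:write-attribute(name=thread-count-for-source-concurrency, value=0)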
Also see Section 10.6, “Transport Considerations” for max-socket-threads.

10.5. Caching Considerations

Although you can find information about all JBoss Data Virtualization settings using the Management CLI (see Section 11.1, “JBoss Data Virtualization Settings”), this section provides some additional information about those settings related to caching.
JBoss Data Virtualization settings regarding cache tuning are divided amongst:
  • Resultset Cache Tuning
  • Prepared Plan Cache Tuning
Cache statistics can be obtained through the Management Console or AdminShell. The statistics can be used to help tune cache parameters and ensure a good hit ratio.
Plans are currently fully held in memory and may have a significant memory footprint. When making extensive use of prepared statements and/or virtual procedures, the size of the plan cache may be increased proportionally to the number of gigabytes intended for use by JBoss Data Virtualization.
While the result cache parameters control the cache result entries (such as max number, and eviction), the result batches themselves are accessed through the buffer manager. If the size of the result cache is increased, you may need to tune the buffer manager configuration to ensure there is enough buffer space.
Result set and prepared plan caches have their entries invalidated by data and metadata events. By default, these events are captured by running commands through JBoss Data Virtualization (see the Red Hat JBoss Data Virtualization Developer Guide for further customization). JBoss Data Virtualization stores compiled forms of update plans or trigger actions with the prepared plan so that if metadata changes, the changes may take effect immediately.
The default resultset-cache-max-staleness for resultset caching is 60 seconds to improve efficiency with rapidly changing sources. Consider decreasing this value to make the resultset cache more consistent with the underlying data. Even with a setting of 0, full transactional consistency is not guaranteed.
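For example, the staleness window can be lowered with the Management CLI, using the write-attribute syntax from Section 11.3, “Changing JBoss Data Virtualization Settings Using Management CLI” (a sketch; 10 is an illustrative value):
/subsystem=teiid:write-attribute(name=resultset-cache-max-staleness, value=10)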

Warning

Disabling or constraining these caches will lead to poor performance.

10.6. Transport Considerations

Although you can find information about all JBoss Data Virtualization settings using the Management CLI (see Section 11.1, “JBoss Data Virtualization Settings”), this section provides some additional information about those settings related to transports.
JBoss Data Virtualization provides three transports by default: odbc, jdbc and embedded. Transport settings (such as those listed below) are configured for each.
max-socket-threads
Default is 0. Determines the maximum number of threads dedicated to the initial request processing. Zero indicates to use the system default of maximum available processors. Socket threads handle NIO non-blocking IO operations as well as directly servicing any operation that can run without blocking. For longer running operations, the socket threads queue work with the query engine. (The query engine has two properties that determine its thread utilization: max-threads and max-active-plans.)
All JDBC/ODBC socket operations are non-blocking, so setting the number of max-socket-threads higher than the maximum effective parallelism of the machine should not result in greater performance.
input-buffer-size
Default is 0 which will use the system default. Before adjusting input-buffer-size for any of the transports, keep in mind that each client will create a new socket connection. Increasing this value should only be done if the number of clients is constrained.
output-buffer-size
Default is 0 which will use the system default. Before adjusting output-buffer-size for any of the transports, keep in mind that each client will create a new socket connection. Increasing this value should only be done if the number of clients is constrained.
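Transport-level settings such as these are written per transport, using the same syntax as in Section 11.4, “Managing Transport and SSL Settings Using Management CLI”. A sketch (the values are illustrative only):
/subsystem=teiid/transport=jdbc:write-attribute(name=max-socket-threads, value=4)
/subsystem=teiid/transport=jdbc:write-attribute(name=input-buffer-size, value=0)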
JDBC clients may need to adjust low-level transport values, as well as SSL client connection properties, via a teiid-client-settings.properties file placed in the client application's classpath. (An example file can be found within the EAP_HOME/modules/system/layers/base/org/jboss/teiid/client/main/teiid-client-VERSION.jar file.)

Note

Typical installations will not need to have any of these settings adjusted.

10.7. Large Objects (LOBs)

Large Objects (LOBs) are data types designed to hold large amounts of data. The three main large object runtime data types used by JBoss are:
  1. Binary (BLOB)
    • Contains multimedia objects such as images and audio.
  2. Character (CLOB)
    • Contains ASCII characters.
  3. Extensible Markup Language (XML)
    • Contains textual data.
LOBs and JBoss

The JBoss Data Virtualization Connector API returns a reference to the LOB if allowed by the JBoss Data Virtualization server. The JBoss Data Virtualization server or JDBC driver can then access the data via a stream rather than retrieving the data all at once. This is useful for several reasons:

  • Reduces memory usage when returning the result set to the user.
  • Improves performance by passing less data in the result set.
  • Enables access to LOBs when required rather than assuming that users will always use the LOB data.
  • Enables handling of arbitrarily large data values within a fixed JBoss Data Virtualization memory usage.
These benefits are achieved if the Connector itself does not materialize an entire LOB all at once. For example, the JDBC API supports a streaming interface for BLOB and CLOB data.
Source LOB values are typically accessed by reference, rather than having the value copied to a temporary location. Care must be taken to ensure that source LOBs are returned in a memory-safe manner.
LOBs are broken into pieces when being created and streamed. The size of each piece when fetched by the client can be configured.
Cached LOBs will be copied rather than relying on the reference to the source LOB.
Temporary LOBs created by Teiid will be cleaned up when the result set or statement is closed. To rely on implicit garbage collection based cleanup instead of statement close, the Teiid session variable clean_lobs_onclose can be set to false (by issuing the query "SELECT teiid_session_set('clean_lobs_onclose', false)" - which can be done, for example, via the new connection SQL in the datasource definition). This can be used for local client scenarios that relied on the implicit behavior, such as Designer generated REST VDBs.

10.8. LOB Considerations

Although you can find information about all JBoss Data Virtualization settings using the Management CLI (see Section 11.1, “JBoss Data Virtualization Settings”), this section provides some additional information about those settings related to large objects (LOBs).
lob-chunk-size-in-kb
LOBs and XML documents are streamed from the JBoss Data Virtualization server to the JDBC API. Normally, these values are not materialized in the server memory, avoiding potential out-of-memory issues. When using style sheets, or XQuery, whole XML documents must be materialized on the server. Even when using the XMLQuery or XMLTable functions and document projection is applied, memory issues may occur for large documents.
LOBs are broken into pieces when being created and streamed. The maximum size of each piece when fetched by the client can be configured with the lob-chunk-size-in-kb property.
The default value is 100. When dealing with extremely large LOBs, you may consider increasing lob-chunk-size-in-kb to decrease the amount of round-trips to stream the result. Setting the value too high may cause the server or client to have memory issues.
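For example, to raise the chunk size when streaming very large LOBs, using the write-attribute syntax from Section 11.3, “Changing JBoss Data Virtualization Settings Using Management CLI” (a sketch; 512 is an illustrative value):
/subsystem=teiid:write-attribute(name=lob-chunk-size-in-kb, value=512)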
Source LOB values are typically accessed by reference, rather than having the value copied to a temporary location. Thus care must be taken to ensure that source LOBs are returned in a memory-safe manner. This caution applies mainly to source driver vendors, who should avoid consuming VM memory for LOBs.

10.9. Other Performance Tuning Considerations

Although you can find information about all JBoss Data Virtualization settings using the Management CLI (see Section 11.1, “JBoss Data Virtualization Settings”), this section provides some additional information about the max-source-rows setting.
max-source-rows
When using JBoss Data Virtualization in a development environment, you may consider setting max-source-rows to a small value (for example, 10000) to prevent large amounts of data from being pulled from sources. Leaving the exception-on-max-source-rows property set to true will alert the developer through an exception that an attempt was made to retrieve more than the specified number of rows.

Part VI. Reference

Chapter 11. General Configuration

11.1. JBoss Data Virtualization Settings

The following types of JBoss Data Virtualization settings are available for viewing and modification:
  • Buffer service settings
  • Cache settings (including result set and prepared plan cache settings)
  • Runtime engine deployer settings
  • Authorization validator and policy decider settings
  • Transport and SSL settings
  • Translator settings
To view all of the available settings for JBoss Data Virtualization, run the following command within the Management CLI:
/subsystem=teiid:read-resource-description

11.2. Viewing JBoss Data Virtualization Settings Using Management CLI

To view all of the current settings for JBoss Data Virtualization, run the following command within the Management CLI:
/subsystem=teiid:read-resource
To view a particular JBoss Data Virtualization setting, run the following command within the Management CLI:
/subsystem=teiid:read-attribute(name=SETTING_NAME)
For example:
/subsystem=teiid:read-attribute(name=max-active-plans)

11.3. Changing JBoss Data Virtualization Settings Using Management CLI

To edit a particular JBoss Data Virtualization setting, run the following command within the Management CLI:
/subsystem=teiid:write-attribute(name=SETTING_NAME, value=VALUE)
For example:
/subsystem=teiid:write-attribute(name=max-active-plans, value=50)
For more information about the Management CLI, refer to the Red Hat JBoss Enterprise Application Platform Administration and Configuration Guide.

Note

After modifying any of these settings, you will be prompted to reload the server. Changes to these settings will not take effect until the server is restarted. You can reload the server by running the reload command from within the Management CLI. After the server reloads, you may have to reconnect by running the connect command to continue working within the Management CLI.
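For example, after changing a setting you can run the following from within the Management CLI:
reload
connect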

11.4. Managing Transport and SSL Settings Using Management CLI

To manage JBoss Data Virtualization transport settings, you can use the same commands as those used for the base JBoss Data Virtualization settings, specifying a particular transport in the command. For example:
/subsystem=teiid/transport=TRANSPORT_NAME:read-resource
Available transport names are listed under transport when you run the following command to output current JBoss Data Virtualization settings:
/subsystem=teiid:read-resource

11.5. Managing Translator Settings Using Management CLI

To manage JBoss Data Virtualization translator settings, you can use the same commands as those used for the base JBoss Data Virtualization settings, specifying a particular translator in the command. For example:
/subsystem=teiid/translator=TRANSLATOR_NAME:read-resource
Available translator names are listed under translator when you run the following command to output current JBoss Data Virtualization settings:
/subsystem=teiid:read-resource

11.6. Transport Security Authentication Modes

The following authentication modes are available:
anonymous
No certificates are exchanged. Settings are not needed for the keystore and truststore properties. The client must have org.teiid.ssl.allowAnon set to true (the default) to connect to an anonymous server. Communications are encrypted using the TLS_DH_anon_WITH_AES_128_CBC_SHA SSL cipher suite. This is suitable for most secure intranets.
1-way
Authenticates the server to the client. The server presents a certificate which is signed by the private key stored in the server's keystore. The server's corresponding public key must be in the client's truststore.
2-way
Mutual client and server authentication. The server presents a certificate which is signed by the private key stored in the server's keystore. The server's corresponding public key must be in the client's truststore. Additionally, the client presents a certificate signed by its private key stored in the client's keystore. The client's corresponding public key must be in the server's truststore.

Note

You can use keytool to generate encryption keys; however, you should first consider your local requirements for managing public key cryptography.
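For example, a self-signed key pair for the server keystore can be generated with keytool. This is a sketch for testing only; the alias, password, and file name are placeholders, and production deployments should follow your organization's PKI practices:
keytool -genkeypair -alias teiid -keyalg RSA -keysize 2048 -validity 365 -keystore server.keystore -storepass changeit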

11.7. Managing Core Configuration Using JBoss Management Console

  1. Log in to the JBoss Management Console.
  2. Select the Profile tab.
  3. From the navigation tree, select Subsystems → Teiid.
  4. Select one of the following options:
    • Query Engine - From here you can view and configure core query engine properties.
    • Translators - From here you can view, add and remove translators.
    • Transports - From here you can view, add and remove transports.
  5. View and modify configuration as necessary.

Note

If you need help at any stage, select the Need Help? link.

Warning

Some properties require you to restart the server before they will take effect.

11.8. Ports Used by Red Hat JBoss Data Virtualization

Red Hat JBoss Data Virtualization inherits the ports used by JBoss Enterprise Application Platform. You can find a full list here: https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6/html/Administration_and_Configuration_Guide/Network_Ports_Used_By_JBoss_Enterprise_Application_Platform_62.html
In addition to these, ports 31000 and 35432 must also be kept open.

11.9. Change the Default JDBC Port Using Management Console

  1. Log in to the Management Console.
  2. Navigate to the Socket Binding panel in the Management Console.

      • Standalone Mode

        Select the Profile tab from the top-right of the console.
      • Domain Mode

        1. Select the Profiles tab from the top-right of the console.
        2. Select the appropriate profile from the drop-down box in the top left.
        3. Expand the Subsystems menu on the left of the console.
    1. Select General Configuration → Socket Binding from the menu on the left of the console.
  3. Modify the port number

    1. Select the teiid-jdbc configuration.
    2. Select the Edit button.
    3. Set the Port to the new port number.
    4. Select Save.
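The same change can be made from the Management CLI. A sketch for standalone mode (standard-sockets is the default socket binding group name and may differ in your configuration, and 31001 is an illustrative port; reload the server afterwards):
/socket-binding-group=standard-sockets/socket-binding=teiid-jdbc:write-attribute(name=port, value=31001)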

11.10. System Properties

Some behavior can be configured via system properties, rather than configuration files. A typical place to set system properties for JBoss Data Virtualization is the EAP_HOME/bin/standalone.conf file. A property setting has the format -Dproperty=value.
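For example, one of the properties listed below could be set by appending it to JAVA_OPTS in standalone.conf (a sketch; the property shown is illustrative only):
JAVA_OPTS="$JAVA_OPTS -Dorg.teiid.allowNanInfinity=true"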

Table 11.1. System Properties

Setting Description Default Value
org.teiid.allowNanInfinity Set to true to allow numeric functions to return NaN (Not A Number) and +-Infinity. Note that these values are not covered by the SQL specification. Defaults to false.
org.teiid.useValueCache Set to true to enable the canonical value cache. Value caching is used dynamically when buffer memory is running low to reuse identical values, reducing the memory consumed by JBoss Data Virtualization. However, there is a computation cost associated with the cache lookup, so enabling this setting is not appropriate for installations handling large volumes of dissimilar data. Defaults to false.
org.teiid.ansiQuotedIdentifiers Set to false to emulate prior behavior of treating double quoted values without leading identifier parts as string literals, which is not expected by the SQL specification. Defaults to true.
org.teiid.subqueryUnnestDefault Set to true to aggressively unnest subquery IN and EXISTS predicates. If possible, the predicate will be unnested to a traditional join and will be eligible for dependent join planning. If a traditional join is not possible (such as with NOT IN) a merge join version of the semijoin or antijoin will be considered based upon the costing information available. Defaults to false.
org.teiid.ODBCPacketSize Target size in bytes of the ODBC results buffer. This is not a hard maximum, LOBS and wide rows may use larger buffers. Defaults to 307200.
org.teiid.decimalAsDouble Set to true to parse exact fixed point literals (for example, 1.0) as double values rather than as decimal/BigDecimal values and to return a double value from the AVG function for integral values in the same way as previous versions. Defaults to false.
org.teiid.comparableLobs Set to true to allow BLOB and CLOB column values to be comparable in JBoss Data Virtualization. Source type metadata will determine if the comparison can be pushed down. Defaults to false.
org.teiid.comparableObject Set to true to allow object column values to be comparable in JBoss Data Virtualization. Source type metadata will determine if the comparison can be pushed down. The object instances are expected to correctly implement java.lang.Comparable.compareTo. If the instance object is not Comparable, then ClassCastExceptions may be thrown. Defaults to false.
org.teiid.padSpace Set to true to compare strings as if PAD SPACE collation is being used. This means strings will be right padded to the same length for comparison. If this property is set, it is not necessary to use the trimStrings translator option. Defaults to false.
org.teiid.collationLocale Set to a Java locale string language[_country[_variant]], where language, country, and variant are two letter codes - see java.util.Locale for more on valid codes. Note that even if org.teiid.comparableLobs is set, CLOB values will not be compared using the locale collator. Not set by default, which means that Java's natural (UTF-16) string comparison will be used.
org.teiid.clientVdbLoadTimeoutMillis The default amount of time a client (currently only local clients) will wait to make a connection to an active VDB before throwing an exception. Clients may override this setting via the waitForLoad connection property. Defaults to 5 minutes.
org.teiid.enDateNames Set to true to use English month and day names for the system function dayName and monthName, rather than returning names from the Java default locale. Prior to this release, dayName and monthName always returned English names. Defaults to false.
org.teiid.pushdownDefaultNullOrder Set to true to mimic prior release behavior of pushing the default null order of nulls low if the source has a different default null order and supports explicit null ordering. Defaults to false.
org.teiid.implicitMultiSourceJoin Set to false to disable prior release behavior of implicitly partitioning joins between multi-source tables. When set to false an explicit predicate such as tbl1.source_name = tbl2.source_name is required to partition the results of the join. Defaults to true.
org.teiid.joinPrefetchBatches Sets the number of batches that can be pre-fetched/buffered for join processing that may otherwise be streamed from the source. Consider increasing when you have slow ALREADY_SORTED sources participating in a non-dependent join that return significantly more batches. Note however that increasing this value can lead to an increase in disk utilization to perform the buffering. Defaults to 10.
org.teiid.maxStringLength Sets the nominal maximum length of strings in JBoss Data Virtualization. Most operations will truncate strings that are larger than this value. Setting this value can also adjust the maximum size of LOB bytes held in memory. Note that sources may not appropriately handle string values that are larger than what the source supports.

Warning

Strings are held entirely in memory. Do not set this value too high as you may experience out of memory errors.
Defaults to 4000.
org.teiid.calendarTimestampDiff Set to false to use the timestampdiff behavior from previous versions. Note that using the old behavior can result in differing results between pushed and non-pushed versions of timestampdiff for intervals greater than seconds. This is because sources use the date part and not approximate interval differences. Defaults to true.
The following properties are provided for PostgreSQL compatibility.

Table 11.2. System Properties for PostgreSQL Compatibility

Setting Description Default Value
org.teiid.iso8601Week Set to true to use ISO 8601 rules for week calculations regardless of the locale. When set to true, the dayOfWeek function will begin with 1 for MONDAY rather than SUNDAY, and the week function will require that week 1 of a year contains the year's first Thursday. Defaults to false.
org.teiid.backslashDefaultMatchEscape Set to true to use '\' as the default escape character for LIKE and SIMILAR TO predicates when no escape is specified. Otherwise JBoss Data Virtualization assumes the SQL specification compliant behavior of treating each non-wildcard character as an exact match character. Defaults to false.
org.teiid.honorDeclareFetchTxn When set to false, the wrapping begin/commit of a UseDeclareFetch cursor will be ignored as Red Hat JBoss Data Virtualization does not require a transaction. Defaults to false.

Note

These properties for PostgreSQL compatibility affect JBoss Data Virtualization globally, not just through the ODBC transport.

11.11. Teiid Management CLI

The JBoss EAP Management CLI is a command-line based administrative and monitoring tool for Teiid. AdminShell provides a binding into the Groovy scripting language and higher-level methods that are often needed when interacting with Teiid. It is still useful to know the underlying CLI commands in many circumstances. The following is a series of useful CLI commands for administering a Teiid server.
VDB Operations
deploy adminapi-test-vdb.xml
undeploy adminapi-test-vdb.xml

/subsystem=teiid:restart-vdb(vdb-name=AdminAPITestVDB, vdb-version=1, model-names=TestModel)

/subsystem=teiid:list-vdbs()
/subsystem=teiid:get-vdb(vdb-name=AdminAPITestVDB,vdb-version=1)
/subsystem=teiid:change-vdb-connection-type(vdb-name=AdminAPITestVDB, vdb-version=1,connection-type=ANY)

/subsystem=teiid:add-data-role(vdb-name=AdminAPITestVDB, vdb-version=1, data-role=TestDataRole, mapped-role=test)
/subsystem=teiid:remove-data-role(vdb-name=AdminAPITestVDB, vdb-version=1, data-role=TestDataRole, mapped-role=test)
Source Operations
/subsystem=teiid:add-source(vdb-name=AdminAPITestVDB, vdb-version=1, source-name=text-connector-test, translator-name=file, model-name=TestModel, ds-name=java:/test-file)
/subsystem=teiid:remove-source(vdb-name=AdminAPITestVDB, vdb-version=1, source-name=text-connector-test, model-name=TestModel)
/subsystem=teiid:update-source(vdb-name=AdminAPITestVDB, vdb-version=1, source-name=text-connector-test, translator-name=file, ds-name=java:/marketdata-file)
Translator Operations
/subsystem=teiid:list-translators()
/subsystem=teiid:get-translator(translator-name=file)
/subsystem=teiid:read-translator-properties(translator-name=file,type=OVERRIDE)
/subsystem=teiid:read-rar-description(rar-name=file)
Runtime Operations
/subsystem=teiid:workerpool-statistics()

/subsystem=teiid:cache-types()
/subsystem=teiid:clear-cache(cache-type=PREPARED_PLAN_CACHE)
/subsystem=teiid:clear-cache(cache-type=QUERY_SERVICE_RESULT_SET_CACHE)
/subsystem=teiid:clear-cache(cache-type=PREPARED_PLAN_CACHE, vdb-name=AdminAPITestVDB,vdb-version=1)
/subsystem=teiid:clear-cache(cache-type=QUERY_SERVICE_RESULT_SET_CACHE, vdb-name=AdminAPITestVDB,vdb-version=1)
/subsystem=teiid:cache-statistics(cache-type=PREPARED_PLAN_CACHE)
/subsystem=teiid:cache-statistics(cache-type=QUERY_SERVICE_RESULT_SET_CACHE)

/subsystem=teiid:engine-statistics()

/subsystem=teiid:list-sessions()
/subsystem=teiid:terminate-session(session=sessionid)

/subsystem=teiid:list-requests()
/subsystem=teiid:cancel-request(session=sessionId, execution-id=1)
/subsystem=teiid:list-requests-per-session(session=sessionId)
/subsystem=teiid:list-transactions()

/subsystem=teiid:mark-datasource-available(ds-name=java:/accounts-ds)

/subsystem=teiid:get-query-plan(session=sessionid,execution-id=1)

Chapter 12. Directory Structure

12.1. Directory Structure

The following shows the contents of the Red Hat JBoss Data Virtualization deployment within a Red Hat JBoss EAP instance:
/bin
   /scripts
/docs
   /teiid
       /datasources
       /schema
       /examples
/domain
   /configuration
       application-users.properties
       application-roles.properties
/dataVirtualization       
/modules
   /system
      /layers
         /dv
           /dv modules
         /base
           /eap modules
/standalone
   /configuration
          standalone.xml
          application-users.properties
          application-roles.properties
bin/scripts
This directory contains installation and utility CLI scripts.
docs/teiid
This directory contains documents, examples, sample data source XML fragments and schema files.
standalone/configuration
  • standalone.xml is the master configuration file for Red Hat JBoss Data Virtualization. This file manages configuration for the JBoss Data Virtualization subsystem in addition to the standard Red Hat JBoss EAP web profile subsystems.
  • application-users.properties and application-roles.properties define the allowed users and their defined roles using the default security domain. Edit these files to add users. If you want to use a different security domain, change the security domain details in the main configuration file.
domain/configuration
  • application-users.properties and application-roles.properties define the allowed users and their defined roles in Red Hat JBoss Data Virtualization using the default security domain. Edit these files to add users. If you want to use a different security domain, change the security domain details in main configuration file.
/modules/system/layers/dv/org/jboss/teiid/*
This directory defines Red Hat JBoss Data Virtualization modules for JBoss EAP.
/modules/system/layers/dv/org/jboss/teiid/client
This directory contains JBoss Data Virtualization client libraries. It has the Red Hat JBoss Data Virtualization JDBC driver JAR file, teiid-[VERSION].Final-jdbc.jar, and also contains teiid-hibernate-dialect-[VERSION].Final.jar that contains the JBoss Data Virtualization Hibernate dialect.
{standalone,domain}/tmp/teiid
This directory contains temporary files created by Red Hat JBoss Data Virtualization. These are mostly created by the buffer manager. These files are not needed across a VM restart. Creation of Red Hat JBoss Data Virtualization LOB values (for example through SQL/XML) will typically create one file per LOB once it exceeds the allowable in memory size of 8KB. In heavy usage scenarios, consider pointing the buffer directory at a partition that is routinely defragmented.
{standalone,domain}/data/teiid-data
This directory contains cached VDB metadata files. Do not edit them manually.
dataVirtualization
This directory contains JDBC drivers, adminshell, the ModeShape VDB and some WARs.

Appendix A. Revision History

Revision History
Revision 6.30-13    Wed Oct 12 2016    David Le Sage
Updates for 6.3.
Revision 6.2.0-15    Wed Sep 2 2015    David Le Sage
Updates for 6.2.

Legal Notice

Copyright © 2015 Red Hat, Inc.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat Software Collections is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.