Chapter 5. Storing Data
5.1. Data Browser
5.1.1. Overview
The Data Browser section of the App Studio allows a developer to:
- Graphically and interactively view the data associated with their app.
- View, create and delete collections.
- Modify data in a collection.
5.1.2. Using the data browser
5.1.2.1. Viewing/Adding Collections
The collections associated with an app can be viewed by selecting the Data Browser tab in the Cloud Management section of the Studio.
This screen has two controls, located at the top of the collection list. These buttons allow you to:
- Add a collection.
- Refresh the list of collections.
Clicking on the button to add a collection prompts you to enter the collection name. Click on the Create button to create the collection.
5.1.2.2. Viewing Data In A Collection
To view the data stored in a collection, simply click on one of the collections listed in the Data Browser. This view shows the data associated with the collection.
At the top of the screen are the main listing functions. These buttons allow you to:
- Switch Collection. Selecting this option presents you with a list of collections for the app. Click on a collection to list the data in that collection.
- Add an entry to the collection.
- Import and export data (documented later).
5.1.2.2.1. Sorting Data
To sort the data by a specific field, simply click on the field name at the top of the list. Sorting alternates between ascending and descending order.
5.1.2.2.2. Filtering Data
To filter the displayed data, click on the "Filter" button at the top of the Data Browser screen. Clicking this button displays the filtering options. These options allow you to filter the displayed data by one or more fields. Filtering supports the following JSON data types:
- String: filters text-based fields.
- Number: filters any numerical value.
- Boolean: accepts true and false values.
You can filter inside nested objects using the '.' character. For example, using author.name as the filter key filters by the value of the name property of author objects, in a collection of documents with the following structure:
{
  "title": "",
  "author": {
    "name": "John"
  }
}
5.1.2.3. Editing Data
Editing data in the Data Browser can be done using either the Inline or the Advanced Editor:
- The Inline Editor is used to edit simple data in a collection (for example, changing the text in a single field).
- The Advanced Editor is used to edit more complex data types. This can be done using an interactive Dynamic Editor or a Raw JSON editor.
5.1.2.3.1. Editing Using the Inline Editor
To edit an entry using the Inline Editor, select the Edit option to the right of a data entry and select Edit Inline. The option changes to a green tick icon and a black arrow icon.
When a field is too complex to edit in the Inline Editor, the "Advanced Editor Only" text is shown. This field is editable only in the Advanced Editor.
When finished updating the entry, select the green tick button to commit the changes to the data or the black arrow button to cancel any changes made.
5.1.2.3.2. Editing Using the Advanced Editor
The advanced editor is used to edit more complex data types (for example, where a field is composed of multiple nested fields).
To open the advanced editor, select the Edit option to the right of a data entry and select Advanced Editor.
The advanced editor has two modes:
- A Dynamic Editor to interactively add/edit fields.
- A Raw JSON Editor to directly edit the data in JSON format.
5.1.2.3.2.1. Editing Using the Dynamic Editor
The Dynamic Editor is an interactive editor for JSON data. It presents a structured view of each field to allow adding/editing complex data types.
The actions menu provides all the functionality needed to manage complex fields for the entry.
The options available here are:
- Type: The type option changes the data type of the field to an array, JSON object or string. It is also possible to set the field to auto, where the data type is automatically selected from the data entered.
- Sort: The sort option sorts the sub-fields of a complex type in ascending or descending order.
- Append: The append option adds a field after the object selected.
- Insert: The insert option inserts a field before the object selected.
- Duplicate: The duplicate option copies the selected object and appends the copy immediately after it.
- Remove: The remove option deletes the field from the entry.
5.1.2.3.2.2. Editing Using the Raw JSON Editor
The Raw Editor allows for editing the JSON representation of the data. It is important to ensure that the data entered is in valid JSON format. The JSON data can be displayed in either formatted or compact form.
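For illustration, the same document shown in both forms; the field values here are hypothetical:
Formatted:
{
  "name": "plums",
  "onSale": true
}
Compact:
{"name":"plums","onSale":true}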
5.1.3. Exporting and Importing Data
5.1.3.1. Exporting Data
The Export function built into the Data Browser interface is intended for testing and review purposes only. To export your data collections from a Cloud App or service, use FHC.
Data is exported from the Data Browser by using the 'Export' dropdown menu. Three formats are available:
- JSON
- CSV
- BSON (Mongo Dump)
After clicking on the export button for your chosen format, a .zip file is downloaded. This file contains your data.
To export all collections contained within your app, use the 'Export' dropdown in the toolbar on the collection listing screen. To export an individual collection’s data, use the 'Export' dropdown from within that collection’s data listing.
Exporting data should give you a good idea of the formats expected for import. The schema for each format is documented in more detail below.
5.1.3.2. Importing Data
The Import function built into the Data Browser interface is intended for testing and review purposes only. To import data collections into a Cloud App or service, use FHC.
You can import data into the data browser by clicking the 'Import' button on the collection listing screen. Supported formats are:
- JSON
- CSV
- BSON (Mongo Dump)
- ZIP archives containing any of the previous 3 formats
Each file corresponds to a collection to be imported. The name of the file determines the collection into which your data is imported.
If a collection does not already exist, we will create it. If the collection already exists, imported documents are appended to the existing contents.
5.1.3.2.1. Importing Formats
Now, we will document the expected formatting of the different types of import. In each case, we’re importing a fictional data set of fruit. The collection name once imported will be fruit.
Each example contains each type supported for import: string, number, and boolean. Remember, complex object types are not supported.
5.1.3.2.2. Importing JSON
JSON formatted imports are just JSON arrays of documents.
fruit.json:
[
  {
    "name": "plums",
    "price": 2.99,
    "quantity": 45,
    "onSale": true,
    "_id": "53767254db8fc14837000002"
  },
  {
    "name": "pears",
    "price": 2.5,
    "quantity": 20,
    "onSale": true,
    "_id": "53767254db8fc14837000003"
  }
]
5.1.3.2.3. Importing CSV
To import CSV, it's important to keep in mind the separator, delimiter and newline settings used by the platform:
- Separator: , (comma)
- Delimiter: " (double quote)
- Newline: \n
Here’s a sample file.
fruit.csv:
name,price,quantity,onSale,_id
"plums",2.99,45,true,53767254db8fc14837000002
"pears",2.5,20,true,53767254db8fc14837000003
5.1.3.2.4. Importing BSON or MongoDump Output
Running the mongodump tool is a convenient way to export the data of an existing MongoDB database. This tool creates a directory called dump, containing a subfolder for each database, which in turn contains the dumped collections.
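For example, a minimal sketch of dumping a local database, assuming the mongodump tool is installed; the database name mydb is a hypothetical placeholder:
mongodump --db mydb
# creates dump/mydb/<collection>.bson and dump/mydb/<collection>.metadata.json for each collection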
To import these collections into an RHMAP database, simply take the output .bson files and import these directly. The directory structure and the output metadata .json files are not needed. Since BSON is a binary format, an example is not shown here.
We can also view the data inside a .bson file using the bsondump tool supplied with any installation of MongoDB. For example, running bsondump fruit.bson produces:
{ "name" : "plums", "price" : 2.99, "quantity" : 45, "onSale" : true, "_id" : ObjectId( "53767254db8fc14837000002" ) }
{ "name" : "pears", "price" : 2.5, "quantity" : 20, "onSale" : true, "_id" : ObjectId( "53767254db8fc14837000003" ) }
2 objects found
5.1.4. Upgrading the Database
If you need to perform database operations beyond those provided by the $fh.db API, you can access the database directly using the MongoDB driver. To enable direct access to the database, it first has to be upgraded - migrated to a dedicated instance.
To upgrade your app’s database, click the Upgrade Database button in the top right corner of the Data Browser screen.
The following steps are performed by the platform during the upgrade:
- Your app is stopped.
- A new database is created specifically for the app.
- The environment variable FH_MONGODB_CONNURL is set for your app, containing the database connection string, which can be passed to the MongoDB driver (see the sketch at the end of this section).
In addition, if the database already contained data:
- Data is migrated from the old database to the new one.
- Data is validated in the new database.
- If everything has succeeded, data is removed from the old database.
Note: you may also need to update the contents of your application.js and your package.json files. If this is the case, you will be informed on the migrate screen.
After all data migration steps have completed, you have to redeploy the app.
After the database upgrade is complete, new collections with the prefix fhsync_ are created to enable sync functionality. Red Hat recommends that you keep these collections, even if you do not intend to use sync functionality.
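With the upgrade complete, the app can open a direct connection. The following is a minimal Node.js sketch of connecting with the MongoDB driver, assuming the mongodb module is available to the Cloud App; the collection name fruit is a hypothetical placeholder:
var MongoClient = require('mongodb').MongoClient;

// FH_MONGODB_CONNURL is set by the platform after the database upgrade.
MongoClient.connect(process.env.FH_MONGODB_CONNURL, function (err, db) {
  if (err) throw err;
  // Query a collection directly; 'fruit' is a hypothetical collection name.
  db.collection('fruit').find({ onSale: true }).toArray(function (err, docs) {
    if (err) throw err;
    console.log(docs);
    db.close();
  });
});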
5.2. Exporting Application Data
Overview
It is often necessary to export data from a database of a Cloud App for purposes of creating backups, testing or setting up other hosted databases with existing data. fhc allows you to export all data from a hosted database associated with a Cloud App or service.
For a full reference, run fhc help appdata export to see a list of available commands. Run fhc help appdata export <command> for a reference of a particular command.
Requirements
- RHMAP version 3.11.0 or later
- fhc version 2.9.0 or later
5.2.1. Exported Data Format
All collections associated with a Cloud App or Service are exported into a single tar archive, comprising each of the individual collections as a compressed binary JSON (BSON) file. The name of each BSON file matches the name of the collection from which it originates.
Example:
export.tar
|__ <COLLECTION_1_NAME>.bson.gz
|__ <COLLECTION_2_NAME>.bson.gz
The BSON files are compatible with standard MongoDB backup and restore tools. See Back Up and Restore with MongoDB Tools for more information.
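For example, a minimal sketch of restoring one exported collection into a local MongoDB instance, assuming the mongorestore tool is installed; the target database name mydb is a hypothetical placeholder:
tar -xf export.tar
gunzip <COLLECTION_1_NAME>.bson.gz
# restore the collection into the local database 'mydb'
mongorestore --db mydb --collection <COLLECTION_1_NAME> <COLLECTION_1_NAME>.bson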
5.2.2. Exporting Application Data
The process of exporting application data involves three main steps:
5.2.2.1. Starting a New Export Job
To start a new export job, enter the following command:
fhc appdata export start --appId=<APP_ID> --envId=<ENV_ID> [--stopApp=<y/n>]
- APP_ID - ID of the Cloud App or service
- ENV_ID - ID of the deployment environment where the app is running
After you run the command, you receive a prompt asking whether you want to stop the app during the data export.
- Choosing n leaves the Cloud App running during the export job. If new data is added to any of the collections during the export, it will not be included in the current export job.
- Choosing y stops the Cloud App once the export job starts. You have to restart the app manually once the export job is finished.
To skip the prompt after running the command (for example, when scripting fhc), include the optional --stopApp flag in the command.
If another export job is already running for the same app in the same environment, the start command will exit without performing any action.
5.2.2.2. Querying the Status of an Export Job
Once the job starts, the command line tool prints out the following status command with all the relevant fields already filled in.
fhc appdata export status --appId=<APP_ID> --envId=<ENV_ID> --jobId=<JOB_ID> [--interval=<MILLISECONDS>]
To query the status of an export job, copy and paste this command into the shell.
To keep the command running and periodically reporting the job status, include the optional --interval flag and specify the interval at which the status of the job is to be queried.
Once the job is finished, the status command returns the job status as complete.
5.2.2.3. Downloading the Exported Data
To download your exported data once the job is finished, run the following download command:
fhc appdata export download --appId=<APP_ID> --envId=<ENV_ID> --jobId=<JOB_ID> --file=<FILENAME>
- APP_ID - ID of the Cloud App or Service
- ENV_ID - ID of the deployment environment where the app is running
- JOB_ID - ID of the export job and the corresponding export file
- FILENAME - path to the file in which the exported data is to be stored
If a file already exists at the specified location, the download command exits without performing any action.
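Putting the three steps together, the following is a minimal sketch of a scripted export; the appId, envId, and jobId values are hypothetical placeholders:
fhc appdata export start --appId=abc123 --envId=dev --stopApp=n
# the start command prints a pre-filled status command containing the job ID
fhc appdata export status --appId=abc123 --envId=dev --jobId=job001 --interval=5000
# once the status is reported as complete, download the archive
fhc appdata export download --appId=abc123 --envId=dev --jobId=job001 --file=export.tar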
5.3. Importing Application Data
Overview
After exporting application data with fhc appdata export, you can use fhc appdata import to import the data from the file system to a hosted database associated with a Cloud App or service.
For a full reference, run fhc help appdata import to see a list of available commands. Run fhc help appdata import <command> for a reference of a particular command.
Requirements
- RHMAP version 3.12.0 or later
- fhc version 2.10.0 or later
- The target application must have an upgraded database. See Upgrading the Database for more information.
- The format of the file to be imported must be the same as created by fhc appdata export. See Exported Data Format for more details.
5.3.1. Importing Application Data
The process of importing application data involves two main steps:
5.3.1.1. Starting a New Import Job
To start a new import job, enter the following command:
fhc appdata import start --appId=<APP_ID> --envId=<ENV_ID> --path=<FILE_PATH>
- APP_ID - ID of the Cloud App or service
- ENV_ID - ID of the deployment environment where the app is running
- FILE_PATH - Path to the file to be imported
Upon execution, the command starts uploading the provided file, and keeps running without printing any messages until the upload is finished. Once the upload is finished, the command exits and the import job starts.
If another import job is already running for the same app in the same environment, the start command exits without performing any action.
5.3.1.2. Querying the Status of an Import Job
Once the job starts, the command line tool prints out the following status command with all the relevant fields already filled in.
fhc appdata import status --appId=<APP_ID> --envId=<ENV_ID> --jobId=<JOB_ID> [--interval=<MILLISECONDS>]
To query the status of an import job, copy and paste this command into the shell.
To keep the command running and periodically reporting the job status, include the optional --interval flag and specify the interval at which the status of the job is to be queried.
Once the job is finished, the status command returns the job status as complete.
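Putting the two steps together, the following is a minimal sketch of a scripted import; the appId, envId, and jobId values are hypothetical placeholders:
fhc appdata import start --appId=abc123 --envId=dev --path=export.tar
# the start command prints a pre-filled status command containing the job ID
fhc appdata import status --appId=abc123 --envId=dev --jobId=job002 --interval=5000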
