Chapter 5. Storing Data

5.1. Accessing the Database from Cloud Apps

5.1.1. Overview

In an RHMAP 4.x MBaaS based on OpenShift 3, all components of the MBaaS run within a single OpenShift project, together with a shared MongoDB replica set. Depending on how the MBaaS was installed, the replica set runs either on a single node, or on multiple nodes, and may be backed by persistent storage. The recommended production-grade MongoDB setup for an MBaaS has 3 replicas, each backed by persistent storage.

Each Cloud App deployed to the MBaaS has its own OpenShift project. However, the database of a Cloud App is created in the shared MongoDB instance. Therefore, all management operations on the persistent data of Cloud Apps and the MBaaS, such as backup or replication, can be centralized. At the same time, the data of individual Cloud Apps is isolated in separate databases.

5.1.2. Accessing Data in the MongoDB of the MBaaS

A simple way to store data is to use the $fh.db API, which provides methods for create, read, update, delete, and list operations. See the $fh.db API documentation for more information.
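As a minimal sketch of these operations, the following shows typical create and list calls. The act, type, and fields parameters follow the $fh.db documentation; the in-memory stub is an assumption introduced here purely so the call shape can be followed end to end (the real fh-mbaas-api module is only available inside a deployed Cloud App).

```javascript
// Sketch of $fh.db create/list calls. In a real Cloud App you would use
// the fh-mbaas-api module:  var $fh = require('fh-mbaas-api');
// Here, a minimal in-memory stub stands in for that module; the stub is
// illustrative only and not part of the platform.
var store = {};
var $fh = {
  db: function (options, callback) {
    if (options.act === 'create') {
      var entity = {
        type: options.type,
        guid: (store[options.type] || []).length.toString(),
        fields: JSON.parse(JSON.stringify(options.fields))
      };
      (store[options.type] = store[options.type] || []).push(entity);
      return callback(null, entity);
    }
    if (options.act === 'list') {
      var list = store[options.type] || [];
      return callback(null, { count: list.length, list: list });
    }
    callback(new Error('unsupported act: ' + options.act));
  }
};

// Create a document in the "fruit" collection, then list the collection.
$fh.db({ act: 'create', type: 'fruit', fields: { name: 'plums', price: 2.99 } },
  function (err, created) {
    if (err) throw err;
    $fh.db({ act: 'list', type: 'fruit' }, function (err, data) {
      if (err) throw err;
      console.log('count:', data.count);     // prints "count: 1"
      console.log(data.list[0].fields.name); // prints "plums"
    });
  });
```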

If you need the full capability of a native MongoDB driver, or want to use another library such as Mongoose to access the data, you can use the connectionString action of the $fh.db API to retrieve the connection string of the MongoDB instance:

$fh.db({
  "act": "connectionString"
}, function(err, connectionString) {
  console.log('connectionString=', connectionString);
});
Note

To avoid concurrency issues, we recommend using either the $fh.db API or a direct connection to the database, but not both at the same time.

5.2. Data Browser

5.2.1. Overview

The Data Browser section of the App Studio allows a developer to:

  • Graphically and interactively view the data associated with their app.
  • View, create and delete collections.
  • Modify data in a collection.

5.2.2. Using the data browser

5.2.2.1. Viewing/Adding Collections

The collections associated with an app can be viewed by selecting the Data Browser tab in the Cloud Management section of the Studio.

List Collections for an App

This screen has two controls, located at the top of the collection list.

List Collection Options

These buttons allow you to:

  • Add a collection.
  • Refresh the list of collections.

Clicking on the button to add a collection prompts you to enter the collection name. Click on the Create button to create the collection.

Add New Collection

5.2.2.2. Viewing Data In A Collection

To view the data stored in a collection, click on one of the collections listed in the Data Browser. This view shows the data associated with the collection.

List Data For A Collection

At the top of the screen are the main listing functions.

List Data Options

These buttons allow you to:

  • Switch Collection. Selecting this option presents you with a list of collections for the app. Click on a collection to list the data in that collection.
  • Add an entry to the collection.
  • Import and export data (documented later).
5.2.2.2.1. Sorting Data

To sort the data by a specific field, click on the field name at the top of the list. Sorting alternates between ascending and descending order.

5.2.2.2.2. Filtering Data

To filter the displayed data, click on the "Filter" button at the top of the Data Browser screen. Clicking this button displays the filtering options. These options allow you to filter the displayed data by one or more fields. Filtering supports the following JSON data types:

  • String - allows filtering of text-based fields
  • Number - filters any numerical value
  • Boolean - accepts true and false values

Filter Data

Note

You can filter inside nested objects using the '.' character. For example, using author.name as the filter key filters by the value of the name property of author objects inside a collection of documents with the following structure:

{
   "title":"",
   "author":{
       "name":"John"
   }
}
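To illustrate how a dotted filter key resolves against a nested document, the small helper below (hypothetical, not part of the platform) walks the nested properties one segment at a time:

```javascript
// Resolve a dotted filter key such as "author.name" against a document.
// This mirrors the Data Browser's dot notation for nested fields; the
// helper itself is illustrative only, not platform code.
function resolvePath(doc, path) {
  return path.split('.').reduce(function (value, key) {
    return value == null ? undefined : value[key];
  }, doc);
}

var doc = { title: '', author: { name: 'John' } };
console.log(resolvePath(doc, 'author.name')); // prints "John"
```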

5.2.2.3. Editing Data

Editing data in the Data Browser can be done using either the Inline or the Advanced Editor.

  • The Inline Editor is used to edit simple data in a collection (for example, changing the text in a single field).
  • The Advanced Editor is used to edit more complex data types. This can be done using an interactive Dynamic Editor or a Raw JSON editor.
5.2.2.3.1. Editing Using the Inline Editor

To edit an entry using the Inline Editor, select the Edit option to the right of a data entry and select Edit Inline. The option turns into a green tick and a black arrow icon, as shown in the following picture.

Edit Inline

When a field is too complex to edit in the Inline Editor, the "Advanced Editor Only" text is shown. This field is editable only in the Advanced Editor.

When finished updating the entry, select the green tick button to commit the changes to the data or the black arrow button to cancel any changes made.

5.2.2.3.2. Editing Using the Advanced Editor

The advanced editor is used to edit more complex data types (for example, where a field is composed of multiple nested fields).

To open the advanced editor, select the Edit option to the right of a data entry and select Advanced Editor.

Advanced Editor

The advanced editor has two modes:

  • A Dynamic Editor to interactively add/edit fields.
  • A Raw JSON Editor to directly edit the data in JSON format.
5.2.2.3.2.1. Editing Using the Dynamic Editor

The Dynamic Editor is an interactive editor for JSON data. It presents a structured view of each field to allow adding/editing complex data types.

Dynamic Editor

The actions menu provides all the functionality needed to manage complex fields for the entry.

Dynamic Editor

The options available here are:

  • Type: The type option changes the data type of the field to an array, JSON object or string. It is also possible to set the field to auto, where the data type is automatically selected from the data entered.
  • Sort: The sort option sorts the sub-fields of a complex type in ascending or descending order.
  • Append: The append option adds a field after the object selected.
  • Insert: The insert option inserts a field before the object selected.
  • Duplicate: The duplicate option copies the selected object and appends the copy after it.
  • Remove: The remove option deletes the field from the entry.
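As a rough sketch of what the auto type option implies, a raw input value can be classified as a JSON type along these lines (the Dynamic Editor's exact detection rules are not documented here, so this is only an assumption for illustration):

```javascript
// Illustrative sketch of "auto" type detection: classify a raw input
// string as boolean, number, or string. The Dynamic Editor's actual
// rules may differ; this only demonstrates the idea behind "auto".
function detectType(raw) {
  var s = raw.trim();
  if (s === 'true' || s === 'false') return 'boolean';
  if (s !== '' && !isNaN(Number(s))) return 'number';
  return 'string';
}

console.log(detectType('true'));  // prints "boolean"
console.log(detectType('2.99'));  // prints "number"
console.log(detectType('plums')); // prints "string"
```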
5.2.2.3.2.2. Editing Using the Raw JSON Editor

The Raw Editor allows you to edit the JSON representation of the data directly. Ensure that the data you enter is valid JSON. The JSON data can be displayed in either formatted or compact form.

Raw Editor

5.2.3. Exporting and Importing Data

5.2.3.1. Exporting Data

Note

The Export function built into the Data Browser interface is intended for testing and review purposes only. To export your data collections from a Cloud App or service, use FHC.

Data is exported from the Data Browser by using the 'Export' dropdown menu. Three formats are available:

  • JSON
  • CSV
  • BSON (Mongo Dump)

After clicking on the export button for your chosen format, a .zip file is downloaded. This archive contains your data.

To export all collections contained within your app, use the 'Export' dropdown in the toolbar on the collection listing screen. To export an individual collection’s data, use the 'Export' dropdown from within that collection’s data listing.

Exporting data gives you a good idea of the formats expected for import. The schema for each format is documented in more detail below.

5.2.3.2. Importing Data

Note

The Import function built into the Data Browser interface is intended for testing and review purposes only. To import data collections into a Cloud App or service, use FHC.

You can import data into the data browser by clicking the 'Import' button on the collection listing screen. Supported formats are:

  • JSON
  • CSV
  • BSON (Mongo Dump)
  • ZIP archives containing any of the previous 3 formats

Every file corresponds to a collection to be imported. The name of the file determines the name of the collection your data is imported into.
If a collection does not already exist, it is created. If the collection already exists, imported documents are appended to its existing contents.
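In other words, the collection name is the file's base name without its extension, so a file named fruit.json is imported into a collection named fruit. A sketch of that mapping (the handling of compound extensions such as .bson.gz is an assumption here):

```javascript
// Derive the target collection name from an import file name by
// stripping the format extension, as in fruit.json -> fruit.
// Treating ".bson.gz" as a single compound extension is an assumption.
function collectionName(fileName) {
  return fileName.replace(/\.(json|csv|bson)(\.gz)?$/i, '');
}

console.log(collectionName('fruit.json')); // prints "fruit"
console.log(collectionName('fruit.bson')); // prints "fruit"
```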

5.2.3.2.1. Importing Formats

This section documents the expected formatting of the different import types. In each case, we import a fictional data set of fruit. The collection name once imported will be fruit.
Each example contains every type supported for import: string, number, and boolean. Remember, complex object types are not supported.

5.2.3.2.2. Importing JSON

JSON-formatted imports are simply JSON arrays of documents.

fruit.json:

[
  {
    "name":"plums",
    "price":2.99,
    "quantity":45,
    "onSale":true,
    "_id":"53767254db8fc14837000002"
  },
  {
    "name":"pears",
    "price":2.5,
    "quantity":20,
    "onSale":true,
    "_id":"53767254db8fc14837000003"
  }
]
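Since complex object types are not supported on import, a quick pre-check along these lines can verify that a file holds an array of flat documents before you upload it. This is a hypothetical helper, not part of fhc or the platform:

```javascript
// Check that parsed JSON import data is an array of flat documents
// whose values are strings, numbers, or booleans (the types the Data
// Browser import supports). Hypothetical helper for illustration only.
function isImportable(data) {
  if (!Array.isArray(data)) return false;
  return data.every(function (doc) {
    if (typeof doc !== 'object' || doc === null || Array.isArray(doc)) return false;
    return Object.keys(doc).every(function (key) {
      var t = typeof doc[key];
      return t === 'string' || t === 'number' || t === 'boolean';
    });
  });
}

var fruit = [
  { name: 'plums', price: 2.99, quantity: 45, onSale: true, _id: '53767254db8fc14837000002' }
];
console.log(isImportable(fruit));                          // prints true
console.log(isImportable([{ author: { name: 'John' } }])); // prints false
```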
5.2.3.2.3. Importing CSV

To import CSV, keep in mind the separator, delimiter, and newline settings used by the platform.

Here’s a sample file.

fruit.csv :

name,price,quantity,onSale,_id
"plums",2.99,45,true,53767254db8fc14837000002
"pears",2.5,20,true,53767254db8fc14837000003
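The sample above can be read with a minimal parser that assumes the conventions shown (comma separator, double-quote delimiter, one record per line); the platform's actual settings may differ, so treat this as a sketch:

```javascript
// Minimal CSV reader for the sample format above: comma separator,
// optional double-quote delimiter, first row as header. Illustrative
// only; it does not handle embedded commas or escaped quotes.
function parseCsv(text) {
  var lines = text.trim().split('\n');
  var header = lines[0].split(',');
  return lines.slice(1).map(function (line) {
    var doc = {};
    line.split(',').forEach(function (cell, i) {
      doc[header[i]] = cell.replace(/^"|"$/g, ''); // strip quote delimiters
    });
    return doc;
  });
}

var csv = 'name,price,quantity,onSale,_id\n' +
          '"plums",2.99,45,true,53767254db8fc14837000002';
var rows = parseCsv(csv);
console.log(rows[0].name);  // prints "plums"
console.log(rows[0].price); // prints "2.99" (values stay strings)
```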
5.2.3.2.4. Importing BSON or MongoDump Output

Running the mongodump tool is a convenient way to export the data of an existing MongoDB database. The tool creates a directory called dump, containing a subdirectory for each database, which in turn contains the dumps of the individual collections.
To import these collections into an RHMAP database, take the output .bson files and import them directly. The directory structure and the accompanying metadata .json files are not needed. Since BSON is a binary format, no example is shown here.

You can view the data inside a .bson file using the bsondump tool supplied with any MongoDB installation, for example bsondump fruit.bson:

{ "name" : "plums", "price" : 2.99, "quantity" : 45, "onSale" : true, "_id" :   ObjectId( "53767254db8fc14837000002" ) }
{ "name" : "pears", "price" : 2.5, "quantity" : 20, "onSale" : true, "_id" :  ObjectId( "53767254db8fc14837000003" ) }
2 objects found

5.3. Exporting Application Data

Overview

It is often necessary to export data from the database of a Cloud App for purposes of creating backups, testing, or setting up other hosted databases with existing data. fhc allows you to export all data from a hosted database associated with a Cloud App or service.

For a full reference, run fhc help appdata export to see a list of available commands. Run fhc help appdata export <command> for a reference of a particular command.

Requirements

  • RHMAP version 3.11.0 or later
  • fhc version 2.9.0 or later

5.3.1. Exported Data Format

All collections associated with a Cloud App or Service are exported into a single tar archive, comprising each of the individual collections as a compressed binary JSON (BSON) file. The name of each BSON file matches the name of the collection from which it originates.

Example:

    export.tar
    |__ <COLLECTION_1_NAME>.bson.gz
    |__ <COLLECTION_2_NAME>.bson.gz

The BSON files are compatible with standard MongoDB backup and restore tools. See Back Up and Restore with MongoDB Tools for more information.

5.3.2. Exporting Application Data

The process of exporting application data involves three main steps:

5.3.2.1. Starting a New Export Job

To start a new export job, enter the following command:

fhc appdata export start --appId=<APP_ID> --envId=<ENV_ID> [--stopApp=<y/n>]

  • APP_ID - ID of the Cloud App or service
  • ENV_ID - ID of the deployment environment where the app is running

After you run the command, you are prompted to choose whether to stop the app during the data export.

  • Choosing n leaves the Cloud App running during the export job. If new data is added to any of the collections during the export, it will not be included in the current export job.
  • Choosing y stops the Cloud App once the export job starts. You have to restart the app manually once the export job is finished.

To skip the prompt (for example, when scripting fhc), provide the optional --stopApp flag when running the command.

If another export job is already running for the same app in the same environment, the start command will exit without performing any action.

5.3.2.2. Querying the Status of an Export Job

Once the job starts, the command line tool prints out the following status command with all the relevant fields already filled in.

fhc appdata export status --appId=<APP_ID> --envId=<ENV_ID> --jobId=<JOB_ID> [--interval=<MILLISECONDS>]

To query the status of an export job, copy and paste this command into the shell.

To keep the command running and periodically reporting the job status, include the optional --interval flag and specify the interval at which the status of the job is to be queried.

Once the job is finished, the status command returns the job status as complete.

5.3.2.3. Downloading the Exported Data

To download your exported data once the job is finished, run the following download command:

fhc appdata export download --appId=<APP_ID> --envId=<ENV_ID> --jobId=<JOB_ID> --file=<FILENAME>

  • APP_ID - ID of the Cloud App or Service
  • ENV_ID - ID of the deployment environment where the app is running
  • JOB_ID - ID of the export job and the corresponding export file
  • FILENAME - path to the file in which the exported data is to be stored

    If a file already exists at the specified location, the download command exits without performing any action.

5.4. Importing Application Data

Overview

After exporting application data with fhc appdata export, you can use fhc appdata import to import the data from the file system to a hosted database associated with a Cloud App or service.

For a full reference, run fhc help appdata import to see a list of available commands. Run fhc help appdata import <command> for a reference of a particular command.

Requirements

  • RHMAP version 3.12.0 or later
  • fhc version 2.10.0 or later
  • The format of the file to be imported must be the same as that created by fhc appdata export. See Exported Data Format for more details.

5.4.1. Importing Application Data

The process of importing application data involves two main steps:

5.4.1.1. Starting a New Import Job

To start a new import job, enter the following command:

fhc appdata import start --appId=<APP_ID> --envId=<ENV_ID> --path=<FILE_PATH>

  • APP_ID - ID of the Cloud App or service
  • ENV_ID - ID of the deployment environment where the app is running
  • FILE_PATH - Path to the file to be imported

Upon execution, the command starts uploading the provided file, and keeps running without printing any messages until the upload is finished. Once the upload is finished, the command exits and the import job starts.

If another import job is already running for the same app in the same environment, the start command exits without performing any action.

5.4.1.2. Querying the Status of an Import Job

Once the job starts, the command line tool prints out the following status command with all the relevant fields already filled in.

fhc appdata import status --appId=<APP_ID> --envId=<ENV_ID> --jobId=<JOB_ID> [--interval=<MILLISECONDS>]

To query the status of an import job, copy and paste this command into the shell.

To keep the command running and periodically reporting the job status, include the optional --interval flag and specify the interval at which the status of the job is to be queried.

Once the job is finished, the status command returns the job status as complete.