Chapter 9. Backup and Restore

To back up and restore your Ansible Automation Platform deployment, ensure that automatic backups are enabled for EFS and that these backups are accessible for restoration.

When you create an Ansible Automation Platform deployment, automatic Amazon Elastic File System (EFS) backups are enabled by default. However, it is prudent to verify that those backups can be restored before a disaster scenario occurs.

It is also important to take regular manual snapshots of the Amazon Relational Database Service (RDS) database so that a deployment can be restored as close as possible to its previous working state.

9.1. Backing up the Ansible Automation Platform deployment

Use the following procedures to ensure a smooth backup process.

9.1.1. AWS permissions

Before starting the backup process, the AWS account used to run the backup must be granted the following permissions.

The following table lists each service, the required permissions, and, where applicable, the resources to which they apply.

backup

  • backup:DescribeBackupJob
  • backup:ListRecoveryPointsByBackupVault
  • backup:StartBackupJob
 

cloudformation

  • cloudformation:DescribeStacks
  • cloudformation:DescribeStackResources
 

elasticfilesystem

  • elasticfilesystem:DescribeFileSystems
 

iam

  • iam:PassRole

arn:aws:iam::*:role/service-role/AWSBackupDefaultServiceRole

rds

  • rds:CreateDBSnapshot
  • rds:DescribeDBSnapshots
 

secretsmanager

  • secretsmanager:GetSecretValue
 

s3

  • s3:CreateBucket
  • s3:GetObject
  • s3:PutObject
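
As an illustration only (not an official policy from the product documentation), the permissions above could be grouped into an IAM policy document similar to the following sketch. The Sid values are invented for this example, and the wildcard resources should be tightened for your environment:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AocBackupActions",
      "Effect": "Allow",
      "Action": [
        "backup:DescribeBackupJob",
        "backup:ListRecoveryPointsByBackupVault",
        "backup:StartBackupJob",
        "cloudformation:DescribeStacks",
        "cloudformation:DescribeStackResources",
        "elasticfilesystem:DescribeFileSystems",
        "rds:CreateDBSnapshot",
        "rds:DescribeDBSnapshots",
        "secretsmanager:GetSecretValue",
        "s3:CreateBucket",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AocBackupPassRole",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::*:role/service-role/AWSBackupDefaultServiceRole"
    }
  ]
}
```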
 

9.1.2. Setting the ansible-on-clouds-ops container image

The ansible-on-clouds-ops image tag should match the version of your foundation deployment. For example, if your foundation deployment version is 2.4.20230630-00, pull the ansible-on-clouds-ops image with tag 2.4.20230630.
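
As a small illustration (not part of the product tooling), the matching ops image tag can be derived from a foundation version string by stripping the trailing build suffix:

```shell
# Derive the ops image tag from a foundation deployment version by
# removing the trailing "-NN" build suffix (illustration only).
FOUNDATION_VERSION="2.4.20230630-00"   # example version from this chapter
OPS_TAG="${FOUNDATION_VERSION%-*}"     # strip everything after the last "-"
echo "${OPS_TAG}"                      # prints 2.4.20230630
```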

Procedure

  1. Pull the ansible-on-clouds-ops container image with the same tag version as the foundation deployment.

    Note

    Before pulling the docker image, ensure that you are logged in to registry.redhat.io using docker. Use the following command to log in to registry.redhat.io.

    $ docker login registry.redhat.io

    For more information about registry login, see Registry Authentication.

    $ export IMAGE=registry.redhat.io/ansible-on-clouds/ansible-on-clouds-ops-rhel9:2.4.20230630
    $ docker pull $IMAGE --platform=linux/amd64

9.1.3. Generating the backup data file

The following commands create a directory, and populate it with an empty data template that, when completed, will be used during the backup.

Procedure

  1. Create a folder to hold the configuration.

    $ mkdir command_generator_data
  2. Populate the command_generator_data folder with the configuration file template. This creates the backup.yml file within the command_generator_data directory.

    Note

    On Linux, any file or directory created by the command generator is owned by root:root by default. To change the ownership of the files and directories, run the sudo chown command after the files are created. For more information, read Command generator - Linux files owned by root.

    $ docker run --rm -v $(pwd)/command_generator_data:/data $IMAGE \
      command_generator_vars aws_backup_stack \
      --output-data-file /data/backup.yml
  3. After running these commands, a $(pwd)/command_generator_data/backup.yml template file is created. This template file resembles the following:

    aws_backup_stack:
      ansible_config_path:
      cloud_credentials_path:
      deployment_name:
      extra_vars:
        aws_backup_iam_role_arn:
        aws_backup_vault_name: Default
        aws_region:
        aws_s3_bucket:
        aws_ssm_bucket_name:
        backup_prefix: aoc-backup

9.1.4. Updating the backup data file

You must populate the data file before triggering the backup. The following variables are parameters listed in the data file.

  • ansible_config_path (Optional) Only use if overriding with a custom ansible_config.
  • cloud_credentials_path is the path to your AWS credentials file.
  • deployment_name is the name of the foundation deployment. This is the same name you used when you deployed the foundation.
  • aws_backup_iam_role_arn is the Amazon Resource Name (ARN) of the AWS IAM Role that has permissions to perform backup operations.

    Note

    You can use the AWS Backup Default Service Role for this which has the format arn:aws:iam::<Your AWS Account Number>:role/service-role/AWSBackupDefaultServiceRole.

  • aws_backup_vault_name is the name of the backup vault that will hold the EFS recovery points.

    Note

    Ensure that the referenced backup vault exists and that the AWS Backup IAM role provided above has the required permissions to create an EFS recovery point inside the vault.

  • aws_region is the region where the foundation deployment is deployed.
  • aws_s3_bucket is the name of the S3 bucket where the backup files are stored. The playbook creates a new bucket if one does not already exist. Every backup is stored in the bucket.
    For guidance on bucket naming, see Bucket naming rules.
    For guidance on creating S3 buckets, see AWS Creating a bucket.
  • aws_ssm_bucket_name is the name of the S3 bucket where temporary configuration files for the AWS SSM are stored. You can use an existing bucket or create a new one.

    Note

    The aws_ssm_bucket_name parameter is ONLY used to store temporary config files. It does NOT need to be saved for use in other playbooks. Any valid existing bucket can be used. For more information on creating S3 buckets, read AWS Creating A Bucket in the AWS documentation.

  • backup_prefix is a prefix you would like to add to the backup name (default: aoc-backup).

When you have populated the data file, it should resemble the following. The values in this file are provided as examples:

Note

The optional value ansible_config_path has been removed. If you do not want to use this optional value and want to use its default, you must also remove it from your data file, as in the example below. If you want to use this optional variable, it must be included in the data file and assigned a value.

aws_backup_stack:
  cloud_credentials_path: ~/.aws/credentials
  deployment_name: AnsibleAutomationPlatform
  extra_vars:
    aws_backup_iam_role_arn: arn:aws:iam::<Your AWS Account Number>:role/service-role/AWSBackupDefaultServiceRole
    aws_backup_vault_name: Default
    aws_region: us-east-1
    aws_s3_bucket: ansible-automation-platform-bucket
    aws_ssm_bucket_name: aap-ssm-bucket
    backup_prefix: aoc-backup
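
Before running the backup, a quick sanity check can flag keys that were accidentally left empty in the data file. The following shell snippet is an illustration only, not part of the product tooling; it writes a small sample file for demonstration:

```shell
# Write a small sample data file, then flag any indented key that has no
# value (a trailing colon). With a fully populated file, nothing matches.
cat > /tmp/backup.yml <<'EOF'
aws_backup_stack:
  cloud_credentials_path: ~/.aws/credentials
  deployment_name: AnsibleAutomationPlatform
EOF
grep -En '^[[:space:]]+[a-z_]+:[[:space:]]*$' /tmp/backup.yml \
  || echo "no empty values found"
```

Note that nested mapping keys such as extra_vars: also end in a colon and would match; treat matches as items to review rather than hard errors.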

9.1.5. Running the backup playbook

The following procedure runs the backup playbook as a container.

Procedure

  1. To run the backup, run the command generator.

    $ docker run --rm -v $(pwd)/command_generator_data:/data $IMAGE command_generator aws_backup_stack --data-file /data/backup.yml

    This generates the backup CLI command:

    ----------------------------------------------
    docker run --rm --env PLATFORM=AWS -v ~/.aws/credentials:/home/runner/.aws/credentials:ro \
    --env ANSIBLE_CONFIG=../aws-ansible.cfg --env DEPLOYMENT_NAME=AnsibleAutomationPlatform --env GENERATE_INVENTORY=true  \
    $IMAGE redhat.ansible_on_clouds.aws_backup_stack \
    -e 'aws_foundation_stack_name=AnsibleAutomationPlatform aws_region=us-east-1 aws_backup_vault_name=Default \
    aws_backup_iam_role_arn=arn:aws:iam::<Your AWS Account Number>:role/service-role/AWSBackupDefaultServiceRole \
    aws_s3_bucket=ansible-automation-platform-bucket aws_ssm_bucket_name=aap-ssm-bucket backup_prefix=aoc-backup'
    ===============================================
  2. Run the generated command to trigger the backup.

    $ docker run --rm --env PLATFORM=AWS -v ~/.aws/credentials:/home/runner/.aws/credentials:ro \
    --env ANSIBLE_CONFIG=../aws-ansible.cfg --env DEPLOYMENT_NAME=AnsibleAutomationPlatform --env GENERATE_INVENTORY=true  \
    $IMAGE redhat.ansible_on_clouds.aws_backup_stack \
    -e 'aws_foundation_stack_name=AnsibleAutomationPlatform aws_region=us-east-1 aws_backup_vault_name=Default \
    aws_backup_iam_role_arn=arn:aws:iam::<Your AWS Account Number>:role/service-role/AWSBackupDefaultServiceRole \
    aws_s3_bucket=ansible-automation-platform-bucket aws_ssm_bucket_name=aap-ssm-bucket backup_prefix=aoc-backup'
  3. The backup can take several minutes to complete, depending on the database size. A successful backup returns a log similar to the following:

    {
        "msg": [
            "Successfully backed up AnsibleAutomationPlatform!",
            "Please note below the bucket name, region and backup name which are required for restore process.",
            "aws_s3_bucket: ansible-automation-platform-bucket ",
            "aws_region: us-east-1",
            "aws_backup_name: ansible-automation-platform-bucket-20230706T163309",
            "Your backup files can be found at:",
            "https://s3.console.aws.amazon.com/s3/buckets/ansible-automation-platform-bucket?region=us-east-1&prefix=aoc-backup-AnsibleAutomationPlatform-20230706T163309/&showversions=false"
        ]
    }
  4. Your Ansible Automation Platform from AWS Marketplace deployment is now successfully backed up. As the log shows, the playbook successfully creates a backup folder in the S3 bucket specified above.
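
Inferred from the sample log above (an assumption, not documented behavior), the backup name and the S3 folder name appear to be composed as follows:

```shell
# Illustration of the naming seen in the sample log above: the backup name
# joins the S3 bucket with a UTC timestamp, and the S3 folder joins the
# backup prefix, the deployment name, and the same timestamp.
TS="20230706T163309"                               # timestamp from the sample log
AWS_S3_BUCKET="ansible-automation-platform-bucket"
DEPLOYMENT_NAME="AnsibleAutomationPlatform"
BACKUP_PREFIX="aoc-backup"
echo "aws_backup_name: ${AWS_S3_BUCKET}-${TS}"
echo "s3 folder: ${BACKUP_PREFIX}-${DEPLOYMENT_NAME}-${TS}/"
```

Record the aws_backup_name value from your own log; the restore data file requires it.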

9.1.6. Deleting backups

There are two playbooks to delete backups:

  • Use the aws_backup_delete playbook which deletes a single backup.
  • Use the aws_backups_delete playbook which deletes multiple backups at once.

aws_backups_delete takes an array of strings, for example ["backup1","backup2",…​], while aws_backup_delete takes a single string, the name of a specific backup, for example "backup1".

The use of aws_backups_delete is described in this section.

Procedure

  1. Populate the command_generator_data directory with the configuration file template.

    Note

    On Linux, any file or directory created by the command generator is owned by root:root by default. To change the ownership of the files and directories, run the sudo chown command after the files are created. For more information, read Command generator - Linux files owned by root.

    docker run --rm -v $(pwd)/command_generator_data/:/data $IMAGE command_generator_vars aws_backups_delete --output-data-file /data/backups_delete.yml

    This produces the following output:

    ===============================================
    Playbook: aws_backups_delete
    Description: This playbook delete a specified backup.
    -----------------------------------------------
    This playbook delete a specified backup
    
    -----------------------------------------------
    Command generator template:
    
    docker run --rm -v <local_data_file_directory>:/data $IMAGE command_generator aws_backups_delete --data-file /data/backups_delete.yml
  2. After running the command, a $(pwd)/command_generator_data/backups_delete.yml template file is created. This template file resembles the following:

    aws_backups_delete:
      cloud_credentials_path:
      extra_vars:
        aws_backup_names:
        aws_region:
        aws_s3_bucket:
        delete:

The aws_backup_names parameter must specify an array of strings, for example, ["backup1","backup2"]. The delete parameter must be set to true to successfully delete.
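
When populated, the data file might resemble the following; all values here are examples:

```yaml
aws_backups_delete:
  cloud_credentials_path: ~/.aws/credentials
  extra_vars:
    aws_backup_names: ["backup1", "backup2"]
    aws_region: us-east-1
    aws_s3_bucket: ansible-automation-platform-bucket
    delete: true
```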

  3. To delete the backups, run the command generator to generate the aws_backups_delete command.

    docker run --rm -v $(pwd)/command_generator_data:/data $IMAGE command_generator aws_backups_delete --data-file /data/backups_delete.yml

    This results in the following output:

    Command to run playbook:
    
    docker run --rm --env PLATFORM=AWS -v ~/.aws/credentials:/home/runner/.aws/credentials:ro \
    --env ANSIBLE_CONFIG=../aws-ansible.cfg  $IMAGE redhat.ansible_on_clouds.aws_backups_delete \
    -e 'aws_region=<region> aws_s3_bucket=<bucket> aws_backup_names=["backup1","backup2"] delete=True'
    ===============================================
  4. Run the supplied command to delete the backups.
  5. When the playbook has finished running, the output resembles the following:

    TASK [redhat.ansible_on_clouds.standalone_aws_backup_delete : [delete_backup] Dry-run message] ***
    skipping: [localhost]
    
    PLAY RECAP *********************************************************************
    localhost                  : ok=21   changed=2    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

9.1.6.1. Failing to delete a backup

If the deletion of a backup fails, complete the following steps.

Procedure

  1. Navigate to the bucket containing the backup.
  2. Locate the directory that has the name of the backup.
  3. Open the backup directory.
  4. Open the file named restore-vars.json and note the last part of the Recovery_Point_ARN and the Backup_Vault_Name.
  5. Navigate to AWS Backup.
  6. Select Backup Vault.
  7. Select the vault with the name you noted from Backup_Vault_Name.
  8. Search for the Recovery_Point_ARN.
  9. Delete the recovery point if it is not already deleted.
  10. Navigate to the bucket containing the backup.
  11. Delete the directory with the name of the backup.

9.2. Restoring the Ansible Automation Platform deployment

Use the following procedures to ensure a smooth restore process.

Note

A restored deployment contains the same VPC networking setup as the original backed up deployment. If a backed up deployment was deployed within an existing VPC, its restored deployment is also deployed into that VPC. If a backed up deployment was deployed with a new VPC, its restored deployment is also deployed with a new VPC.

9.2.1. AWS permissions

The AWS account performing the restore must be granted the following permissions.

The following table lists the necessary IAM permissions by service. For each service, the first list applies to a deployment created with a new VPC, and the second list applies to a deployment created within an existing VPC.

autoscaling

  • autoscaling:CreateAutoScalingGroup
  • autoscaling:CreateLaunchConfiguration
  • autoscaling:DeleteAutoScalingGroup
  • autoscaling:DeleteLaunchConfiguration
  • autoscaling:DescribeAutoScalingGroups
  • autoscaling:DescribeAutoScalingInstances
  • autoscaling:DescribeLaunchConfigurations
  • autoscaling:DescribeScalingActivities
  • autoscaling:UpdateAutoScalingGroup

autoscaling

  • autoscaling:CreateAutoScalingGroup
  • autoscaling:CreateLaunchConfiguration
  • autoscaling:DeleteAutoScalingGroup
  • autoscaling:DeleteLaunchConfiguration
  • autoscaling:DescribeAutoScalingGroups
  • autoscaling:DescribeAutoScalingInstances
  • autoscaling:DescribeLaunchConfigurations
  • autoscaling:DescribeScalingActivities
  • autoscaling:UpdateAutoScalingGroup

backup

  • backup:DescribeRestoreJob
  • backup:StartRestoreJob

backup

  • backup:DescribeRestoreJob
  • backup:StartRestoreJob

cloudformation

  • cloudformation:CreateChangeSet
  • cloudformation:CreateStack
  • cloudformation:CreateUploadBucket
  • cloudformation:DeleteStack
  • cloudformation:DescribeChangeSet
  • cloudformation:DescribeStackEvents
  • cloudformation:DescribeStacks
  • cloudformation:ExecuteChangeSet
  • cloudformation:GetStackPolicy
  • cloudformation:GetTemplateSummary
  • cloudformation:ListChangeSets
  • cloudformation:ListStackResources
  • cloudformation:ListStacks
  • cloudformation:TagResource
  • cloudformation:UpdateStack
  • cloudformation:ValidateTemplate

cloudformation

  • cloudformation:CreateChangeSet
  • cloudformation:CreateStack
  • cloudformation:CreateUploadBucket
  • cloudformation:DeleteStack
  • cloudformation:DescribeChangeSet
  • cloudformation:DescribeStackEvents
  • cloudformation:DescribeStacks
  • cloudformation:ExecuteChangeSet
  • cloudformation:GetStackPolicy
  • cloudformation:GetTemplateSummary
  • cloudformation:ListChangeSets
  • cloudformation:ListStackResources
  • cloudformation:ListStacks
  • cloudformation:TagResource
  • cloudformation:UpdateStack
  • cloudformation:ValidateTemplate

ec2

  • ec2:AllocateAddress
  • ec2:AssociateAddress
  • ec2:AssociateNatGatewayAddress
  • ec2:AssociateRouteTable
  • ec2:AssociateSubnetCidrBlock
  • ec2:AttachInternetGateway
  • ec2:AuthorizeSecurityGroupEgress
  • ec2:AuthorizeSecurityGroupIngress
  • ec2:CreateInternetGateway
  • ec2:CreateNatGateway
  • ec2:CreateRoute
  • ec2:CreateRouteTable
  • ec2:CreateSecurityGroup
  • ec2:CreateSubnet
  • ec2:CreateSubnetCidrReservation
  • ec2:CreateTags
  • ec2:CreateVpc
  • ec2:DeleteInternetGateway
  • ec2:DeleteNatGateway
  • ec2:DeleteRoute
  • ec2:DeleteRouteTable
  • ec2:DeleteSecurityGroup
  • ec2:DeleteSubnet
  • ec2:DeleteSubnetCidrReservation
  • ec2:DeleteVpc
  • ec2:DescribeAccountAttributes
  • ec2:DescribeAddresses
  • ec2:DescribeAddressesAttribute
  • ec2:DescribeAvailabilityZones
  • ec2:DescribeInstances
  • ec2:DescribeInternetGateways
  • ec2:DescribeKeyPairs
  • ec2:DescribeNatGateways
  • ec2:DescribeRouteTables
  • ec2:DescribeSecurityGroups
  • ec2:DescribeSubnets
  • ec2:DescribeVpcs
  • ec2:DetachInternetGateway
  • ec2:DisassociateAddress
  • ec2:DisassociateNatGatewayAddress
  • ec2:DisassociateRouteTable
  • ec2:DisassociateSubnetCidrBlock
  • ec2:GetSubnetCidrReservations
  • ec2:ModifyAddressAttribute
  • ec2:ModifyVpcAttribute
  • ec2:ReleaseAddress
  • ec2:RevokeSecurityGroupEgress
  • ec2:RevokeSecurityGroupIngress

ec2

  • ec2:RevokeSecurityGroupEgress
  • ec2:RevokeSecurityGroupIngress
  • ec2:DescribeKeyPairs
  • ec2:CreateSecurityGroup
  • ec2:DescribeSecurityGroups
  • ec2:DeleteSecurityGroup
  • ec2:CreateTags
  • ec2:AuthorizeSecurityGroupEgress
  • ec2:AuthorizeSecurityGroupIngress
  • ec2:DescribeInstances

elasticfilesystem

  • elasticfilesystem:CreateAccessPoint
  • elasticfilesystem:CreateFileSystem
  • elasticfilesystem:CreateMountTarget
  • elasticfilesystem:DeleteAccessPoint
  • elasticfilesystem:DeleteFileSystem
  • elasticfilesystem:DeleteMountTarget
  • elasticfilesystem:DescribeAccessPoints
  • elasticfilesystem:DescribeBackupPolicy
  • elasticfilesystem:DescribeFileSystemPolicy
  • elasticfilesystem:DescribeFileSystems
  • elasticfilesystem:DescribeLifecycleConfiguration
  • elasticfilesystem:DescribeMountTargets

elasticfilesystem

  • elasticfilesystem:CreateAccessPoint
  • elasticfilesystem:CreateFileSystem
  • elasticfilesystem:CreateMountTarget
  • elasticfilesystem:DeleteAccessPoint
  • elasticfilesystem:DeleteFileSystem
  • elasticfilesystem:DeleteMountTarget
  • elasticfilesystem:DescribeAccessPoints
  • elasticfilesystem:DescribeBackupPolicy
  • elasticfilesystem:DescribeFileSystemPolicy
  • elasticfilesystem:DescribeFileSystems
  • elasticfilesystem:DescribeLifecycleConfiguration
  • elasticfilesystem:DescribeMountTargets

elasticloadbalancing

  • elasticloadbalancing:AddTags
  • elasticloadbalancing:CreateListener
  • elasticloadbalancing:CreateLoadBalancer
  • elasticloadbalancing:CreateTargetGroup
  • elasticloadbalancing:DeleteListener
  • elasticloadbalancing:DeleteTargetGroup
  • elasticloadbalancing:DescribeListeners
  • elasticloadbalancing:DescribeTargetGroups
  • elasticloadbalancing:ModifyLoadBalancerAttributes
  • elasticloadbalancing:ModifyTargetGroupAttributes

elasticloadbalancing

  • elasticloadbalancing:AddTags
  • elasticloadbalancing:CreateListener
  • elasticloadbalancing:CreateLoadBalancer
  • elasticloadbalancing:CreateTargetGroup
  • elasticloadbalancing:DeleteListener
  • elasticloadbalancing:DeleteTargetGroup
  • elasticloadbalancing:DescribeListeners
  • elasticloadbalancing:DescribeTargetGroups
  • elasticloadbalancing:ModifyLoadBalancerAttributes
  • elasticloadbalancing:ModifyTargetGroupAttributes

iam

  • iam:AddRoleToInstanceProfile
  • iam:AttachRolePolicy
  • iam:CreateInstanceProfile
  • iam:CreateRole
  • iam:DeleteInstanceProfile
  • iam:DeleteRole
  • iam:DeleteRolePolicy
  • iam:DetachRolePolicy
  • iam:GetRolePolicy
  • iam:ListRoles
  • iam:PassRole
  • iam:PutRolePolicy
  • iam:RemoveRoleFromInstanceProfile
  • iam:TagRole

iam

  • iam:AddRoleToInstanceProfile
  • iam:AttachRolePolicy
  • iam:CreateInstanceProfile
  • iam:CreateRole
  • iam:DeleteInstanceProfile
  • iam:DeleteRole
  • iam:DeleteRolePolicy
  • iam:DetachRolePolicy
  • iam:GetRolePolicy
  • iam:ListRoles
  • iam:PassRole
  • iam:PutRolePolicy
  • iam:RemoveRoleFromInstanceProfile
  • iam:TagRole

kms

  • kms:CreateGrant
  • kms:Decrypt
  • kms:DescribeKey
  • kms:GenerateDataKey

kms

  • kms:CreateGrant
  • kms:Decrypt
  • kms:DescribeKey
  • kms:GenerateDataKey

rds

  • rds:AddTagsToResource
  • rds:CreateDBInstance
  • rds:CreateDBSubnetGroup
  • rds:DeleteDBInstance
  • rds:DeleteDBSubnetGroup
  • rds:DescribeDBInstances
  • rds:DescribeDBSnapshots
  • rds:DescribeDBSubnetGroups
  • rds:ModifyDBInstance
  • rds:RestoreDBInstanceFromDBSnapshot

rds

  • rds:AddTagsToResource
  • rds:CreateDBInstance
  • rds:CreateDBSubnetGroup
  • rds:DeleteDBInstance
  • rds:DeleteDBSubnetGroup
  • rds:DescribeDBInstances
  • rds:DescribeDBSnapshots
  • rds:DescribeDBSubnetGroups
  • rds:ModifyDBInstance
  • rds:RestoreDBInstanceFromDBSnapshot

s3

  • s3:CreateBucket
  • s3:GetObject
  • s3:PutObject

s3

  • s3:CreateBucket
  • s3:GetObject
  • s3:PutObject

secretsmanager

  • secretsmanager:CreateSecret
  • secretsmanager:DeleteSecret
  • secretsmanager:GetRandomPassword
  • secretsmanager:GetSecretValue
  • secretsmanager:PutSecretValue
  • secretsmanager:TagResource

secretsmanager

  • secretsmanager:CreateSecret
  • secretsmanager:DeleteSecret
  • secretsmanager:GetRandomPassword
  • secretsmanager:GetSecretValue
  • secretsmanager:PutSecretValue
  • secretsmanager:TagResource

sns

  • sns:ListTopics

sns

  • sns:ListTopics

9.2.2. Setting the ansible-on-clouds-ops container image

The ansible-on-clouds-ops image tag should match the version of your foundation deployment. For example, if your foundation deployment version is 2.4.20230630-00, pull the ansible-on-clouds-ops image with tag 2.4.20230630.

Procedure

  1. Pull the ansible-on-clouds-ops container image with the same tag version as the foundation deployment.

    Note

    Before pulling the docker image, ensure that you are logged in to registry.redhat.io using docker. Use the following command to log in to registry.redhat.io.

    $ docker login registry.redhat.io

    For more information about registry login, see Registry Authentication.

    $ export IMAGE=registry.redhat.io/ansible-on-clouds/ansible-on-clouds-ops-rhel9:2.4.20230630
    $ docker pull $IMAGE --platform=linux/amd64

9.2.3. Generating the restore data file

The following commands create a directory, and populate it with an empty data template that, when completed, is used during the restore.

Procedure

  1. Create a folder to hold the configuration.

    $ mkdir command_generator_data
  2. Populate the $(pwd)/command_generator_data folder with the configuration file template.

    Note

    On Linux, any file or directory created by the command generator is owned by root:root by default. To change the ownership of the files and directories, run the sudo chown command after the files are created. For more information, read Command generator - Linux files owned by root.

    $ docker run --rm -v $(pwd)/command_generator_data:/data $IMAGE \
      command_generator_vars aws_restore_stack \
      --output-data-file /data/restore.yml
  3. After running these commands, a $(pwd)/command_generator_data/restore.yml template file is created. This template file resembles the following:

    aws_restore_stack:
      ansible_config_path:
      cloud_credentials_path:
      deployment_name:
      extra_vars:
        aws_backup_iam_role_arn:
        aws_backup_name:
        aws_backup_restore_point_arn:
        aws_backup_vault_name:
        aws_rds_db_snapshot_arn:
        aws_region:
        aws_s3_bucket:
        aws_ssm_bucket_name:

9.2.4. Updating the restore data file

You must populate the data file before triggering the restore. The following variables are parameters listed in the data file.

  • ansible_config_path (Optional) Only use if overriding with a custom ansible_config.
  • cloud_credentials_path is the path to your AWS credentials file.
  • deployment_name is the name you want for your restored deployment.
  • aws_backup_iam_role_arn (Optional) is the Amazon Resource Name (ARN) of the AWS IAM Role that has permissions to perform backup operations.

    Note

    If provided, the playbook defaults to this value over the value of the IAM role referenced in the backup file on S3. For further information, see Backing up the Ansible Automation Platform deployment.

  • aws_backup_name is the name of the backup folder stored on S3. This value was output after running the backup playbook.
  • aws_backup_restore_point_arn (Optional) is the ARN of the EFS recovery point you want to use for the restore.

    Note

    If provided, the playbook defaults to this value over the EFS restore point referenced in the backup file on S3. See AWS permissions.

    You must also ensure that the provided restore point is from a version of Ansible on Clouds (AoC) that matches the version of the ansible-on-clouds-ops container image running the restore. For example, a restore point taken from a deployment on AoC version 2.4.20230630-00 needs to use the ansible-on-clouds-ops container image version 2.4.20230630.

  • aws_backup_vault_name (Optional) is the name of the backup vault that holds the EFS recovery points.

    Note

    If provided, the playbook defaults to this value over the backup vault referenced in the backup file on S3 which was generated when running the backup operation. See AWS permissions. You must also ensure that this backup vault matches the one used to backup the deployment.

  • aws_rds_db_snapshot_arn (Optional) is the ARN of the RDS snapshot you want to use for restore.

    Note

    If provided, the playbook defaults to this value over the RDS snapshot referenced in the backup file on S3. You must also ensure that the provided RDS snapshot is from a version of Ansible on Clouds that matches the version of the ansible-on-clouds-ops container image running the restore. For example, an RDS snapshot taken from a deployment on AoC version 2.4.20230630-00 needs to use the ansible-on-clouds-ops container image version 2.4.20230630.

  • aws_region is the region where the foundation deployment is deployed.
  • aws_s3_bucket is the name of the S3 bucket where the backup files are stored. This must be the same bucket used for backup.
  • aws_ssm_bucket_name is the name of the S3 bucket where temporary configuration files for the AWS SSM are stored. You can use an existing bucket or create a new one.

    Note

    The aws_ssm_bucket_name parameter is ONLY used to store temporary config files. It does NOT need to be saved for use in other playbooks. Any valid existing bucket can be used. For more information on creating S3 buckets, read AWS Creating A Bucket in the AWS documentation.

    When you have populated the data file, it should resemble the following. The values in this file are provided as examples.

    Note

    The optional values in this data file example have been removed. If you do not want to use these optional values and want to use their defaults, you must also remove them from your data file, as in the example below. If you want to use these optional variables, they must be included in the data file and assigned a value.

    aws_restore_stack:
      cloud_credentials_path: ~/.aws/credentials
      deployment_name: AnsibleAutomationPlatform
      extra_vars:
        aws_backup_name: ansible-automation-platform-bucket-20230706T163309
        aws_region: us-east-1
        aws_s3_bucket: ansible-automation-platform-bucket
        aws_ssm_bucket_name: aap-ssm-bucket
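
The version-matching requirement described in the notes above, between the Ansible on Clouds version of a recovery point or RDS snapshot and the ops image tag, can be sketched as a simple check. This is an illustration only; the version values are examples:

```shell
# Compare the AoC version of the backup source against the ops image tag:
# the tag must equal the version with its "-NN" build suffix removed.
BACKUP_AOC_VERSION="2.4.20230630-00"   # example AoC version of the backup source
OPS_IMAGE_TAG="2.4.20230630"           # example ansible-on-clouds-ops image tag
if [ "${BACKUP_AOC_VERSION%-*}" = "${OPS_IMAGE_TAG}" ]; then
  echo "versions match"
else
  echo "version mismatch"
fi
```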

9.2.5. Running the restore playbook

The following procedure runs the restore playbook as a container.

Procedure

  1. To run the restore, run the command generator.

    $ docker run --rm -v $(pwd)/command_generator_data:/data $IMAGE command_generator aws_restore_stack --data-file /data/restore.yml

    This generates the restore CLI command:

    ----------------------------------------------
    docker run --rm --env PLATFORM=AWS -v ~/.aws/credentials:/home/runner/.aws/credentials:ro --env ANSIBLE_CONFIG=../aws-ansible.cfg \
    --env DEPLOYMENT_NAME=AnsibleAutomationPlatform --env GENERATE_INVENTORY=true --env CHECK_GENERATED_INVENTORY=false  \
    $IMAGE redhat.ansible_on_clouds.aws_restore_stack -e 'aws_foundation_stack_name=AnsibleAutomationPlatform \
    aws_backup_name=ansible-automation-platform-bucket-20230706T163309 aws_region=us-east-1 \
    aws_s3_bucket=ansible-automation-platform-bucket aws_ssm_bucket_name=aap-ssm-bucket'
    ===============================================
  2. Run the generated command to trigger the restore.

    $ docker run --rm --env PLATFORM=AWS -v ~/.aws/credentials:/home/runner/.aws/credentials:ro --env ANSIBLE_CONFIG=../aws-ansible.cfg \
    --env DEPLOYMENT_NAME=AnsibleAutomationPlatform --env GENERATE_INVENTORY=true --env CHECK_GENERATED_INVENTORY=false  \
    $IMAGE redhat.ansible_on_clouds.aws_restore_stack -e 'aws_foundation_stack_name=AnsibleAutomationPlatform \
    aws_backup_name=ansible-automation-platform-bucket-20230706T163309 aws_region=us-east-1 \
    aws_s3_bucket=ansible-automation-platform-bucket aws_ssm_bucket_name=aap-ssm-bucket'
  3. The playbook can take some time to create a restored deployment. When you have run the playbook successfully, a restored deployment is available in AWS CloudFormation matching the name provided for it in the restore.yml file.