AWS Server Setup#

ℹ️ Requires an Organization plan

Technical Requirements#

  • An AWS EC2 instance running Amazon Linux AMI (used in this guide)

  • At least EC2 t2.large (2 vCPUs and 8 GiB RAM)

  • Docker Engine version > 23.0.5 (Installation instructions in this guide)

  • Docker Compose version > 2.18.0 (Installation instructions in this guide)

  • 30 GB storage for the Instance

  • 80 GB additional disk storage for data (Instructions in this guide)

  • AWS S3 Bucket for S3 file storage (Instructions in this guide)

  • (Optional) A user identity management service such as Azure AD or LDAP for SSO authentication.

Other Requirements#

Make sure your firewall supports HTTP/2 connections. Anchorpoint uses the gRPC protocol to communicate with the server, which is built on HTTP/2. There is a fallback to an HTTP/1 gateway, but gRPC improves the speed and efficiency of realtime updates to the Anchorpoint client. We highly recommend starting an Anchorpoint cloud trial to evaluate whether gRPC works in your environment.
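A quick way to verify HTTP/2 connectivity through your firewall is a curl probe (a minimal sketch; the domain is a placeholder, replace it with your server's subdomain once it is set up):

# Prints the negotiated HTTP version; expect "2" if HTTP/2 passes your firewall
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://<your_subdomain_here>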

Licensing#

Licensing depends on the number of users you will have, so you need to contact us for a quote. You can also request a free trial license to test the self-hosted environment. We offer volume discounts if you have more than 25 users. If you accept the quote, we will send you a payment link. Once the payment is made, we will send you the license key.

EC2 Setup#

In your AWS Console select the Zone you want to launch the instance in. Navigate to “EC2” / “Instances” and click “Launch Instances”.

In the “Name and tags” section:

  • Choose a name for the instance e.g. AnchorpointVM

In the “Application and OS Images” section:

  • Choose Amazon Linux in QuickStart as OS Image (Amazon Linux 2023 AMI, 64-bit x86)

In the “Instance type” section:

  • Choose Instance type t2.large

In the Key pair (login) section:

  • Create a key pair for SSH login (or select an existing key pair if you already have one for other instances)

  • Use the .ppk file format if you want to use SSH on Windows via PuTTY (you can also use the .pem format and convert it with PuTTYgen afterwards)

In the “Network settings” section:

  • Use the default VPC or create your own

  • Enable “Auto-assign public IP” to get a public IP address that you will use for the A record of your subdomain

  • Check “Allow SSH traffic from anywhere” (or restrict it to your IP address, but note that you have to update the rule whenever your IP address changes)

  • Check “Allow HTTPS traffic from the internet”

In the “Configure storage” section:

  • Choose 30 GiB gp3 as Root volume

  • Click “Add new volume” and choose 80 GiB gp2 as EBS volume

  • (Optional) Open “Advanced” to e.g. change the “Encrypted” setting of each volume

(Optional) In “Advanced Details” section:

  • Enable e.g. “Termination Protection” or “Stop Protection” if you want to prevent the instance from being terminated or stopped

Launch the instance via the “Launch instance” button.
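If you prefer the command line, the same launch can be sketched with the AWS CLI (a hypothetical example; the AMI ID, key pair name, and device names are placeholders you must replace with your own values):

# Launch a t2.large with a 30 GiB gp3 root volume and an 80 GiB gp2 data volume
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t2.large \
  --key-name AnchorpointKey \
  --associate-public-ip-address \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=AnchorpointVM}]' \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":30,"VolumeType":"gp3"}},{"DeviceName":"/dev/sdb","Ebs":{"VolumeSize":80,"VolumeType":"gp2"}}]'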

S3 Bucket Setup#

In your AWS Console go to “S3” and click “Create bucket”

In the “General configuration” section:

  • Select a unique bucket name (needs to be globally unique on AWS)

In the “Block Public Access settings for this bucket” section:

  • Deselect “Block all public access”

  • Select “I acknowledge that the current settings…” (we will create a bucket policy that will restrict the access)

In “Bucket Versioning”

  • Choose whether you want bucket versioning enabled (enabling it stores every version of a file and increases storage usage, but lets you revert to a specific version in special cases)

In “Default encryption”

  • Keep everything at the defaults (which should be “Server-side encryption with Amazon S3…” and Bucket Key “Enabled”)

Click on “Create Bucket”

In the bucket overview click on your created bucket and select the tab “Permissions”.

In the “Permissions” tab:

  • Click on “Edit” under “Bucket policy” and add the following policy. Make sure to replace <your_bucket_name_here> with the name of your bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObjectVersion",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::<your_bucket_name_here>/*/public/*"
        }
    ]
}
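If you have the AWS CLI configured, the same policy can alternatively be applied from the shell (assuming you saved the JSON above as bucket-policy.json):

aws s3api put-bucket-policy --bucket <your_bucket_name_here> --policy file://bucket-policy.json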

Now navigate to the IAM (Identity and Access Management) in your AWS Console.

  • Click on “Create User”

  • Enter a descriptive name for the user (e.g. ap-bucket-access)

  • Click “Next” and “Create”

  • Select the user from the users list

  • Select “Add Permissions” and click on “Create inline policy”

  • Click on “JSON” to switch to the JSON editor and paste in the following content. Make sure to replace <your_bucket_name_here> with the name of your bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::<your_bucket_name_here>",
                "arn:aws:s3:::<your_bucket_name_here>/*"
            ]
        }
    ]
}

  • Click “Next”

  • Set a policy name e.g. ap_bucket_access

  • Click “Create Policy”

  • Click on “Security credentials”

  • Click on “Create access key”

  • Choose “Other” and click “Next”

  • Set a description, e.g. “anchorpoint S3 access key”

  • Click on “Create access key”

  • Copy the “Access key” and the “Secret access key” (optionally you can also download the .csv file)

  • Click on “Done”
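Alternatively, the user, inline policy, and access key can be created with the AWS CLI (a minimal sketch, assuming the inline policy above is saved as policy.json):

aws iam create-user --user-name ap-bucket-access
aws iam put-user-policy --user-name ap-bucket-access --policy-name ap_bucket_access --policy-document file://policy.json
aws iam create-access-key --user-name ap-bucket-access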

EC2 Connection#

Connect to your VM via SSH (on Windows e.g. with PuTTY; you can use PuTTYgen to convert the .pem SSH key to a .ppk key).

EC2 Preparation#

Mounting the attached drive#

Format the attached data disk by first running lsblk to get the name of the drive:

lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"

Use the name of the drive (sdb in this example) for the following commands:

sudo parted /dev/sdb --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mkfs.xfs /dev/sdb1
sudo partprobe /dev/sdb1

Mount the drive by creating a folder for the mount. In this example we mount to /datadrive

sudo mkdir /datadrive
sudo mount /dev/sdb1 /datadrive

Add the mount to /etc/fstab to ensure the drive is remounted on reboot.

First, search for the UUID of the drive:

sudo blkid

Copy the UUID of your drive, e.g. 33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e, and open /etc/fstab with an editor such as vim:

sudo vim /etc/fstab

Add an entry similar to this (press i for insert mode):

UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e   /datadrive   xfs   defaults,nofail   1   2

Press Escape and save by typing :wq.
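You can verify the fstab entry without rebooting (a quick sanity check; unmount first since the drive is still mounted from the manual step):

sudo umount /datadrive   # release the manual mount
sudo mount -a            # mounts everything listed in /etc/fstab and reports errors
findmnt /datadrive       # confirms the data drive is mounted at /datadrive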

Install the stack#

In the following sections we describe the setup process using our CLI tool.

Installing Docker#

You can install Docker with your distribution's package manager. The following commands use apt and apply to Ubuntu systems; see the Amazon Linux 2023 sketch after this block. Recent Docker releases ship Docker Compose v2 as the docker-compose-plugin package.

sudo apt update
sudo apt install curl apt-transport-https ca-certificates software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y
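On Amazon Linux 2023 (the AMI used in this guide), apt is not available. A minimal sketch of the equivalent setup with dnf (note that the AL2023 docker package does not bundle Compose v2, so the CLI plugin is installed manually):

sudo dnf install -y docker
sudo systemctl enable --now docker
# Install the Docker Compose v2 CLI plugin
sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo curl -fsSL https://github.com/docker/compose/releases/latest/download/docker-compose-linux-x86_64 -o /usr/local/lib/docker/cli-plugins/docker-compose
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose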

Add the current user to the Docker group

sudo usermod -aG docker $USER
newgrp docker

Check if the user was added to the Docker group

groups $USER

To start Docker as a service automatically, run:

sudo systemctl enable docker
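Afterwards you can verify that the versions meet the technical requirements and that the daemon works:

docker --version              # should report Docker Engine > 23.0.5
docker compose version        # should report Docker Compose > 2.18.0
docker run --rm hello-world   # quick smoke test of the daemon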

Set up the folder structure#

Create a folder on your additional attached disk storage for the data of the Anchorpoint stack. In our example the disk is mounted on /datadrive

mkdir /datadrive/anchorpoint
cd /datadrive/anchorpoint

CLI Tool#

Download our CLI tool for Linux. Documentation of all its commands can be found here. The commands will be used in the following sections.

curl https://s3.eu-central-1.amazonaws.com/releases.anchorpoint.app/SelfHosted/ap-self-hosted-cli/ap-self-hosted-cli-linux-amd64 -o selfhost-cli

chmod +x selfhost-cli

Set up environment variables before running the CLI install#

You can set up environment variables in your .bash_profile so that you do not have to insert them for every CLI command. We will set the install directory and the license key.

cd ~/
touch .bash_profile
vim ~/.bash_profile

Add the following lines (press i for insert mode in vim):

export AP_INSTALL_DIR=/datadrive/anchorpoint/install
export AP_LICENSE=<your_license_key_here>

Press Escape, save with :wq, and call source:

source ~/.bash_profile

Check AP_INSTALL_DIR and AP_LICENSE with

echo $AP_INSTALL_DIR
echo $AP_LICENSE

Set up a subdomain#

Create a subdomain at your domain provider.

  • Set an A record for your subdomain pointing to the public IP of the EC2 instance

    • You can find the public IP in the instance details of the EC2 console
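Before continuing, you can check that the record resolves to the instance's public IP (replace the placeholder with your subdomain):

dig +short <your_subdomain_here>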

Installing into the data folder#

Create a subfolder in your anchorpoint folder

mkdir /datadrive/anchorpoint/install

Call the selfhost-cli install command:

cd /datadrive/anchorpoint
./selfhost-cli install

  • Domain - set the subdomain you created at your domain provider

  • Enable SSL

  • Use Let’s Encrypt if your server is publicly available (in our default case it is)

    • You can skip the email, because Let’s Encrypt no longer sends certificate renewal emails. We also do not need them, because Traefik auto-renews the SSL certificate

  • Do not use MinIO, since we use the AWS S3 bucket we created

  • Use Postgres

  • Choose if you want to install the metrics stack or not

  • Continue the Installation

Check out the self-host guide for more information about the options.

Configure data paths for the attached disk#

Open the hidden .env file in the install directory

cd /datadrive/anchorpoint/install
vim .env

Change the following lines (using i for insert mode):

...
S3_ACCESS_KEY=<your_generated_access_key_from_bucket_creation>
S3_SECRET_KEY=<your_generated_secret_key_from_bucket_creation>
S3_SERVER_URL=<https://s3.eu-central-1.amazonaws.com is the default; change it if you use a region other than eu-central-1>
S3_INTERNAL_URL=<s3.eu-central-1.amazonaws.com is the default; change it if you use a region other than eu-central-1>
S3_EXTERNAL_URL=<s3.eu-central-1.amazonaws.com is the default; change it if you use a region other than eu-central-1>
S3_BUCKET=<your_bucket_name_here>
S3_REGION=<your_region_of_the_bucket>
...
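You can verify the key pair and bucket name before starting the stack (a hypothetical smoke test using the AWS CLI; adjust the region if needed):

AWS_ACCESS_KEY_ID=<your_access_key> AWS_SECRET_ACCESS_KEY=<your_secret_key> aws s3 ls s3://<your_bucket_name_here> --region eu-central-1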

If you chose to install the metrics stack, make sure to create the Grafana, Prometheus, and Loki data directories before starting and set their permissions, since these containers run with a custom user:

mkdir -p /datadrive/anchorpoint/install/data/grafana /datadrive/anchorpoint/install/data/prometheus /datadrive/anchorpoint/install/data/loki
sudo chmod 777 -Rv /datadrive/anchorpoint/install/data/grafana /datadrive/anchorpoint/install/data/prometheus /datadrive/anchorpoint/install/data/loki

(Optional) How to use your own SSL certificates#

If you selected to use your own SSL certificates in the stack (self-signed certificates are also possible), place them in the /datadrive/anchorpoint/install/data/traefik/certs directory and adjust the /datadrive/anchorpoint/install/data/traefik/dynamic_conf.yaml file in your install directory to reference all your certificates. An example file could look like this:

tls:
  certificates:
    - certFile: /data/traefik/certs/cert1.crt
      keyFile: /data/traefik/certs/cert1.key
    - certFile: /data/traefik/certs/cert2.crt
      keyFile: /data/traefik/certs/cert2.key

Note that the path in the config file is the path to the certificates inside the Docker container, not the path on the host. So keep it as /data/traefik/certs/ in the dynamic_conf.yaml and only adjust the names of the certificates and keys. Also note that if you have intermediate certificates, you have to create one .crt file containing the server certificate first, followed by any intermediate certificates. You can find more information about the dynamic_conf.yaml in the Traefik documentation here.
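For testing, a self-signed certificate and key pair can be generated with OpenSSL (a hypothetical example; replace the placeholder with your domain):

openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes -keyout cert1.key -out cert1.crt -subj "/CN=<your_subdomain_here>"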

(Optional) How to send emails for user mentions#

If you want the Anchorpoint backend to send emails when a user is invited or mentioned in a comment, you have to set up the EMAIL_ environment variables for SMTP. You can also adjust the email templates in the /datadrive/anchorpoint/install/config/ap_backend/templates/email directory.
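The exact variable names are defined in the generated .env file; the snippet below is only an illustrative sketch with hypothetical names, so check your .env for the actual EMAIL_ keys:

# Illustrative placeholders — confirm the real EMAIL_ variable names in your .env
EMAIL_HOST=smtp.example.com
EMAIL_PORT=587
EMAIL_USER=anchorpoint@example.com
EMAIL_PASSWORD=<your_smtp_password_here>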

Troubleshooting start problems#

If the containers do not start, or you cannot reach them via your configured domain or IP address, first check the container logs. You can use docker ps -a to list all containers, including stopped ones. Copy the container ID and use docker logs {container_id} to print the latest log output.
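For example:

docker ps -a                            # list all containers with their IDs and states
docker logs --tail 100 <container_id>   # print the last 100 log lines of one container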

Also check that your DNS records are set up correctly if you are using a custom domain, and that your firewall allows connections on the HTTP/HTTPS ports, the gRPC port, and the MinIO ports (if you use MinIO).
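A quick way to check which ports are actually listening on the host:

sudo ss -tlnp | grep -E ':80|:443'   # adjust the pattern for the gRPC/MinIO ports you use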

(Optional) Set up your SSO provider in Keycloak#

Check out our guide for setting up an SSO provider in Keycloak here.

Set up user accounts#

Check out our guide for managing users in your self-hosted environment here. After you set up the user accounts, log in via the Anchorpoint desktop client as described here.

How to update the stack#

To update the stack, run the CLI tool update command:

./selfhost-cli update

In case an update is available, it will download the latest version of the stack, including the latest Anchorpoint backend and client versions. The update will overwrite files in the installation directory, but will not change anything in your data directories. You can also use the check_update command to check if there is a new version available.

After the update is finished, you can restart the stack by running the CLI start command again. Note that while updating, the Anchorpoint clients will be in offline mode. We generally recommend updating the stack after working hours, when no users are using the application.
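A typical update session therefore looks like this:

./selfhost-cli check_update   # check whether a new stack version is available
./selfhost-cli update         # download the latest stack version
./selfhost-cli start          # restart the stack afterwards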

How to update your license#

Adjust your .bash_profile AP_LICENSE environment variable:

cd ~/
touch .bash_profile
vim ~/.bash_profile

Adjust the following line (press i for insert mode in vim):

export AP_LICENSE=<your_license_key_here>

Press Escape, save with :wq, and call source:

source ~/.bash_profile

Check that AP_LICENSE contains your new license key with

echo $AP_LICENSE

You can update the self-hosting license by using the CLI tool update_license command.

./selfhost-cli update_license

The command will stop and remove the ap_backend container, patch the LICENSE_KEY environment variable in the .env file, and recreate the ap_backend container for you. Note that this also results in a short downtime while the ap_backend container is not running. As with stack updates, we recommend updating the license when no users are currently using the application.

How to stop the stack#

To stop the stack, you can use the CLI tool stop command or run docker compose stop in your installation directory.
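For example:

./selfhost-cli stop
# or, from the installation directory:
cd /datadrive/anchorpoint/install && docker compose stop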

Data that should be backed up regularly#

You should back up the data in your install directory regularly, e.g. by backing up the datadrive disk on a regular schedule.
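On AWS, one option is taking EBS snapshots of the data volume (a hypothetical sketch; replace the volume ID with the ID of your 80 GiB data volume):

aws ec2 create-snapshot --volume-id vol-xxxxxxxxxxxxxxxxx --description "Anchorpoint data drive backup"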