Map Hosting

MapTiler Cloud

The MapTiler Upload utility (introduced in MapTiler Engine 13.2) lets you easily upload tilesets in GeoPackage or MBTiles format directly to MapTiler Cloud:

  maptiler-upload upload filename.gpkg

This command uploads the tileset to the MapTiler Cloud account associated with your MapTiler Engine license.

If you need to upload the data to another MapTiler Cloud account, use the --token parameter:

  maptiler-upload --token <your_auth_token> upload filename.gpkg

The --token value is the service token of the MapTiler Cloud account to which you want to upload the tileset.

MapTiler Server

To host MBTiles or GeoPackage from your own infrastructure, we recommend using MapTiler Server. Upload your maps with a few clicks and display them in your application using MapTiler SDK, MapLibre GL JS, Leaflet, OpenLayers, or CesiumJS. Go to MapTiler Server documentation to learn more.

Standard web server

It is also possible to host your maps on standard web hosting, such as an ordinary company web server: upload the directory with tiles to your web hosting and the layer is automatically available. The maps can be opened in any viewer that supports the OGC WMTS standard.
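This works because the rendered tile directory follows the z/x/y scheme, so each tile maps directly onto a URL once the directory is uploaded. The sketch below uses made-up tile names and a hypothetical example.com host purely for illustration:

```shell
# Sketch of a rendered tile directory (illustrative names):
# the z/x/y.png layout is what makes plain web hosting work.
mkdir -p mymap/0/0 mymap/1/0 mymap/1/1
touch mymap/0/0/0.png mymap/1/0/0.png mymap/1/1/1.png

# Once mymap/ is uploaded to https://example.com/,
# each tile is served at https://example.com/mymap/z/x/y.png:
find mymap -name '*.png' | sort
```

Viewers that speak WMTS or plain XYZ tiles can then request tiles from those URLs directly.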

Cloud hosting – CloudPush

CloudPush enables you to upload your map tiles to Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage. Please note that this feature is currently not supported on Windows.

For detailed instructions, see the Amazon S3 cloud upload guide, which explains how to prepare and upload an MBTiles or GeoPackage file using the MapTiler Engine GUI.

The following examples apply to Amazon S3 storage. To use Google Cloud Storage or Microsoft Azure Blob Storage, replace s3 in the commands with gs or az, respectively.

Upload tiles from an MBTiles file to S3:

  maptiler-cloudpush --access_key ACCESS_KEY --secret_key SECRET_KEY s3://bucket_name add filename.mbtiles

Note that the Amazon S3 bucket must already exist, its Object Ownership must be set to "ACLs enabled", and Block Public Access must be switched off.

List all maps in the CloudPush tile storage:

  maptiler-cloudpush --access_key ACCESS_KEY --secret_key SECRET_KEY s3://bucket_name list

Delete a map:

  maptiler-cloudpush --access_key ACCESS_KEY --secret_key SECRET_KEY s3://bucket_name delete filename

Delete the whole CloudPush storage:

  maptiler-cloudpush --access_key ACCESS_KEY --secret_key SECRET_KEY s3://bucket_name destroy

Access credentials

The Amazon access key and secret key are available via the IAM service administration interface. The credentials for Google Cloud Storage are under Enable interoperable access in the menu of the service. Azure Blob Storage uses the Storage account name as the access key and the Key from Microsoft Azure Portal > Storage Accounts > Access keys as the secret key.

Instead of providing the access credentials in every command, they can be set as system environment variables.

Example on Windows OS:

  REM for Amazon S3
  set AWS_ACCESS_KEY_ID=[THE_ACCESS_KEY]
  set AWS_SECRET_ACCESS_KEY=[THE_SECRET_KEY]
  REM or for Google Cloud Storage
  set GOOG_ACCESS_KEY_ID=[THE_ACCESS_KEY]
  set GOOG_SECRET_ACCESS_KEY=[THE_SECRET_KEY]
  REM or for Microsoft Azure Storage
  set AZURE_STORAGE_ACCOUNT=[ACCOUNT NAME]
  set AZURE_STORAGE_ACCESS_KEY=[KEY]

Example on Linux/macOS:

  # for Amazon S3
  export AWS_ACCESS_KEY_ID=[THE_ACCESS_KEY]
  export AWS_SECRET_ACCESS_KEY=[THE_SECRET_KEY]
  # or for Google Cloud Storage
  export GOOG_ACCESS_KEY_ID=[THE_ACCESS_KEY]
  export GOOG_SECRET_ACCESS_KEY=[THE_SECRET_KEY]
  # or for Microsoft Azure Storage
  export AZURE_STORAGE_ACCOUNT=[ACCOUNT NAME]
  export AZURE_STORAGE_ACCESS_KEY=[KEY]

You can then call the utility without the credential arguments:

  maptiler-cloudpush s3://bucket_name list
  maptiler-cloudpush s3://bucket_name add filename.mbtiles
  maptiler-cloudpush gs://bucket_name list
  maptiler-cloudpush az://bucket_name list

Advanced options

Further options are available:

--create-bucket

Automatically creates the bucket if it does not exist

--no-index-json

Does not handle metadata in the CloudPush instance index.json

--raw

Same as --no-index-json

--basename [path]

Sets a custom basename (default: the basename of the MBTiles file)

--private

Makes the uploaded objects private (default: public)

--emulator

Enables the Azure Storage Emulator API

The list of available parameters can be displayed by running maptiler-cloudpush without any parameters.

Example of using a custom basename:

  maptiler-cloudpush --basename myfile s3://bucket_name add filename.mbtiles

This uploads tiles with URLs in the format myfile/z/x/y.ext. A custom basename may contain directory separators (slashes), for example:

  maptiler-cloudpush --basename year/month/myfile s3://bucket_name add filename.mbtiles

The result will have URLs in the format year/month/myfile/z/x/y.ext.
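The composition of the object key is plain string concatenation of the basename and the tile path, which can be sketched in shell (the coordinate values below are illustrative, not taken from a real tileset):

```shell
# Sketch: a custom basename is simply prefixed to the
# z/x/y.ext tile path when the object key is formed.
base="year/month/myfile"   # value passed to --basename
z=3; x=4; y=5; ext=png     # illustrative tile coordinates
echo "${base}/${z}/${x}/${y}.${ext}"
# → year/month/myfile/3/4/5.png
```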

Region-specific hosting can be set up via the environment variable AWS_BUCKET_REGION=[value] or with the parameter -R [value].

Example for the EU (Ireland) region:

  maptiler-cloudpush -R eu-west-1 s3://bucket_name add filename.mbtiles

The list of S3 regions is provided by the utility with the --more-help argument, or can be found at https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

To upload tiles into the Azure Storage Emulator, pass the --emulator parameter with each command.

Example for emulator (does not require credentials):

  maptiler-cloudpush --emulator az://bucket_name add filename.mbtiles

Azure Storage is accessed using API version 2015-02-21.