Map Hosting
MapTiler Cloud
The MapTiler Upload utility (introduced in MapTiler Engine 13.2) lets you easily upload tilesets in GeoPackage or MBTiles format directly to MapTiler Cloud.
The basic command looks like this:
maptiler-upload upload filename.gpkg
This uploads the tileset to the MapTiler Cloud account associated with your MapTiler Engine license.
However, if you need to upload the data to a different MapTiler Cloud account, you can do so with the --token parameter:
maptiler-upload --token <your_auth_token> upload filename.gpkg
where the --token value is the Service Token associated with the MapTiler Cloud account you want to upload the tileset to. Read this guide to learn more.
MapTiler Server
To host MBTiles or GeoPackage from your own infrastructure, we recommend using MapTiler Server. Simply upload the created maps within a few clicks and display them in your application with MapTiler SDK, MapLibre GL JS, Leaflet, OpenLayers, or CesiumJS. There is a standalone how-to guide describing the whole process of hosting with MapTiler Server.
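As a quick check that the server is reachable before wiring it into your application, you can request its root URL; this sketch assumes MapTiler Server runs locally on its default port 3650 (adjust the host and port to your deployment):
curl -I http://localhost:3650/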
Standard web server
You can also host your maps very simply on standard hosting (such as an ordinary company web server). Just upload the directory with tiles to your web hosting and the layer is automatically available, as sketched below. Once uploaded, the produced maps can be opened in any viewer supporting the OGC WMTS standard.
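For example, a minimal sketch of the upload step using rsync over SSH; the user, host, and target path are placeholders for your own hosting:
# copy the generated tile directory to the web root of your hosting
rsync -av tiles/ user@example.com:/var/www/html/mymap/
The map is then served from https://example.com/mymap/ and can be referenced from any WMTS-capable viewer.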
Cloud Hosting - CloudPush
The CloudPush command can be used for uploading tiles to Amazon S3, Google Cloud Storage, or Microsoft Azure Blob hosting. The examples below assume S3 storage; to use Google Cloud Storage or Microsoft Azure Blob instead, change the “s3” in the commands to “gs” or “az”, respectively. A full how-to with visual examples is available as a how-to article.
The CloudPush instance is initialized with the first uploaded map via this command-line utility. It automatically creates an empty index.json and an index.html, and sets the WebSite configuration for the bucket. To get the required credentials, see the Access credentials section below.
Upload tiles from an MBTiles file to S3
maptiler-cloudpush --access_key ACCESS_KEY --secret_key SECRET_KEY s3://bucket_name add filename.mbtiles
List all maps in the CloudPush tile storage
maptiler-cloudpush --access_key ACCESS_KEY --secret_key SECRET_KEY s3://bucket_name list
Delete a map
maptiler-cloudpush --access_key ACCESS_KEY --secret_key SECRET_KEY s3://bucket_name delete filename
Delete the whole CloudPush storage
maptiler-cloudpush --access_key ACCESS_KEY --secret_key SECRET_KEY s3://bucket_name destroy
Access credentials
The Amazon access key and secret key are available via the IAM service administration interface. The credentials for Google Cloud Storage are under “Enable interoperable access” in the menu of the service. Azure Blob Storage requires the Storage account name as the Access Key and the Key from the Microsoft Azure Portal - Storage Accounts - Access keys.
Instead of providing the access credentials in every command, you can set them as system environment variables.
Example on Windows OS:
REM for Amazon S3
set AWS_ACCESS_KEY_ID=[THE_ACCESS_KEY]
set AWS_SECRET_ACCESS_KEY=[THE_SECRET_KEY]
REM or for Google Cloud Storage
set GOOG_ACCESS_KEY_ID=[THE_ACCESS_KEY]
set GOOG_SECRET_ACCESS_KEY=[THE_SECRET_KEY]
REM or for Microsoft Azure Storage
set AZURE_STORAGE_ACCOUNT=[ACCOUNT NAME]
set AZURE_STORAGE_ACCESS_KEY=[KEY]
Example on Linux / macOS:
# for Amazon S3
export AWS_ACCESS_KEY_ID=[THE_ACCESS_KEY]
export AWS_SECRET_ACCESS_KEY=[THE_SECRET_KEY]
# or for Google Cloud Storage
export GOOG_ACCESS_KEY_ID=[THE_ACCESS_KEY]
export GOOG_SECRET_ACCESS_KEY=[THE_SECRET_KEY]
# or for Microsoft Azure Storage
export AZURE_STORAGE_ACCOUNT=[ACCOUNT NAME]
export AZURE_STORAGE_ACCESS_KEY=[KEY]
and call the utility without these arguments:
maptiler-cloudpush s3://bucket_name list
maptiler-cloudpush s3://bucket_name add filename.mbtiles
maptiler-cloudpush gs://bucket_name list
maptiler-cloudpush az://bucket_name list
Advanced options
Further options are available, such as the following (a combined example appears after the list):
-create-bucket: automatically creates the bucket if it does not exist
-no-index-json: does not handle metadata in the CloudPush instance index.json
-raw: same as -no-index-json
-basename [path]: sets a custom basename (default: the basename of the MBTiles file)
-private: uploaded objects are private (default: public)
-emulator: enables the Azure Storage Emulator API
The list of available parameters can be displayed by running ./maptiler-cloudpush without any parameters.
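As an illustration, a sketch combining several of the options above; it assumes the access credentials are set as environment variables (see Access credentials), and bucket_name and filename.mbtiles are placeholders as in the previous examples:
maptiler-cloudpush -create-bucket -private s3://bucket_name add filename.mbtiles
This creates the bucket if it does not exist yet and keeps the uploaded objects private.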
Example of using a custom basename:
maptiler-cloudpush --basename myfile s3://bucket_name add filename.mbtiles
uploads tiles with URLs in the format myfile/z/x/y.ext. A custom basename can also contain directory separators (slashes), for example:
maptiler-cloudpush --basename year/month/myfile s3://bucket_name add filename.mbtiles
The result will have URLs in the format year/month/myfile/z/x/y.ext.
Region-specific hosting can be set up via the environment variable AWS_BUCKET_REGION=[value] or with the parameter -R [value].
Example for the EU (Ireland) region:
maptiler-cloudpush -R eu-west-1 s3://bucket_name add filename.mbtiles
The list of S3 regions is provided by the utility with the --more-help argument or is visible at https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
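Alternatively, a sketch of the same upload using the environment variable instead of the -R parameter (again assuming the credentials are set in the environment):
export AWS_BUCKET_REGION=eu-west-1
maptiler-cloudpush s3://bucket_name add filename.mbtiles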
To enable uploading tiles into the Azure Storage Emulator, you need to pass the --emulator parameter with each command.
Example for the emulator (credentials are not required):
maptiler-cloudpush --emulator az://bucket_name add filename.mbtiles
The Azure Storage integration uses API version 2015-02-21.