Introduction
My colleagues have good contour line data in ArcGIS geodatabase (.gdb) format, and I want to make vector tiles from it.
This article describes my second experiment in vector tile conversion from an Esri geodatabase, following the test described in my previous article. Please refer to that article if needed.
The previous experiment used a geodatabase with a single layer, while this second experiment uses a geodatabase with more than one layer.
I think dividing (clipping) the source data into certain tile extents before converting it into vector tiles will significantly improve work efficiency, so this time I tried the conversion with multiple layers.
My Working Environment
- nodejs: v16.15.0
- npm: 8.5.5
- tippecanoe: v1.36.0
- GDAL: 3.4.1, released 2021/12/27
- Platform: Ubuntu 22.04 LTS (built on Docker for windows)
Preliminary Test: Comparison of GDAL exporting time to GeoJSONs (by Layers vs by BBOX)
Data used for the preliminary test 1
This time, I worked with the following geodatabase containing 16 layers. Each layer contains the features within a certain extent (the extent of a zoom level 6 tile). The layers were prepared by my colleague, who clipped the features into these extents.
For comparison, I also used a gdb file covering the same extent but without the layer structure (that is, all features in a single layer).
Both gdb files cover the same extent, and their sizes are around 1.4 GB (1.38 GB with 16 layers, 1.37 GB with a single layer).
Method 1: Specifying by layer
Given that the layers are already prepared, I measured the time to read each layer and write it out to GeoJSON text sequences with the following commands. (Note: the script is shown with line breaks for readability; the actual script does not have them.)
echo 6-32-20; date; ogr2ogr -f GeoJSONSeq test/6-32-20.geojsons test_area_z6.gdb t_6_32_20; date;
echo 6-32-21; date; ogr2ogr -f GeoJSONSeq test/6-32-21.geojsons test_area_z6.gdb t_6_32_21; date;
echo 6-32-22; date; ogr2ogr -f GeoJSONSeq test/6-32-22.geojsons test_area_z6.gdb t_6_32_22; date;
echo 6-32-23; date; ogr2ogr -f GeoJSONSeq test/6-32-23.geojsons test_area_z6.gdb t_6_32_23; date;
echo 6-33-20; date; ogr2ogr -f GeoJSONSeq test/6-33-20.geojsons test_area_z6.gdb t_6_33_20; date;
echo 6-33-21; date; ogr2ogr -f GeoJSONSeq test/6-33-21.geojsons test_area_z6.gdb t_6_33_21; date;
echo 6-33-22; date; ogr2ogr -f GeoJSONSeq test/6-33-22.geojsons test_area_z6.gdb t_6_33_22; date;
echo 6-33-23; date; ogr2ogr -f GeoJSONSeq test/6-33-23.geojsons test_area_z6.gdb t_6_33_23; date;
echo 6-34-20; date; ogr2ogr -f GeoJSONSeq test/6-34-20.geojsons test_area_z6.gdb t_6_34_20; date;
echo 6-34-21; date; ogr2ogr -f GeoJSONSeq test/6-34-21.geojsons test_area_z6.gdb t_6_34_21; date;
echo 6-34-22; date; ogr2ogr -f GeoJSONSeq test/6-34-22.geojsons test_area_z6.gdb t_6_34_22; date;
echo 6-34-23; date; ogr2ogr -f GeoJSONSeq test/6-34-23.geojsons test_area_z6.gdb t_6_34_23; date;
echo 6-35-20; date; ogr2ogr -f GeoJSONSeq test/6-35-20.geojsons test_area_z6.gdb t_6_35_20; date;
echo 6-35-21; date; ogr2ogr -f GeoJSONSeq test/6-35-21.geojsons test_area_z6.gdb t_6_35_21; date;
echo 6-35-22; date; ogr2ogr -f GeoJSONSeq test/6-35-22.geojsons test_area_z6.gdb t_6_35_22; date;
echo 6-35-23; date; ogr2ogr -f GeoJSONSeq test/6-35-23.geojsons test_area_z6.gdb t_6_35_23; date;
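Rather than typing the 16 commands by hand, the same command lines can be generated with a short Node.js sketch; the tile range and file names below simply mirror the commands above:

```javascript
// Generate the Method 1 ogr2ogr command lines for the 16 zoom-6 tiles
// (x 32..35, y 20..23), matching the commands listed above.
const tiles = []
for (let x = 32; x <= 35; x++) {
  for (let y = 20; y <= 23; y++) tiles.push([6, x, y])
}
const cmds = tiles.map(([z, x, y]) =>
  `ogr2ogr -f GeoJSONSeq test/${z}-${x}-${y}.geojsons test_area_z6.gdb t_${z}_${x}_${y}`
)
console.log(cmds.join('\n'))
```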
Method 2: Specifying by BBOX
Although it needs some additional time to clip the features, I also measured the time to read/clip without layers and write to GeoJSON text sequences with the following commands. (Note: the script is shown with line breaks for readability; the actual script does not have them.)
echo 6-32-20; date; ogr2ogr -f GeoJSONSeq test2/6-32-20.geojsons -clipdst 0 52.48278022207821 5.625 55.77657301866769 test_area.gdb ; date;
echo 6-32-21; date; ogr2ogr -f GeoJSONSeq test2/6-32-21.geojsons -clipdst 0 48.922499263758255 5.625 52.48278022207821 test_area.gdb ; date;
echo 6-32-22; date; ogr2ogr -f GeoJSONSeq test2/6-32-22.geojsons -clipdst 0 45.08903556483103 5.625 48.922499263758255 test_area.gdb ; date;
echo 6-32-23; date; ogr2ogr -f GeoJSONSeq test2/6-32-23.geojsons -clipdst 0 40.97989806962013 5.625 45.08903556483103 test_area.gdb ; date;
echo 6-33-20; date; ogr2ogr -f GeoJSONSeq test2/6-33-20.geojsons -clipdst 5.625 52.48278022207821 11.25 55.77657301866769 test_area.gdb ; date;
echo 6-33-21; date; ogr2ogr -f GeoJSONSeq test2/6-33-21.geojsons -clipdst 5.625 48.922499263758255 11.25 52.48278022207821 test_area.gdb ; date;
echo 6-33-22; date; ogr2ogr -f GeoJSONSeq test2/6-33-22.geojsons -clipdst 5.625 45.08903556483103 11.25 48.922499263758255 test_area.gdb ; date;
echo 6-33-23; date; ogr2ogr -f GeoJSONSeq test2/6-33-23.geojsons -clipdst 5.625 40.97989806962013 11.25 45.08903556483103 test_area.gdb ; date;
echo 6-34-20; date; ogr2ogr -f GeoJSONSeq test2/6-34-20.geojsons -clipdst 11.25 52.48278022207821 16.875 55.77657301866769 test_area.gdb ; date;
echo 6-34-21; date; ogr2ogr -f GeoJSONSeq test2/6-34-21.geojsons -clipdst 11.25 48.922499263758255 16.875 52.48278022207821 test_area.gdb ; date;
echo 6-34-22; date; ogr2ogr -f GeoJSONSeq test2/6-34-22.geojsons -clipdst 11.25 45.08903556483103 16.875 48.922499263758255 test_area.gdb ; date;
echo 6-34-23; date; ogr2ogr -f GeoJSONSeq test2/6-34-23.geojsons -clipdst 11.25 40.97989806962013 16.875 45.08903556483103 test_area.gdb ; date;
echo 6-35-20; date; ogr2ogr -f GeoJSONSeq test2/6-35-20.geojsons -clipdst 16.875 52.48278022207821 22.5 55.77657301866769 test_area.gdb ; date;
echo 6-35-21; date; ogr2ogr -f GeoJSONSeq test2/6-35-21.geojsons -clipdst 16.875 48.922499263758255 22.5 52.48278022207821 test_area.gdb ; date;
echo 6-35-22; date; ogr2ogr -f GeoJSONSeq test2/6-35-22.geojsons -clipdst 16.875 45.08903556483103 22.5 48.922499263758255 test_area.gdb ; date;
echo 6-35-23; date; ogr2ogr -f GeoJSONSeq test2/6-35-23.geojsons -clipdst 16.875 40.97989806962013 22.5 45.08903556483103 test_area.gdb ; date;
How can we find BBOX (bounding box) coordinates?
There is a Node.js module from Mapbox named "@mapbox/tilebelt".
const tilebelt = require('@mapbox/tilebelt')
const bbox = tilebelt.tileToBBOX([32, 20, 6]) // the tile is passed as [x, y, z]
console.log(bbox)
The output is in the order [Xmin, Ymin, Xmax, Ymax].
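If the module is not at hand, the same bbox can be computed directly from the standard web-mercator tile math. This sketch reproduces what tilebelt.tileToBBOX does, and for tile 6-32-20 it yields the -clipdst values used above:

```javascript
// Web-mercator tile -> [west, south, east, north] bounding box.
// Equivalent math to tilebelt.tileToBBOX([x, y, z]), without the npm dependency.
const tile2lon = (x, z) => (x / Math.pow(2, z)) * 360 - 180
const tile2lat = (y, z) => {
  const n = Math.PI - (2 * Math.PI * y) / Math.pow(2, z)
  return (180 / Math.PI) * Math.atan(0.5 * (Math.exp(n) - Math.exp(-n)))
}
const tileToBBOX = ([x, y, z]) =>
  [tile2lon(x, z), tile2lat(y + 1, z), tile2lon(x + 1, z), tile2lat(y, z)]

console.log(tileToBBOX([32, 20, 6]))
// [0, 52.48278022207821, 5.625, 55.77657301866769] -- the 6-32-20 clip box
```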
Comparison Result
Comparison of Processing time (seconds)
area | Method 1: layer | Method 2: BBOX | GeoJSONs size | Difference (Method 2 - Method 1) |
---|---|---|---|---|
6-32-20 | 5 | 57 | 8 MB | 52 |
6-32-21 | 67 | 98 | 141 MB | 31 |
6-32-22 | 121 | 173 | 301 MB | 52 |
6-32-23 | 214 | 263 | 572 MB | 49 |
6-33-20 | 38 | 69 | 66 MB | 31 |
6-33-21 | 140 | 174 | 342 MB | 34 |
6-33-22 | 590 | 663 | 1,043 MB | 73 |
6-33-23 | 199 | 247 | 474 MB | 48 |
6-34-20 | 51 | 80 | 83 MB | 29 |
6-34-21 | 135 | 177 | 286 MB | 42 |
6-34-22 | 505 | 581 | 920 MB | 76 |
6-34-23 | 192 | 245 | 444 MB | 53 |
6-35-20 | 52 | 102 | 98 MB | 50 |
6-35-21 | 132 | 170 | 290 MB | 38 |
6-35-22 | 139 | 177 | 292 MB | 38 |
6-35-23 | 347 | 375 | 839 MB | 28 |
As expected, processing by layer (Method 1) was always faster.
On the other hand, Method 2 is not bad either; I might even say it could be the better choice, because it saves the ArcGIS preprocessing time needed to create the layers in a gdb. The differences between the two methods ranged from 28 to 76 seconds (average: 45.25, median: 45, stdev: 14).
If this difference (about 45 seconds) mainly reflects the time to spatially search the roughly 1.4 GB source data and clip the selected features to the bounding box, and if that time is a linear function of the source data size, we would need about 28 minutes to search the whole global database (52 GB) and clip the features to a single extent.
45 sec * ( 52GB / 1.4GB ) / 60 (sec/min) = 27.9 min
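The summary statistics and the extrapolation above can be reproduced with a short Node.js snippet. The differences are copied from the last column of the comparison table; I assume the population standard deviation, which rounds to the reported 14:

```javascript
// Differences (Method 2 - Method 1) from the comparison table, in seconds.
const diffs = [52, 31, 52, 49, 31, 34, 73, 48, 29, 42, 76, 53, 50, 38, 38, 28]

const avg = diffs.reduce((a, b) => a + b, 0) / diffs.length
const sorted = [...diffs].sort((a, b) => a - b)
const median = (sorted[7] + sorted[8]) / 2 // mean of the two middle values
const sd = Math.sqrt(
  diffs.map(d => (d - avg) ** 2).reduce((a, b) => a + b, 0) / diffs.length
)
console.log(avg, median, Math.round(sd)) // 45.25 45 14

// Linear extrapolation to the 52 GB global database:
console.log((45 * (52 / 1.4) / 60).toFixed(1), 'min') // 27.9 min
```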
Preliminary Test 2: Comparison of GDAL exporting times with different source sizes (BBOX method)
When the source gdb was about 1.4 GB, it took about 45 seconds to search features with a bbox. Let's see what happens with the same bbox when we enlarge the spatial extent of the source to get a larger data size.
Using the same bounding boxes as in preliminary test 1, I simply extended the extent of the source data so that its size became 10.3 GB.
If processing time is a linear function of the source data size, the processing time should increase by around 286 seconds, extrapolating from the previous test.
45 sec * ((10.3GB - 1.4GB)/1.4GB) = 286 sec.
Preliminary test 2: result
Comparison of Processing time (seconds)
area | Method 2: BBOX from small gdb (1.4 GB) | Method 2: BBOX from large gdb (10.3 GB) | GeoJSONs size | Difference (large source - small source) |
---|---|---|---|---|
6-32-20 | 57 | 446 | 8 MB | 389 |
6-32-21 | 98 | 482 | 141 MB | 384 |
6-32-22 | 173 | 532 | 301 MB | 359 |
6-32-23 | 263 | 655 | 572 MB | 392 |
6-33-20 | 69 | 453 | 66 MB | 384 |
6-33-21 | 174 | 569 | 342 MB | 395 |
6-33-22 | 663 | 967 | 1,043 MB | 304 |
6-33-23 | 247 | 567 | 474 MB | 320 |
6-34-20 | 80 | 474 | 83 MB | 394 |
6-34-21 | 177 | 570 | 286 MB | 393 |
6-34-22 | 581 | 964 | 920 MB | 383 |
6-34-23 | 245 | 621 | 444 MB | 376 |
6-35-20 | 102 | 473 | 98 MB | 371 |
6-35-21 | 170 | 568 | 290 MB | 398 |
6-35-22 | 177 | 559 | 292 MB | 382 |
6-35-23 | 375 | 773 | 839 MB | 398 |
Interestingly, the difference in processing time for each bbox is quite stable, ranging from 304 to 398 seconds (average: 376 seconds). My estimate was 286 seconds, so it took somewhat longer, but I think this is still acceptable.
The average processing time for these 16 areas was 605 seconds (about 10 minutes), so it should be practical to run the conversion with some concurrent processes.
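As a naive back-of-the-envelope check, running the 16 areas with four concurrent workers (the concurrency setting I use later in the config) would take roughly:

```javascript
// Rough wall-clock estimate: 16 areas, 605 s average, 4 concurrent workers.
// This ignores the large variance between areas, so it is only indicative.
const areas = 16
const avgSeconds = 605
const concurrent = 4
const estimate = Math.ceil(areas / concurrent) * avgSeconds
console.log(`${estimate} s (about ${Math.round(estimate / 60)} min)`) // 2420 s (about 40 min)
```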
Experiment
Step 1: Checking the GDAL function
We can access a layer in a geodatabase by adding the layer name after the gdb name, as shown in the following commands. This time, I exported to GeoJSON text sequences so that I could check the result.
ogr2ogr -f GeoJSONSeq 6-32-20.geojsons test_area_z6.gdb t_6_32_20
ogr2ogr -f GeoJSONSeq 6-32-21.geojsons test_area_z6.gdb t_6_32_21
Step 2: Simple Script
const config = require('config')
const Parser = require('json-text-sequence').parser
const { spawn } = require('child_process')
const srcs = config.get('srcs')
const ogr2ogrPath = config.get('ogr2ogrPath')
for (const src of srcs) { // if source is a single file, this loop is not necessary.
for (const tile of src.tiles){
const downstream = process.stdout
//console.log(`t_${tile[0]}_${tile[1]}_${tile[2]}`)
const parser = new Parser()
.on('data', f => {
f.tippecanoe = {
layer: src.layer,
minzoom: src.minzoom,
maxzoom: src.maxzoom
}
delete f.properties.SHAPE_Length //SHAPE_Length is not necessary
//console.log(JSON.stringify(f, null, 2)) // f when writing
//downstream.write(`\x1e${JSON.stringify(f.properties)}\n`)
downstream.write(`\x1e${JSON.stringify(f)}\n`)
})
const ogr2ogr = spawn(ogr2ogrPath, [
'-f', 'GeoJSONSeq',
'-lco', 'RS=YES',
'/vsistdout/',
src.url,
`t_${tile[0]}_${tile[1]}_${tile[2]}`
])
ogr2ogr.stdout.pipe(parser)
}
}
{
minzoom: 10
maxzoom: 12
srcs: [
{
url: test_area_z6.gdb
layer: elev
minzoom: 10
maxzoom: 12
tiles: [
[6,32,20]
[6,32,21]
[6,32,22]
[6,32,23]
[6,33,20]
[6,33,21]
[6,33,22]
[6,33,23]
[6,34,20]
[6,34,21]
[6,34,22]
[6,34,23]
[6,35,20]
[6,35,21]
[6,35,22]
[6,35,23]
]
}
]
ogr2ogrPath: ogr2ogr
tippecanoePath: /usr/local/bin/tippecanoe
dstDir: zxy
}
By running this script, we can see the GeoJSON text sequence on standard output.
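For illustration, here is a hypothetical record of that sequence, built the same way the 'data' handler writes it (the attribute values and geometry are made up; each record is prefixed with the RS control character 0x1e):

```javascript
// A made-up contour feature with the tippecanoe extension attached,
// framed as one record of a GeoJSON text sequence.
const f = {
  type: 'Feature',
  properties: { contour: 200 }, // hypothetical attribute value
  geometry: { type: 'LineString', coordinates: [[0.1, 52.5], [0.2, 52.6]] },
  tippecanoe: { layer: 'elev', minzoom: 10, maxzoom: 12 }
}
process.stdout.write(`\x1e${JSON.stringify(f)}\n`)
```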
Next, I need to adjust this script so that I can obtain the vector tiles tile by tile. (I may need to use a queue.)
Consideration of maximum zoom level
This time, the source gdb was about 1.5 GB. When I made 16 mbtiles with zoom levels 10-13, the total size was about 2.3 GB. When I made them with zoom levels 10-12, the total size was about two thirds of that for ZL 10-13.
Given that the original data is over 50 GB for global coverage, I think it is a good idea to limit the zoom levels to 10 through 12.
Developing a Script
Learning from UNVT's past efforts, I made a script to create vector tiles (an mbtiles file for each spatial module).
https://github.com/ubukawa/geodb5
- Use of the "better-queue" module
- To process the areas concurrently, the "better-queue" module is used.
- Use of spatial modules
- Because the source data is large, the output vector tiles are split by spatial extent, i.e. into spatial modules. For example, 6-34-22.mbtiles covers the extent of the 6-34-22 tile (in z-x-y order).
- At first, the GeoJSON sequence was piped into the parser and forwarded directly to tippecanoe for the vector tile conversion, but that caused a data overflow, so the GeoJSON sequence is now exported to an intermediate file before being converted into vector tiles.
// This is being edited.
const config = require('config')
const fs = require('fs')
const Queue = require('better-queue')
const { spawn } = require('child_process')
const Parser = require('json-text-sequence').parser
const tilebelt = require('@mapbox/tilebelt')
const srcdb = config.get('srcdb')
const ogr2ogrPath = config.get('ogr2ogrPath')
const tippecanoePath = config.get('tippecanoePath')
const minzoom = config.get('minzoom')
const maxzoom = config.get('maxzoom')
const mbtilesDir = config.get('mbtilesDir')
const geojsonsDir = config.get('geojsonsDir')
let keyInProgress = []
let idle = true
const isIdle = () => {
return idle
}
const fsOptions = {
encoding: "utf8"
}
const sleep = (wait) => {
return new Promise((resolve, reject) => {
setTimeout( () => {resolve()}, wait)
})
}
const queue = new Queue(async (t, cb) => {
const startTime = new Date()
const key = t.key
const tile = t.tile
const [z, x, y] = tile
const gjsPath = `${geojsonsDir}/inter-${key}.geojsons`
const tmpPath = `${mbtilesDir}/part-${key}.mbtiles`
const dstPath = `${mbtilesDir}/${key}.mbtiles`
const bbox = tilebelt.tileToBBOX([x, y, z])
keyInProgress.push(key)
console.log(`[${keyInProgress}] in progress`)
const FSstream = fs.createWriteStream(gjsPath, fsOptions)
const parser = new Parser()
.on('data', f => {
f.tippecanoe = {
layer: srcdb.layer,
minzoom: srcdb.minzoom,
maxzoom: srcdb.maxzoom
}
delete f.properties.SHAPE_Length
if ((f.properties.contour % 100) == 0){
f.tippecanoe.minzoom = srcdb.minzoom
} else if ((f.properties.contour % 40) == 0){
f.tippecanoe.minzoom = srcdb.minzoom + 2
} else {
f.tippecanoe.minzoom = srcdb.minzoom + 3
}
FSstream.write(`\x1e${JSON.stringify(f)}\n`)
})
.on('finish', () => {
FSstream.end()
const PendTime = new Date()
//console.log(`FS write end ${key}: ${startTime.toISOString()} --> ${PendTime.toISOString()}`)
//from here
const VTconversion = new Promise((resolve, reject)=>{
const tippecanoe = spawn(tippecanoePath, [
`--output=${tmpPath}`,
'--no-feature-limit',
'--no-tile-size-limit',
'--force',
'--simplification=2',
`--clip-bounding-box=${bbox.join(',')}`,
'--quiet',
`--minimum-zoom=${minzoom}`,
`--maximum-zoom=${maxzoom}`,
gjsPath
])
.on('exit', () => {
fs.renameSync(tmpPath, dstPath)
fs.unlinkSync(gjsPath)
//const endTime = new Date()
//console.log(`Tippecanoe: ${key} ends at ${endTime.toISOString()} (^o^)/`)
//keyInProgress = keyInProgress.filter((v) => !(v === key))
resolve()
})
})
.then(()=> {
const endTime = new Date()
console.log(` - ${key} ends: ${startTime.toISOString()} --> ${endTime.toISOString()} (^o^)/`)
keyInProgress = keyInProgress.filter((v) => !(v === key))
return cb()
})
//until here
})
const ogr2ogr = spawn(ogr2ogrPath, [
'-f', 'GeoJSONSeq',
'-lco', 'RS=YES',
'/vsistdout/',
'-clipdst', bbox[0], bbox[1], bbox[2], bbox[3],
srcdb.url
])
//just in case (from here)
while(!isIdle()){
await sleep(3000)
}
//just in case (until here)
ogr2ogr.stdout.pipe(parser)
// The following part is moved into .then of
// const endTime = new Date()
// console.log(`${key} ends: ${startTime} --> ${endTime} (^o^)/`)
// keyInProgress = keyInProgress.filter((v) => !(v === key))
// return cb()
},{
concurrent: config.get('concurrent'),
maxRetries: config.get('maxRetries'),
retryDelay: config.get('retryDelay')
})
const queueTasks = () => {
for (let tile of srcdb.tiles){
//for (let tile of [[6,32,20],[6,32,21],[6,32,22],[6,32,23],[6,33,20],[6,33,21],[6,33,22]]){
//for (let key of ['bndl1', 'bndl2', 'bndl3', 'bndl4', 'bndl5', 'bndl6']){
const key = `${tile[0]}-${tile[1]}-${tile[2]}`
queue.push({
key: key,
tile: tile
})
}
}
const shutdown = () => {
console.log('System shutdown (^_^)')
}
const main = async () =>{
const stTime = new Date()
console.log(`${stTime.toISOString()}: Production starts. `)
queueTasks()
queue.on('drain', () => {
const closeTime = new Date()
console.log(`Production ends: ${stTime.toISOString()} --> ${closeTime.toISOString()}`)
shutdown()
})
}
main()
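The zoom assignment rule in the parser callback above can be isolated for clarity: contours at multiples of 100 appear from the base minzoom, multiples of 40 appear two levels later, and everything else one level after that.

```javascript
// Minzoom rule extracted from the parser's 'data' handler above.
const minzoomFor = (contour, base) => {
  if (contour % 100 === 0) return base // index contours appear first
  if (contour % 40 === 0) return base + 2
  return base + 3 // all remaining contours
}
console.log(minzoomFor(200, 10), minzoomFor(80, 10), minzoomFor(50, 10)) // 10 12 13
```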
The config file was:
{
minzoom: 10
maxzoom: 13
srcdb: {
url: test_area_z4_8_9.gdb
layer: elev
minzoom: 10
maxzoom: 13
tiles: [
[4,8,0]
[4,8,1]
[4,8,2]
[4,8,10]
[4,8,11]
[4,8,12]
[4,8,13]
[4,8,14]
[4,8,15]
[4,9,0]
[4,9,1]
[4,9,10]
[4,9,11]
[4,9,12]
[4,9,13]
[4,9,14]
[4,9,15]
[4,10,0]
[4,10,1]
[4,10,2]
[4,10,3]
[4,10,9]
[4,10,10]
[4,10,11]
[4,10,12]
[4,10,13]
[4,10,14]
[4,10,15]
[4,11,0]
[4,11,1]
[4,11,2]
[4,11,3]
[4,11,8]
[4,11,9]
[4,11,10]
[4,11,11]
[4,11,12]
[4,11,13]
[4,11,14]
[4,11,15]
[4,12,0]
[4,12,1]
[4,12,2]
[4,12,3]
[4,12,9]
[4,12,10]
[4,12,11]
[4,12,12]
[4,12,13]
[4,12,14]
[4,12,15]
[4,13,0]
[4,13,1]
[4,13,2]
[4,13,3]
[4,13,10]
[4,13,11]
[4,13,12]
[4,13,13]
[4,13,14]
[4,13,15]
[4,14,0]
[4,14,1]
[4,14,2]
[4,14,3]
[4,14,7]
[4,14,11]
[4,14,12]
[4,14,13]
[4,14,14]
[4,14,15]
[4,15,0]
[4,15,1]
[4,15,2]
[4,15,3]
[4,15,4]
[4,15,5]
[4,15,6]
[4,15,7]
[4,15,8]
[4,15,11]
[4,15,12]
[4,15,13]
[4,15,14]
[4,15,15]
[5,16,13]
[5,16,16]
[5,16,17]
[5,16,18]
[5,16,19]
[5,17,13]
[5,17,14]
[5,17,17]
[5,17,18]
[5,17,19]
[5,18,13]
[5,18,14]
[5,18,19]
[5,19,14]
[5,19,18]
[5,19,19]
[5,20,8]
[5,20,14]
[5,20,15]
[5,20,16]
[5,21,8]
[5,21,11]
[5,21,12]
[5,21,13]
[5,21,14]
[5,21,15]
[5,21,16]
[5,21,17]
[5,22,8]
[5,22,9]
[5,22,11]
[5,23,8]
[5,23,9]
[5,23,11]
[5,23,12]
[5,23,15]
[5,24,8]
[5,24,9]
[5,24,10]
[5,24,11]
[5,24,12]
[5,24,16]
[5,24,17]
[5,25,8]
[5,25,9]
[5,25,10]
[5,25,11]
[5,25,17]
[5,26,8]
[5,26,9]
[5,26,10]
[5,26,11]
[5,26,17]
[5,26,18]
[5,26,19]
[5,27,8]
[5,27,9]
[5,27,10]
[5,27,13]
[5,27,14]
[5,27,16]
[5,27,17]
[5,27,18]
[5,27,19]
[5,28,8]
[5,28,9]
[5,28,10]
[5,28,13]
[5,28,16]
[5,28,17]
[5,28,18]
[5,28,20]
[5,28,21]
[5,29,8]
[5,29,9]
[5,29,10]
[5,29,11]
[5,29,12]
[5,29,13]
[5,29,16]
[5,29,17]
[5,29,20]
[5,29,21]
[5,30,18]
[5,30,19]
[5,30,20]
[5,30,21]
[5,31,18]
[5,31,21]
[6,32,28]
[6,32,29]
[6,32,30]
[6,32,31]
[6,33,28]
[6,33,29]
[6,33,30]
[6,33,31]
[6,34,30]
[6,34,31]
[6,34,32]
[6,34,33]
[6,35,30]
[6,35,31]
[6,35,32]
[6,35,33]
[6,36,30]
[6,36,31]
[6,36,32]
[6,36,33]
[6,36,34]
[6,36,35]
[6,36,36]
[6,36,37]
[6,37,30]
[6,37,31]
[6,37,32]
[6,37,33]
[6,37,34]
[6,37,35]
[6,37,36]
[6,37,37]
[6,38,30]
[6,38,31]
[6,38,32]
[6,38,33]
[6,38,34]
[6,38,35]
[6,39,30]
[6,39,31]
[6,39,32]
[6,39,33]
[6,39,34]
[6,39,35]
[6,40,34]
[6,40,35]
[6,41,34]
[6,41,35]
[6,44,24]
[6,44,25]
[6,44,26]
[6,44,27]
[6,44,28]
[6,44,29]
[6,44,30]
[6,44,31]
[6,45,24]
[6,45,25]
[6,45,26]
[6,45,27]
[6,45,28]
[6,45,29]
[6,45,30]
[6,45,31]
[6,46,26]
[6,46,27]
[6,46,28]
[6,46,29]
[6,47,26]
[6,47,27]
[6,47,28]
[6,47,29]
[6,48,26]
[6,48,27]
[6,48,28]
[6,48,29]
[6,48,30]
[6,48,31]
[6,49,26]
[6,49,27]
[6,49,28]
[6,49,29]
[6,49,30]
[6,49,31]
[6,50,24]
[6,50,25]
[6,50,26]
[6,50,27]
[6,50,28]
[6,50,29]
[6,50,30]
[6,50,31]
[6,50,32]
[6,50,33]
[6,51,24]
[6,51,25]
[6,51,26]
[6,51,27]
[6,51,28]
[6,51,29]
[6,51,30]
[6,51,31]
[6,51,32]
[6,51,33]
[6,52,24]
[6,52,25]
[6,52,26]
[6,52,27]
[6,52,28]
[6,52,29]
[6,52,30]
[6,52,31]
[6,52,32]
[6,52,33]
[6,53,24]
[6,53,25]
[6,53,26]
[6,53,27]
[6,53,28]
[6,53,29]
[6,53,30]
[6,53,31]
[6,53,32]
[6,53,33]
[6,54,22]
[6,54,23]
[6,54,24]
[6,54,25]
[6,54,30]
[6,54,31]
[6,55,22]
[6,55,23]
[6,55,24]
[6,55,25]
[6,55,30]
[6,55,31]
[6,56,22]
[6,56,23]
[6,56,24]
[6,56,25]
[6,56,38]
[6,56,39]
[6,57,22]
[6,57,23]
[6,57,24]
[6,57,25]
[6,57,38]
[6,57,39]
[6,58,36]
[6,58,37]
[6,58,38]
[6,58,39]
[6,59,36]
[6,59,37]
[6,59,38]
[6,59,39]
[6,62,38]
[6,62,39]
[6,62,40]
[6,62,41]
[6,63,38]
[6,63,39]
[6,63,40]
[6,63,41]
[6,32,16]
[6,32,17]
[6,32,18]
[6,32,19]
[6,32,20]
[6,32,21]
[6,32,22]
[6,32,23]
[6,32,24]
[6,32,25]
[6,33,16]
[6,33,17]
[6,33,18]
[6,33,19]
[6,33,20]
[6,33,21]
[6,33,22]
[6,33,23]
[6,33,24]
[6,33,25]
[6,34,16]
[6,34,17]
[6,34,18]
[6,34,19]
[6,34,20]
[6,34,21]
[6,34,22]
[6,34,23]
[6,34,24]
[6,34,25]
[6,35,16]
[6,35,17]
[6,35,18]
[6,35,19]
[6,35,20]
[6,35,21]
[6,35,22]
[6,35,23]
[6,35,24]
[6,35,25]
[6,36,16]
[6,36,17]
[6,36,18]
[6,36,19]
[6,36,20]
[6,36,21]
[6,36,22]
[6,36,23]
[6,36,24]
[6,36,25]
[6,37,16]
[6,37,17]
[6,37,18]
[6,37,19]
[6,37,20]
[6,37,21]
[6,37,22]
[6,37,23]
[6,37,24]
[6,37,25]
[6,38,16]
[6,38,17]
[6,38,18]
[6,38,19]
[6,38,20]
[6,38,21]
[6,38,22]
[6,38,23]
[6,38,24]
[6,38,25]
[6,38,26]
[6,38,27]
[6,39,16]
[6,39,17]
[6,39,18]
[6,39,19]
[6,39,20]
[6,39,21]
[6,39,22]
[6,39,23]
[6,39,24]
[6,39,25]
[6,39,26]
[6,39,27]
[6,40,18]
[6,40,19]
[6,40,20]
[6,40,21]
[6,40,22]
[6,40,23]
[6,40,24]
[6,40,25]
[6,40,26]
[6,40,27]
[6,41,18]
[6,41,19]
[6,41,20]
[6,41,21]
[6,41,22]
[6,41,23]
[6,41,24]
[6,41,25]
[6,41,26]
[6,41,27]
[6,42,18]
[6,42,19]
[6,42,20]
[6,42,21]
[6,43,18]
[6,43,19]
[6,43,20]
[6,43,21]
[6,44,20]
[6,44,21]
[6,45,20]
[6,45,21]
[6,46,20]
[6,46,21]
[6,47,20]
[6,47,21]
[5,16,6]
[5,16,7]
[5,17,6]
[5,17,7]
[5,18,4]
[5,18,5]
[5,18,6]
[5,18,7]
[5,19,4]
[5,19,5]
[5,19,6]
[5,19,7]
]
}
ogr2ogrPath: ogr2ogr
tippecanoePath: /usr/local/bin/tippecanoe
dstDir: zxy
concurrent: 4
maxRetries: 3
retryDelay: 5000
mbtilesDir: mbtiles
geojsonsDir: geojsons
spinnerString: 15
}