sudo apt-get install git
git clone https://github.com/ckan/ckan.git
cd ckan/
git checkout tags/ckan-2.9.5
cp contrib/docker/.env.template contrib/docker/.env
cd contrib/docker
docker-compose up -d --build
# Goal: Convert a CSV of point coordinates to a geopackage
import os
from osgeo import gdal, ogr, osr

# Create a test CSV (renamed the handle so it doesn't shadow the csv module)
with open('test.csv', 'w') as csv_file:
    csv_file.write('latitude,longitude\n')
    csv_file.write('61,-150\n')
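From here, one way to finish the conversion is gdal.VectorTranslate, with open options that tell the CSV driver which columns hold the point coordinates. This is a sketch, not from the original notes; EPSG:4326 is an assumption about the data, since the CSV itself carries no SRS information.

# Open the CSV as a vector dataset; X/Y_POSSIBLE_NAMES tell the CSV
# driver which columns to read point geometries from.
csv_ds = gdal.OpenEx(
    'test.csv', gdal.OF_VECTOR,
    open_options=['X_POSSIBLE_NAMES=longitude', 'Y_POSSIBLE_NAMES=latitude'])

# Write the points out as a geopackage. Setting the source and target
# SRS to the same EPSG:4326 (assumed) just assigns it without reprojecting.
gdal.VectorTranslate(
    'test.gpkg', csv_ds, format='GPKG',
    srcSRS='EPSG:4326', dstSRS='EPSG:4326')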
I couldn't find a complete specification for FIDs, but GDAL's vector data model docs say,
"The feature id (FID) of a feature is intended to be a unique identifier for the feature within the layer it is a member of. Freestanding features, or features not yet written to a layer may have a null (OGRNullFID) feature id. The feature ids are modeled in OGR as a 64-bit integer; however, this is not sufficiently expressive to model the natural feature ids in some formats. For instance, the GML feature id is a string."
This suggests that all features read from a layer will have a defined FID, and that any feature's FID is either defined or null (though it's not clear that this is guaranteed).
Format drivers may or may not enforce additional properties of the FID. For instance, shapefile FIDs start at 0 while geopackage FIDs start at 1. Every time a shapefile changes, the FIDs are re-ordered sequentially starting from 0.
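Both cases are easy to check from Python. Here's a minimal sketch; the geopackage path is hypothetical:

from osgeo import ogr

# A freestanding feature, not yet written to a layer, has the null FID,
# which the Python bindings represent as -1.
feature = ogr.Feature(ogr.FeatureDefn())
print(feature.GetFID())  # -1

# Features read from a layer have defined FIDs.
# 'points.gpkg' is a hypothetical file; geopackage FIDs start at 1.
ds = ogr.Open('points.gpkg')
for feat in ds.GetLayer():
    print(feat.GetFID())  # 1, 2, 3, ...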
Here's how I fixed the pygeoprocessing 2.3.3 release (with much help from James):
The problem: I had uploaded an invalid sdist tarball to the 2.3.3 release on GitHub and PyPI.
I created a post release with the right sdist file. I repeated the entire release process with the version number 2.3.3.post0: updating HISTORY.rst, PRing that into main, pushing the new tag, waiting for artifacts to build, and uploading them to GitHub and PyPI.
I then yanked the 2.3.3 release on PyPI. This seems to be the best practice when a release is broken (see PEP 592).
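As a sanity check: PEP 440 defines post releases to sort after their base version, which is why installers pick up 2.3.3.post0 once 2.3.3 is yanked. A quick check with the packaging library (not part of the original fix):

from packaging.version import Version

# a .postN release compares greater than its base release under PEP 440,
# so pip resolves 2.3.3.post0 as newer than the yanked 2.3.3
assert Version('2.3.3.post0') > Version('2.3.3')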
Places to update when adding a new model:
- HISTORY.rst: model names list in comment at top of file
- HISTORY.rst: note that the model was added
- installer/windows/invest_installer.nsi: windows start menu link
- installer/windows/invest_installer.nsi: windows data downloads list
- Makefile: user's guide commit hash
- Makefile: sample data commit hash
- Makefile: test data commit hash (if needed)
- Makefile: ZIPDIRS list
- scripts/invest-autotest.py: add the model to the dictionary
Here are some examples of how to use pygeoprocessing for reclassification. Note: tested with pygeoprocessing 2.3.2.
pygeoprocessing provides the reclassify_raster function which can handle basic cases. See the docstring for details.
import pygeoprocessing
import numpy
from osgeo import gdal, osr

srs = osr.SpatialReference()
srs.ImportFromEPSG(32731)  # WGS84/UTM zone 31S
projection_wkt = srs.ExportToWkt()

arr = numpy.array([[0, 1, -1]], dtype=numpy.int16)
base_nodata = 0
target_datatype = gdal.GDT_Int16

# hypothetical continuation: write the array to a raster, then reclassify it
pygeoprocessing.numpy_array_to_raster(
    arr, base_nodata, (1, -1), (166000, 5000000), projection_wkt, 'base.tif')
value_map = {0: 0, 1: 10, -1: 20}  # example mapping; keys cover all raster values
pygeoprocessing.reclassify_raster(
    ('base.tif', 1), value_map, 'target.tif', target_datatype, base_nodata)
#!/bin/sh
# post-commit hook to keep one branch updated with all the changes from another
# so that the target branch always has a superset of the changes in the source branch
# do this by rebasing target off of source after each commit to source
SOURCE_BRANCH=example/generate-docs
BRANCH_TO_REBASE=task/31/models-A-D
# get the current branch in the format "* branch_name"
CURRENT_BRANCH=$(git branch | grep '^\* ')
# hypothetical completion: after a commit to source, rebase target onto it
if [ "$CURRENT_BRANCH" = "* $SOURCE_BRANCH" ]; then
    git checkout "$BRANCH_TO_REBASE" &&
    git rebase "$SOURCE_BRANCH" &&
    git checkout "$SOURCE_BRANCH"
fi
import numpy
import math
from osgeo import gdal, osr
import pygeoprocessing
import timeit

# nodata placeholders: -1 for float rasters, 255 (the type's max value) for uint8 rasters
FLOAT_NODATA = -1
UINT8_NODATA = 255